WorldWideScience

Sample records for large-scale information storage

  1. Advances in Large-Scale Solar Heating and Long Term Storage in Denmark

    DEFF Research Database (Denmark)

    Heller, Alfred

    2000-01-01

According to information from the European Large-Scale Solar Heating Network (see http://www.hvac.chalmers.se/cshp/), the area of installed solar collectors for large-scale applications in Europe is approximately 8 million m2, corresponding to about 4000 MW of thermal power. The 11 plants ... the last 10 years, and the corresponding cost per collector area for the final installed plant has been kept constant even though solar production has increased. Unfortunately, large-scale seasonal storage has not been able to keep up with the advances in solar technology, at least for pit water and gravel storage ... of the total 51 plants are equipped with long-term storage. In Denmark, 7 plants are installed, comprising approximately 18,000 m2 of collector area, with new plants planned. The development of these plants and the involved technologies will be presented in this paper, with a focus on the improvements for Danish...

  2. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  3. Correction: Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2017-01-01

Correction for ‘Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4’ by S. H. Jensen et al., Energy Environ. Sci., 2015, 8, 2471–2479.

  4. The viability of balancing wind generation with large scale energy storage

    International Nuclear Information System (INIS)

    Nyamdash, Batsaikhan; Denny, Eleanor; O'Malley, Mark

    2010-01-01

This paper studies the impact of combining wind generation and dedicated large scale energy storage on the conventional thermal plant mix and the CO2 emissions of a power system. Different strategies are proposed here in order to explore the best operational strategy for the wind and storage system in terms of its effect on the net load. Furthermore, the economic viability of combining wind and large scale storage is studied. The empirical application, using data for the Irish power system, shows that combined wind and storage reduces the participation of mid-merit plants and increases the participation of base-load plants. Moreover, storage negates some of the CO2 emissions reduction of the wind generation. It was also found that the wind and storage output can significantly reduce the variability of the net load under certain operational strategies, and that the optimal strategy depends on the installed wind capacity. However, in the absence of any supporting mechanism, none of the storage devices were economically viable when they were combined with the wind generation on the Irish power system. - Research Highlights: → Energy storage would displace peaking and mid-merit plant generation with base-load plant generation. → Energy storage may negate the CO2 emissions reduction that is due to increased wind generation. → Energy storage reduces the variation of the net load. → Under certain market conditions, merchant-type energy storage is not viable.

  5. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    Science.gov (United States)

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
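
For orientation, the quoted figure of roughly 3 terabytes per day follows from the stated recording parameters; the minimal Python sketch below assumes 320 channels sampled continuously at 32 kHz with 4 bytes stored per sample (all taken from the abstract) and ignores file-format overhead.

```python
# Back-of-the-envelope estimate of the daily raw data volume for the recording
# setup described above: 320 channels, 32 kHz sampling, 4 bytes per sample.
channels = 320
sample_rate_hz = 32_000
bytes_per_sample = 4
seconds_per_day = 24 * 60 * 60

bytes_per_day = channels * sample_rate_hz * bytes_per_sample * seconds_per_day
print(f"~{bytes_per_day / 1e12:.1f} TB/day")  # ~3.5 TB/day, consistent with the ~3 TB quoted
```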

  6. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology. Moreover, the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  7. Large-scale CO2 storage — Is it feasible?

    Directory of Open Access Journals (Sweden)

    Johansen H.

    2013-06-01

CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, In Salah, Snøhvit). The large-scale storage challenge (several gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples

  8. Large-scale CO2 storage — Is it feasible?

    Science.gov (United States)

    Johansen, H.

    2013-06-01

CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, In Salah, Snøhvit). The large-scale storage challenge (several gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples close to the

  9. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks

    Directory of Open Access Journals (Sweden)

    Guangwen Fan

    2015-09-01

Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.

  10. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.

    Science.gov (United States)

    Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi

    2015-09-18

Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.

  11. Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2015-01-01

Electricity storage is needed on an unprecedented scale to sustain the ongoing transition of electricity generation from fossil fuels to intermittent renewable energy sources like wind and solar power. Today pumped hydro is the only commercially viable large-scale electricity storage technology... ...large-scale electricity storage with a round-trip efficiency exceeding 70% and an estimated storage cost around 3 ¢ kW-1 h-1, i.e., comparable to pumped hydro and much better than previously proposed technologies...

  12. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm-2, respectively. The Fe/Cd RFB exhibits stable efficiencies with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of $108 kWh-1 for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention make the Fe/Cd RFB a promising solution for large-scale energy storage systems.
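
To put the quoted per-cycle capacity retention in perspective, the retention compounds geometrically over a cycle test; the short sketch below simply raises the 99.87% figure to a power and is an illustration, not a result from the paper.

```python
# Compounded capacity retention assuming a constant 99.87% retention per cycle.
retention_per_cycle = 0.9987
for cycles in (50, 100, 500):
    remaining = retention_per_cycle ** cycles
    print(f"after {cycles:3d} cycles: {remaining:.1%} of initial capacity")
# Roughly 93.7% after 50 cycles, 87.8% after 100, and 52.2% after 500.
```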

  13. A novel iron-lead redox flow battery for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Ren, Y. X.

    2017-04-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies for the massive utilization of intermittent renewables especially wind and solar energy. This work presents a novel redox flow battery that utilizes inexpensive and abundant Fe(II)/Fe(III) and Pb/Pb(II) redox couples as redox materials. Experimental results show that both the Fe(II)/Fe(III) and Pb/Pb(II) redox couples have fast electrochemical kinetics in methanesulfonic acid, and that the coulombic efficiency and energy efficiency of the battery are, respectively, as high as 96.2% and 86.2% at 40 mA cm-2. Furthermore, the battery exhibits stable performance in terms of efficiencies and discharge capacities during the cycle test. The inexpensive redox materials, fast electrochemical kinetics and stable cycle performance make the present battery a promising candidate for large-scale energy storage applications.

  14. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carns, Philip; Harms, Kevin; Jenkins, John; Mubarak, Misbah; Ross, Robert; Carothers, Christopher

    2016-05-02

Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
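
As a rough sense of scale for the reported rebuild rate, the per-server share under the ideal linear-scaling assumption mentioned in the abstract can be computed directly; the sketch below simply divides the aggregate figure by the node count and is illustrative only.

```python
# Per-server rebuild bandwidth implied by the reported aggregate rate, assuming
# the rebuild load is spread evenly across all servers (ideal linear scaling).
aggregate_rebuild_gib_per_s = 410.0
num_servers = 1024

per_server_mib_per_s = aggregate_rebuild_gib_per_s / num_servers * 1024
print(f"~{per_server_mib_per_s:.0f} MiB/s of rebuild traffic per server")  # ~410 MiB/s
```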

  15. The role of large scale storage in a GB low carbon energy future: Issues and policy challenges

    International Nuclear Information System (INIS)

    Gruenewald, Philipp; Cockerill, Tim; Contestabile, Marcello; Pearson, Peter

    2011-01-01

Large scale storage offers the prospect of capturing and using excess electricity within a low carbon energy system, which otherwise might have to be wasted. Incorporating the role of storage into current scenario tools is challenging, because it requires high temporal resolution to reflect the effects of intermittent sources on system balancing. This study draws on results from a model with such resolution. It concludes that large scale storage could become economically viable for scenarios with high penetration of renewables. As the proportion of intermittent sources increases, the optimal type of storage shifts towards solutions with low energy related costs, even at the expense of efficiency. However, a range of uncertainties have been identified, concerning storage technology development, the regulatory environment, alternatives to storage and the stochastic uncertainty of year-on-year revenues. All of these negatively affect the cost of finance and the chances of successful market uptake. We argue, therefore, that, if the possible wider system and social benefits from the presence of storage are to be achieved, stronger and more strategic policy support may be necessary. More work on the social and system benefits of storage is needed to gauge the appropriate extent of support measures. - Highlights: → Time-resolved modelling shows future potential for large scale power storage in GB. → The value of storage is highly sensitive to a range of parameters. → Uncertainty over the revenue from storage could pose a barrier to investment. → To realise wider system benefits, stronger and more strategic policy support may be necessary.

  16. Large temporal scale and capacity subsurface bulk energy storage with CO2

    Science.gov (United States)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storage of excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  17. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    Energy Technology Data Exchange (ETDEWEB)

    1982-05-01

The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.) and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  18. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    Science.gov (United States)

    Brett, Gareth; Barnett, Matthew

    2014-12-01

Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries, assembled in a novel process, and is one of the few storage technologies that can be delivered at large scale with no geographical constraints. The system uses no exotic materials or scarce resources, and all major components have a proven lifetime of 25+ years. The system can also integrate low grade waste heat to increase power output. Founded in 2005, Highview Power Storage is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot plant scale (300 kW/2.5 MWh), hosted at SSE's (Scottish & Southern Energy) 80 MW Biomass Plant in Greater London, which was partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi-MW commercial reference plants in the UK and abroad.

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
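
For context on the "simple and disconnected" closed-form models the abstract refers to, a textbook approximation of MTTDL for triple-replicated data under independent exponential failure and repair rates is sketched below; the MTTF and MTTR values are illustrative assumptions, and this is the kind of baseline the proposed simulation framework goes beyond, not the framework itself.

```python
# Classical closed-form MTTDL approximation for a 3-replica group, assuming
# independent exponential failures (mean MTTF) and repairs (mean MTTR) with
# MTTR << MTTF:  MTTDL ~ MTTF**3 / (6 * MTTR**2).
mttf_hours = 1_000_000   # assumed per-device mean time to failure
mttr_hours = 24          # assumed mean time to re-replicate after a failure

mttdl_hours = mttf_hours ** 3 / (6 * mttr_hours ** 2)
print(f"MTTDL ~ {mttdl_hours / 8760:.1e} years for a single 3-replica group")
```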

  20. A central solar-industrial waste heat heating system with large scale borehole thermal storage

    NARCIS (Netherlands)

    Guo, F.; Yang, X.; Xu, L.; Torrens, I.; Hensen, J.L.M.

    2017-01-01

In this paper, new research on seasonal thermal storage is introduced. This study aims to maximize the utilization of renewable energy sources and industrial waste heat (IWH) for urban district heating systems in both heating and non-heating seasons through the use of large-scale seasonal thermal

  1. Valuation framework for large scale electricity storage in a case with wind curtailment

    International Nuclear Information System (INIS)

    Loisel, Rodica; Mercier, Arnaud; Gatzen, Christoph; Elms, Nick; Petric, Hrvoje

    2010-01-01

This paper investigates the value of large scale applications of electricity storage in selected European power systems in the context of wind generation confronted with a grid bottleneck. It analyzes the market value to 2030 of two storage technologies, assuming the market situation projected for Germany and France. The analysis assesses the evolution of storage economics based on the net present value of cash flows. Sensitivities to market and regulatory drivers of value are assessed, e.g. electricity price spreads, ancillary services revenues, wind curtailment and the level of carbon prices. The paper concludes by suggesting possible ways to improve the competitiveness of electricity storage, such as research and development and deployment programmes, and changes to the design of power markets and regulatory arrangements to enable storage owners to better capture the benefits of storage. Such changes would allow electricity storage, where economically viable, to play a critical role in establishing a future sustainable European power system. - Research highlights: → CAES and PHS are not cost-effective under the current market design in France and Germany. → Market reform scenarios are run to reward bottleneck avoidance and ancillary reserves. → Storage is profitable when all potential socio-economic benefits are aggregated. → RD&D programmes for storage improvement are economically and socially justified.

  2. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    Science.gov (United States)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  3. Assessing the value of storage services in large-scale multireservoir systems

    Science.gov (United States)

    Tilmant, A.; Arjoon, D.; Guilherme, G. F.

    2012-12-01

both countries, the highly contrasted hydrologic regime of the Euphrates river could only be dealt with through storage. However, due to political tensions, those projects were carried out without much cooperation and coordination among riparian countries. The development started in the late 1960s with the construction of the head reservoir in Turkey (Keban dam) and the most downstream reservoir in Syria (Tabqa dam). Thirty years later, five other dams in both countries had been commissioned, changing the economy of this region through the export of hydroelectric power (7812 MW) and agricultural products (cotton and cereals). The operating policies and marginal water values of this multipurpose multireservoir system are determined using Stochastic Dual Dynamic Programming, an optimization algorithm that can handle large-scale reservoir operation problems while keeping an individual representation of the hydraulic infrastructure and the demand sites. The analysis of the simulation results reveals that the average value of storage for the entire cascade of reservoirs is around 420 million US$/a, which is 18% of the annual short-run benefits of the system (2.26 billion US$/a).

  4. LASSIE: the large analogue signal and scaling information environment for FAIR

    International Nuclear Information System (INIS)

    Hoffmann, T.; Braeuning, H.; Haseitl, R.

    2012-01-01

At FAIR, the Facility for Antiproton and Ion Research, several new accelerators and storage rings such as the SIS-100, HESR, CR, the inter-connecting HEBT beam lines, the S-FRS and experiments will be built. All of these installations are equipped with beam diagnostic devices and other components which deliver time-resolved analogue signals to show the status, quality and performance of the accelerators. These signals can originate from particle detectors such as ionization chambers and plastic scintillators, but also from adapted output signals of transformers, collimators, magnet functions, RF cavities and others. To visualize and precisely correlate the time axis of all input signals, a dedicated FESA-based data acquisition and analysis system named LASSIE, the Large Analogue Signal and Scaling Information Environment, is currently being developed. The main operation mode of LASSIE is currently pulse counting with latching VME scaler boards. Enhancements for ADC, QDC, or TDC digitization are foreseen for the future. The concept, features and challenges of this large distributed data acquisition system are presented. (authors)

  5. Centralized manure digestion. Selection of locations and estimation of costs of large-scale manure storage application

    International Nuclear Information System (INIS)

    1995-03-01

A study to assess the possibilities and the consequences of using existing Dutch large-scale manure silos at centralised anaerobic digestion plants (CAD plants) for manure and energy-rich organic wastes was carried out. Reconstruction of these large-scale manure silos into digesters for a CAD plant is not self-evident due to the high height/diameter ratio of these silos and the extra investments that have to be made for additional facilities for roofing, insulation, mixing and heating. From the results of an inventory and selection of large-scale manure silos with a storage capacity above 1,500 m3, it appeared that there are 21 locations in The Netherlands that qualify for realisation of a CAD plant with a processing capacity of 100 m3 of biomass (80% manure, 20% additives) per day. These locations are found in particular in the 'shortage areas' for manure fertilisation in the Dutch provinces of Groningen and Drenthe. Three of these 21 locations with large-scale silos are considered the most suitable for realisation of a large-scale CAD plant. The selection is based on an optimal scale for a CAD plant of 300 m3 of material (80% manure, 20% additives) to be processed per day and the most suitable consuming markets for the biogas produced at the CAD plant. The three locations are at Middelharnis, Veendam, and Klazinaveen. Applying the conditions as used in this study and accounting for all costs for transport of manure, additives and end-product, including the costs for the storage facilities, a break-even operation might be realised at a minimum income for the additives of approximately 50 Dutch guilders per m3 (including TAV). This income price is considerably lower than the prevailing costs for tipping or processing of organic wastes in The Netherlands. This study revealed that a break-even exploitation of a large-scale CAD plant for the processing of manure with energy-rich additives is possible. (Abstract Truncated)

  6. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  7. Energy modeling and analysis for optimal grid integration of large-scale variable renewables using hydrogen storage in Japan

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Otsuki, Takashi; Fujii, Yasumasa

    2015-01-01

Although the extensive introduction of VRs (variable renewables) will play an essential role in resolving energy and environmental issues in Japan after the Fukushima nuclear accident, its large-scale integration would pose a technical challenge in grid management; as one of the technical countermeasures, hydrogen storage, as well as rechargeable batteries, receives much attention for controlling the intermittency of VR power output. For properly planning renewable energy policies, energy system modeling is important to quantify and qualitatively understand its potential benefits and impacts. This paper analyzes the optimal grid integration of large-scale VRs using hydrogen storage in Japan by developing a high time-resolution optimal power generation mix model. Simulation results suggest that the installation of hydrogen storage is promoted by both its cost reduction and CO2 regulation policy. In addition, hydrogen storage turns out to be suitable for storing VR energy over long periods of time. Finally, through a sensitivity analysis of rechargeable battery cost, hydrogen storage is found to be economically competitive with rechargeable batteries; the cost of both technologies should be more elaborately recognized for formulating effective energy policies to integrate massive VRs into the country's power system in an economical manner. - Highlights: • Authors analyze hydrogen storage coupled with VRs (variable renewables). • Simulation analysis is done by developing an optimal power generation mix model. • Hydrogen storage installation is promoted by its cost decline and CO2 regulation. • Hydrogen storage is suitable for storing VR energy over long periods of time. • Hydrogen storage is economically competitive with rechargeable batteries

  8. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher-resolution simulation runs.

  9. Requirements and principles for the implementation and construction of large-scale geographic information systems

    Science.gov (United States)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  10. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
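
As background for the link-based methods the abstract builds on, a minimal power-iteration implementation of classical PageRank is sketched below; this is the standard baseline algorithm only, not the semi-supervised PageRank (SSP) approach proposed in the paper, and the small example graph is purely illustrative.

```python
import numpy as np

# Classical PageRank via power iteration: the link-based baseline that the
# semi-supervised approach described above extends.  Not the SSP algorithm itself.
def pagerank(adj: np.ndarray, damping: float = 0.85, tol: float = 1e-10) -> np.ndarray:
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize; nodes with no out-links distribute rank uniformly.
    trans = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * trans.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Tiny illustrative 4-node graph (adjacency matrix, entry [i, j] = edge i -> j).
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj).round(3))  # ranks sum to 1
```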

  11. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  12. Operational design and pressure response of large-scale compressed air energy storage in porous formations

    Science.gov (United States)

    Wang, Bo; Bauer, Sebastian

    2017-04-01

With the rapid growth of energy production from intermittent renewable sources like wind and solar power plants, large-scale energy storage options are required to compensate for fluctuating power generation on different time scales. Compressed air energy storage (CAES) in porous formations is seen as a promising option for balancing short-term diurnal fluctuations. CAES is a power-to-power energy storage, which converts electricity to mechanical energy, i.e. highly pressurized air, and stores it in the subsurface. This study aims at designing the storage setup and quantifying the pressure response of a large-scale CAES operation in a porous sandstone formation, thus assessing the feasibility of this storage option. For this, numerical modelling of a synthetic site and a synthetic operational cycle is applied. A hypothetical CAES scenario using a typical anticline structure in northern Germany was investigated. The top of the storage formation is at 700 m depth and the thickness is 20 m. The porosity and permeability were assumed to be homogeneously distributed, with values of 0.35 and 500 mD, respectively. According to the specifications of the Huntorf CAES power plant, a gas turbine producing 321 MW of power with a minimum inlet pressure of 43 bars at an air mass flow rate of 417 kg/s was assumed. Pressure loss in the gas wells was accounted for using an analytical solution, which defines a minimum bottom-hole pressure of 47 bars. Two daily extraction cycles of 6 hours each were set to the early morning and the late afternoon in order to bypass the massive solar energy production around noon. A two-year initial filling of the reservoir with air and ten years of daily cyclic operation were numerically simulated using the Eclipse E300 reservoir simulator. The simulation results show that using 12 wells the storage formation with a permeability of 500 mD can support the required 6-hour continuous power output of 321 MW, which corresponds to an energy output of 3852 MWh per
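
The quoted daily energy output follows directly from the assumed turbine rating and extraction schedule; the one-line check below (two 6-hour extraction cycles per day at a constant 321 MW) is included only to make that arithmetic explicit.

```python
# Daily delivered energy implied by the operating schedule described above:
# two 6-hour extraction cycles per day at a constant 321 MW turbine output.
power_mw = 321
hours_per_cycle = 6
cycles_per_day = 2

print(f"{power_mw * hours_per_cycle * cycles_per_day} MWh per day")  # 3852 MWh
```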

  13. Monitoring and Information Fusion for Search and Rescue Operations in Large-Scale Disasters

    National Research Council Canada - National Science Library

    Nardi, Daniele

    2002-01-01

    ... for information fusion with application to search-and-rescue and large scale disaster relief. The objective is to develop and to deploy tools to support the monitoring activities in an intervention caused by a large-scale disaster...

  14. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
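
A minimal illustration of the logarithmic mapping described above is the pointwise transform log(1 + δ) of the overdensity field, which Gaussianizes a lognormal-like density field before two-point statistics are measured; the sketch below uses a synthetic lognormal field purely as a stand-in and is not the proposal's analysis pipeline.

```python
import numpy as np

# Toy demonstration of the logarithmic (Gaussianizing) mapping of an overdensity
# field delta.  A lognormal field stands in for the non-linear density field.
rng = np.random.default_rng(0)
sigma = 1.0
g = rng.normal(0.0, sigma, size=(64, 64, 64))
delta = np.exp(g - sigma**2 / 2) - 1.0        # mean ~0, bounded below by -1

log_field = np.log1p(delta)                   # log(1 + delta), well defined for delta > -1

def skewness(x):
    x = x - x.mean()
    return (x**3).mean() / x.std()**3

print(f"skewness before mapping: {skewness(delta):+.2f}")      # strongly positive
print(f"skewness after mapping:  {skewness(log_field):+.2f}")  # close to zero (Gaussianized)
```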

  15. Development of large scale fusion plasma simulation and storage grid on JAERI Origin3800 system

    International Nuclear Information System (INIS)

    Idomura, Yasuhiro; Wang, Xin

    2003-01-01

Under the Numerical EXperiment of Tokamak (NEXT) research project, various fluid, particle, and hybrid codes have been developed. These codes require a computational environment which consists of high performance processors, a high speed storage system, and a high speed parallelized visualization system. In this paper, the performance of the JAERI Origin3800 system is examined from the point of view of these requirements. In the performance tests, it is shown that the representative particle and fluid codes operate with 15-40% processing efficiency on up to 512 processors. A storage area network (SAN) provides high speed parallel data transfer. A parallel visualization system enables order-of-magnitude faster visualization of large-scale simulation data compared with the previous graphics workstations. Accordingly, an extremely advanced simulation environment is realized on the JAERI Origin3800 system. Recently, development of a storage grid is underway in order to improve the computational environment for remote users. The storage grid is constructed by a combination of SAN and a wavelength division multiplexer (WDM). The preliminary tests show that, compared with the existing data transfer methods, it enables dramatically higher speed data transfer (∼100 Gbps) over a wide area network. (author)

  16. Large-scale fabrication of pseudocapacitive glass windows that combine electrochromism and energy storage.

    Science.gov (United States)

    Yang, Peihua; Sun, Peng; Chai, Zhisheng; Huang, Langhuan; Cai, Xiang; Tan, Shaozao; Song, Jinhui; Mai, Wenjie

    2014-10-27

Multifunctional glass windows that combine energy storage and electrochromism have been obtained by facile thermal evaporation and electrodeposition methods. For example, WO3 films that had been deposited on fluorine-doped tin oxide (FTO) glass exhibited a high specific capacitance of 639.8 F g(-1). Their color changed from transparent to deep blue with an abrupt decrease in optical transmittance from 91.3% to 15.1% at a wavelength of 633 nm when a voltage of -0.6 V (vs. Ag/AgCl) was applied, demonstrating their excellent energy-storage and electrochromism properties. As a second example, a polyaniline-based pseudocapacitive glass was also developed, and its color can change from green to blue. A large-scale pseudocapacitive WO3-based glass window (15×15 cm(2)) was fabricated as a prototype. Such smart pseudocapacitive glass windows show great potential in functioning as electrochromic windows and concurrently powering electronic devices, such as mobile phones or laptops. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Large mass storage facility

    International Nuclear Information System (INIS)

    Peskin, A.M.

    1978-01-01

    The report of a committee to study the questions surrounding possible acquisition of a large mass-storage device is presented. The current computing environment at BNL and justification for an online large mass storage device are briefly discussed. Possible devices to meet the requirements of large mass storage are surveyed, including future devices. The future computing needs of BNL are prognosticated. 2 figures, 4 tables

  18. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    Science.gov (United States)

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in return safeguard social welfare and economic benefit at their optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups and for mean annual precipitation levels is derived by the SCS-CN method. Using the overlay and decision tree algorithms
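
For reference, the SCS-CN (curve number) method named above relates event rainfall to direct runoff through a single retention parameter; the sketch below gives the standard metric-unit formulation with the conventional initial abstraction Ia = 0.2S, and the curve number and rainfall values are illustrative assumptions only.

```python
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff Q (mm) from event rainfall P (mm) using the standard
    SCS-CN method with initial abstraction Ia = 0.2*S (metric form)."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative values only: a 60 mm storm on a land-use/soil combination with CN = 75.
print(f"{scs_cn_runoff(60.0, 75.0):.1f} mm of direct runoff")  # ~14.5 mm
```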

  19. Characterization of Pliocene and Miocene Formations in the Wilmington Graben, Offshore Los Angeles, for Large-Scale Geologic Storage of CO₂

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Michael [Geomechanics Technologies, Incorporated, Monrovia, CA (United States)

    2014-12-08

Geomechanics Technologies has completed a detailed characterization study of the Wilmington Graben offshore Southern California area for large-scale CO₂ storage. This effort has included an evaluation of existing wells in both State and Federal waters, field acquisition of about 175 km (109 mi) of new seismic data, new well drilling, and development of integrated 3D geologic, geomechanics, and fluid flow models for the area. The geologic analysis indicates that more than 796 MMt of storage capacity is available within the Pliocene and Miocene formations in the Graben for midrange geologic estimates (P50). Geomechanical analyses indicate that injection can be conducted without significant risk of surface deformation, induced stresses or fault activation. Numerical analysis of fluid migration indicates that injection into the Pliocene Formation at depths of 1525 m (5000 ft) would lead to undesirable vertical migration of the CO₂ plume. Recent well drilling, however, indicates that deeper sand is present at depths exceeding 2135 m (7000 ft), which could be viable for large-volume storage. For vertical containment, injection would need to be limited to about 250,000 metric tons per year per well, would need to be placed at depths greater than 7000 ft, and would need to be placed in new wells located at least 1 mile from any existing offset wells. As a practical matter, this would likely limit storage operations in the Wilmington Graben to about 1 million tons per year or less. A quantitative risk analysis for the Wilmington Graben indicates that such large-scale CO₂ storage in the area would represent higher risk than other similar-size projects in the US and overseas.

  20. Cosmological parameters from large scale structure - geometric versus shape information

    CERN Document Server

    Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y

    2010-01-01

The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that contrary to popular beliefs, the upper limit on the neutrino mass m_\nu ...

  1. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    Energy Technology Data Exchange (ETDEWEB)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is to look at the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  2. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Lingguo Kong

    2015-01-01

An AC-linked large-scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application was proposed in this paper. The wind energy conversion system (WECS) and PV generation system are the primary power sources of the hybrid system. The ES system, including battery and fuel cell (FC), is used as a backup and a power regulation unit to ensure continuous power supply and to take care of the intermittent nature of wind and photovoltaic resources. A static synchronous compensator (STATCOM) is employed to support the AC-linked bus voltage and improve the low voltage ride through (LVRT) capability of the proposed system. An overall coordinated power control strategy is designed to manage real-power and reactive-power flows among the different energy sources, the storage unit, and the STATCOM system in the hybrid system. A simulation case study of the large-scale hybrid energy conversion system, carried out on the Western System Coordinating Council (WSCC) 3-machine, 9-bus test system, has been developed using the DIgSILENT/Power Factory software platform. The hybrid system performance under different scenarios has been verified by simulation studies using practical load demand profiles and real weather data.

  3. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The process model is presented through a large-scale PD experiment in the Danish healthcare sector. We reflect on our experiences from this experiment.

  4. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  5. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  6. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    Science.gov (United States)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has ushered in the era of large data. To acquire information effectively in the era of large data, it is necessary to optimize existing RDF semantic data storage and realize effective querying of the various data. This paper discusses the storage optimization of RDF semantic data in the large data setting.

  7. A review of large-scale solar heating systems in Europe

    International Nuclear Information System (INIS)

    Fisch, M.N.; Guigas, M.; Dalenback, J.O.

    1998-01-01

    Large-scale solar applications benefit from the effect of scale. Compared to small solar domestic hot water (DHW) systems for single-family houses, the solar heat cost can be cut by at least a third. The most interesting projects for replacing fossil fuels and reducing CO₂ emissions are solar systems with seasonal storage in combination with gas or biomass boilers. In the framework of the EU-APAS project Large-scale Solar Heating Systems, thirteen existing plants in six European countries have been evaluated. The yearly solar gains of the systems are between 300 and 550 kWh per m² of collector area. The investment cost of solar plants with short-term storage varies from 300 up to 600 ECU per m². Systems with seasonal storage show investment costs twice as high. Results of studies concerning the market potential for solar heating plants, taking new collector concepts and industrial production into account, are presented. Site-specific studies and predesign of large-scale solar heating plants in six European countries for housing developments show a 50% cost reduction compared to existing projects. The cost-benefit ratio for the planned systems with long-term storage is between 0.7 and 1.5 ECU per kWh per year. (author)

  8. Visual attention mitigates information loss in small- and large-scale neural codes.

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the
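
    As a rough illustration of the storage-distribution idea described above, the sketch below (Python) distributes storage capacity over topographic-index classes within one grid cell, with the maximum capacity proportional to the topographic-index range and scaled by a single parameter. The function name, class count, and synthetic index values are illustrative assumptions, not the TRG implementation.

```python
import numpy as np

def distribute_storage(topo_index, scale_param, n_classes=10):
    """Illustrative TOPMODEL-style storage distribution for one grid cell.

    Assumption-based sketch: maximum storage capacity is taken proportional
    to the range of the topographic index, scaled by a single parameter;
    each topographic-index class then receives a capacity that grows
    linearly across that range.
    """
    ti_min, ti_max = topo_index.min(), topo_index.max()
    max_storage = scale_param * (ti_max - ti_min)          # e.g. in mm

    edges = np.linspace(ti_min, ti_max, n_classes + 1)
    counts, _ = np.histogram(topo_index, bins=edges)
    area_fraction = counts / topo_index.size               # share of cell area per class

    centres = 0.5 * (edges[:-1] + edges[1:])
    class_capacity = max_storage * (centres - ti_min) / (ti_max - ti_min)
    return area_fraction, class_capacity

# Synthetic topographic-index values for one large-scale grid cell
rng = np.random.default_rng(42)
ti = rng.gamma(shape=3.0, scale=2.0, size=10_000)
frac, cap = distribute_storage(ti, scale_param=25.0)
print(frac.round(3))
print(cap.round(1))
```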

  10. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    Science.gov (United States)

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  11. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of Energy Storage System (ESS) for the operation planning of power systems with large scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of ESS is optimized considering operation cost of conventional generators, capital cost of ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the...

  12. Large-scale Health Information Database and Privacy Protection.

    Science.gov (United States)

    Yamamoto, Ryuichi

    2016-09-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law that aims to ensure healthcare for the elderly; however, there is no mention in the act about using these databases for public interest in general. Thus, an initiative for such use must proceed carefully and attentively. The PMDA projects that collect a large amount of medical record information from large hospitals and the health database development project that the Ministry of Health, Labour and Welfare (MHLW) is working on will soon begin to operate according to a general consensus; however, the validity of this consensus can be questioned if issues of anonymity arise. The likelihood that researchers conducting a study for public interest would intentionally invade the privacy of their subjects is slim. However, patients could develop a sense of distrust about their data being used since legal requirements are ambiguous. Nevertheless, without using patients' medical records for public interest, progress in medicine will grind to a halt. Proper legislation that is clear for both researchers and patients will therefore be highly desirable. A revision of the Act on the Protection of Personal Information is currently in progress. In reality, however, privacy is not something that laws alone can protect; it will also require guidelines and self-discipline. We now live in an information capitalization age. I will introduce the trends in legal reform regarding healthcare information and discuss some basics to help people properly face the issue of health big data and privacy

  13. Large-scale hydrological model river storage and discharge correction using a satellite altimetry-based discharge product

    Science.gov (United States)

    Emery, Charlotte Marie; Paris, Adrien; Biancamaria, Sylvain; Boone, Aaron; Calmant, Stéphane; Garambois, Pierre-André; Santos da Silva, Joecila

    2018-04-01

    Land surface models (LSMs) are widely used to study the continental part of the water cycle. However, even though their accuracy is increasing, inherent model uncertainties cannot be avoided. In the meantime, remotely sensed observations of continental water cycle variables such as soil moisture, lakes and river elevations are becoming more frequent and accurate. Therefore, these two different types of information can be combined, using data assimilation techniques, to reduce a model's uncertainties in its state variables or/and in its input parameters. The objective of this study is to present a data assimilation platform that assimilates into the large-scale ISBA-CTRIP LSM a punctual river discharge product, derived from ENVISAT nadir altimeter water elevation measurements and rating curves, over the whole Amazon basin. To deal with the scale difference between the model and the observation, the study also presents an initial development for a localization treatment that limits the impact of each observation to areas close to it and in the same hydrological network. This assimilation platform is based on the ensemble Kalman filter and can correct either the CTRIP river water storage or the discharge. Root mean square error (RMSE) compared to gauge discharges is reduced globally by up to 21 %, and at Óbidos, near the outlet, RMSE is reduced by up to 52 % compared to ENVISAT-based discharge. Finally, it is shown that localization improves results along the main tributaries.
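
    The assimilation step underlying such a platform can be sketched as a stochastic ensemble Kalman filter update. The following Python sketch is not the ISBA-CTRIP platform itself: the observation operator, error statistics, and the element-wise localization mask are placeholder assumptions chosen only to show the structure of the analysis step.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_var, loc_mask=None, seed=0):
    """Stochastic EnKF update of an ensemble of river storage/discharge states.

    ensemble : (n_state, n_members) forecast ensemble
    obs      : (n_obs,) altimetry-derived discharge observations
    H        : (n_obs, n_state) observation operator
    obs_var  : scalar observation-error variance
    loc_mask : optional (n_state, n_obs) 0/1 localization mask limiting each
               observation's influence to nearby reaches of the same network
    """
    rng = np.random.default_rng(seed)
    n_state, n_mem = ensemble.shape
    Hx = H @ ensemble
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = Hx - Hx.mean(axis=1, keepdims=True)

    Pxy = X @ Y.T / (n_mem - 1)                       # state-obs cross covariance
    if loc_mask is not None:
        Pxy *= loc_mask                               # Schur-product localization
    Pyy = Y @ Y.T / (n_mem - 1) + obs_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain

    perturbed_obs = obs[:, None] + np.sqrt(obs_var) * rng.standard_normal((len(obs), n_mem))
    return ensemble + K @ (perturbed_obs - Hx)

# Tiny synthetic example: 5 river reaches, 20 members, 2 observed reaches
rng = np.random.default_rng(1)
ens = 100 + 10 * rng.standard_normal((5, 20))         # discharge ensemble, m3/s
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 4] = 1.0    # observe reaches 1 and 4
analysis = enkf_analysis(ens, obs=np.array([95.0, 110.0]), H=H, obs_var=4.0)
print(analysis.mean(axis=1).round(1))
```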

  14. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to LCG/COOL databases. ONASIC avoids possible backpressure from online database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an offline database with history records. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.

  15. Large Scale Production of Densified Hydrogen Using Integrated Refrigeration and Storage

    Science.gov (United States)

    Notardonato, William U.; Swanger, Adam Michael; Jumper, Kevin M.; Fesmire, James E.; Tomsik, Thomas M.; Johnson, Wesley L.

    2017-01-01

    Recent demonstration of advanced liquid hydrogen storage techniques using Integrated Refrigeration and Storage (IRAS) technology at NASA Kennedy Space Center led to the production of large quantities of solid densified liquid and slush hydrogen in a 125,000 L tank. Production of densified hydrogen was performed at three different liquid levels, and LH2 temperatures were measured by twenty silicon diode temperature sensors. System energy balances and solid mass fractions are calculated. Experimental data reveal that hydrogen temperatures dropped well below the triple point during testing (by up to 1 K) and were continuing to trend downward prior to system shutdown. Sub-triple-point temperatures were seen to evolve in a time-dependent manner along the length of the horizontal, cylindrical vessel. The twenty silicon diode temperature sensors were recorded over approximately one month for testing at two different fill levels (33% and 67%). The phenomenon, observed at both fill levels, is described and explained in detail herein. The implications of using IRAS for energy storage, propellant densification, and future cryofuel systems are discussed.

  16. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
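
    A toy version of the coordination scheme named above is sketched below: each storage unit solves a local subproblem given a shared price, and the price is updated by a subgradient step until aggregate consumption meets the balancing target. The quadratic unit costs, bounds, and step size are assumptions for illustration, not the paper's model.

```python
import numpy as np

def unit_response(price, a, b, p_min, p_max):
    """Local subproblem of one flexible storage unit with quadratic cost
    a*p**2 + b*p: minimise cost minus price*p, subject to power limits."""
    return float(np.clip((price - b) / (2 * a), p_min, p_max))

def dual_decomposition(target, units, step=0.05, iters=500, tol=1e-3):
    """Coordinate the units by iterating on the shared price (dual variable):
    each unit responds to the price, and the price moves to close the gap
    between aggregate consumption and the balancing target."""
    price = 0.0
    for _ in range(iters):
        p = np.array([unit_response(price, *u) for u in units])
        imbalance = p.sum() - target
        if abs(imbalance) < tol:
            break
        price -= step * imbalance        # subgradient step on the dual
    return price, p

# Three units with (a, b, p_min, p_max); balance 6 MW of consumption among them
units = [(0.5, 1.0, 0.0, 4.0), (0.8, 0.5, 0.0, 3.0), (0.3, 1.5, 0.0, 5.0)]
price, dispatch = dual_decomposition(target=6.0, units=units)
print(round(price, 3), dispatch.round(2), round(dispatch.sum(), 2))
```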

  17. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm

  18. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
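
    The storage bottleneck referred to above stems from the Kronecker (tensor product) structure of the GLAM design matrix. The sketch below is not the authors' algorithm; it only demonstrates the standard array-arithmetic identity (A ⊗ B) vec(X) = vec(B X Aᵀ) that design-matrix-free methods exploit to evaluate the linear predictor without ever forming the full matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, p1, n2, p2 = 100, 15, 80, 12
A = rng.standard_normal((n1, p1))     # marginal design matrix, dimension 1
B = rng.standard_normal((n2, p2))     # marginal design matrix, dimension 2
beta = rng.standard_normal((p2, p1))  # coefficient array

# Array arithmetic: never forms the (n1*n2) x (p1*p2) Kronecker matrix
eta_array = B @ beta @ A.T

# Explicit Kronecker product for comparison: quickly infeasible at scale
eta_kron = np.kron(A, B) @ beta.reshape(-1, order="F")

print(np.allclose(eta_array.reshape(-1, order="F"), eta_kron))  # True
```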

  19. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    underestimation of wet-to-dry-season droughts and snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems.

    We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, parametrisation of storage processes also requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  20. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  1. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  2. Direction of information flow in large-scale resting-state networks is frequency-dependent.

    Science.gov (United States)

    Hillebrand, Arjan; Tewarie, Prejaas; van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A; van Straaten, Elisabeth C W; Stam, Cornelis J

    2016-04-05

    Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these interactions, where directionality was inferred from time series of beamformer-reconstructed estimates of neuronal activation, using a recently proposed measure of phase transfer entropy. We observed well-organized posterior-to-anterior patterns of information flow in the higher-frequency bands (alpha1, alpha2, and beta band), dominated by regions in the visual cortex and posterior default mode network. Opposite patterns of anterior-to-posterior flow were found in the theta band, involving mainly regions in the frontal lobe that were sending information to a more distributed network. Many strong information senders in the theta band were also frequent receivers in the alpha2 band, and vice versa. Our results provide evidence that large-scale resting-state patterns of information flow in the human brain form frequency-dependent reentry loops that are dominated by flow from parieto-occipital cortex to integrative frontal areas in the higher-frequency bands, which is mirrored by a theta band anterior-to-posterior flow.

  3. A review on technology maturity of small scale energy storage technologies★

    Directory of Open Access Journals (Sweden)

    Nguyen Thu-Trang

    2017-01-01

    Full Text Available This paper reviews the current status of energy storage technologies which have the higher potential to be applied in small scale energy systems. Small scale energy systems can be categorized as ones that are able to supply energy in various forms for a building, or a small area, or a limited community, or an enterprise; typically, they are end-user systems. Energy storage technologies are classified based on their form of energy stored. A two-step evaluation is proposed for selecting suitable storage technologies for small scale energy systems, including identifying possible technical options, and addressing techno-economic aspects. Firstly, a review on energy storage technologies at small scale level is carried out. Secondly, an assessment of technology readiness level (TRL is conducted. The TRLs are ranked according to information gathered from literature review. Levels of market maturity of the technologies are addressed by taking into account their market development stages through reviewing published materials. The TRLs and the levels of market maturity are then combined into a technology maturity curve. Additionally, market driving factors are identified by using different stages in product life cycle. The results indicate that lead-acid, micro pumped hydro storage, NaS battery, NiCd battery, flywheel, NaNiCl battery, Li-ion battery, and sensible thermal storage are the most mature technologies for small scale energy systems. In the near future, hydrogen fuel cells, thermal storages using phase change materials and thermochemical materials are expected to become more popular in the energy storage market.

  4. Large-scale Health Information Database and Privacy Protection*1

    Science.gov (United States)

    YAMAMOTO, Ryuichi

    2016-01-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law that aims to ensure healthcare for the elderly; however, there is no mention in the act about using these databases for public interest in general. Thus, an initiative for such use must proceed carefully and attentively. The PMDA*2 projects that collect a large amount of medical record information from large hospitals and the health database development project that the Ministry of Health, Labour and Welfare (MHLW) is working on will soon begin to operate according to a general consensus; however, the validity of this consensus can be questioned if issues of anonymity arise. The likelihood that researchers conducting a study for public interest would intentionally invade the privacy of their subjects is slim. However, patients could develop a sense of distrust about their data being used since legal requirements are ambiguous. Nevertheless, without using patients’ medical records for public interest, progress in medicine will grind to a halt. Proper legislation that is clear for both researchers and patients will therefore be highly desirable. A revision of the Act on the Protection of Personal Information is currently in progress. In reality, however, privacy is not something that laws alone can protect; it will also require guidelines and self-discipline. We now live in an information capitalization age. I will introduce the trends in legal reform regarding healthcare information and discuss some basics to help people properly face the issue of health big data and privacy

  5. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
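
    A miniature example in the spirit of the unit, using SQLite with hypothetical table and column names (the actual seqdb_demo schema is not reproduced here): annotated sequences are stored relationally and a taxon- and length-restricted subset library is extracted for a focused similarity search.

```python
import sqlite3

# Hypothetical miniature protein-sequence database (names are not the
# seqdb_demo schema): store annotated sequences, then extract a subset
# library for a focused similarity search.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE protein (
                   acc      TEXT PRIMARY KEY,
                   taxon    TEXT,
                   length   INTEGER,
                   sequence TEXT)""")
con.executemany("INSERT INTO protein VALUES (?, ?, ?, ?)", [
    ("P00001", "Homo sapiens", 104, "MADEEKL..."),
    ("P00002", "Escherichia coli", 87, "MKTAYIA..."),
    ("P00003", "Saccharomyces cerevisiae", 230, "MSEQNNT..."),
])

# Subset library: one taxon and a length window, written as FASTA for the
# downstream similarity-search program.
rows = con.execute("""SELECT acc, sequence FROM protein
                      WHERE taxon = ? AND length BETWEEN ? AND ?""",
                   ("Homo sapiens", 50, 500))
fasta = "".join(f">{acc}\n{seq}\n" for acc, seq in rows)
print(fasta)
```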

  6. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  7. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Directory of Open Access Journals (Sweden)

    Steiakakis Chrysanthos

    2016-01-01

    Full Text Available The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden to ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions.

  8. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.

  9. Microbiological and environmental effects of aquifer thermal energy storage - studies at the Stuttgart man-made aquifer and a large-scale model system

    International Nuclear Information System (INIS)

    Adinolfi, M.; Ruck, W.

    1993-01-01

    The storage of thermal energy, either heat or cold, in natural or artificial aquifers creates local perturbations of the indigenous microflora and the environmental properties. Within an international working group of the International Energy Agency (IEA Annex VI) possible environmental impacts of ATES-systems were recognized and investigated. Investigations of storage systems on natural sites, man-made aquifers and large-scale models of impounded aquifers showed changes in microbial populations, but until now no adverse microbiological processes associated with ATES-systems could be documented. However, examinations with a model system indicate an increased risk of environmental impact. Therefore, the operation of ATES-systems should be accompanied by chemical and biological investigations. (orig.) [de

  10. ARRA-Multi-Level Energy Storage and Controls for Large-Scale Wind Energy Integration

    Energy Technology Data Exchange (ETDEWEB)

    David Wenzhong Gao

    2012-09-30

    intelligent controller that increases battery life within hybrid energy storage systems for wind application was developed. Comprehensive studies have been conducted and simulation results are analyzed. A permanent magnet synchronous generator, coupled with a variable speed wind turbine, is connected to a power grid (14-bus system). A rectifier, a DC-DC converter and an inverter are used to provide a complete model of the wind system. An Energy Storage System (ESS) is connected to a DC-link through a DC-DC converter. An intelligent controller is applied to the DC-DC converter to help the Voltage Source Inverter (VSI) to regulate output power and also to control the operation of the battery and supercapacitor. This ensures a longer life time for the batteries. The detailed model is simulated in PSCAD/EMTP. Additionally, economic analysis has been done for different methods that can reduce the wind power output fluctuation. These methods are, wind power curtailment, dumping loads, battery energy storage system and hybrid energy storage system. From the results, application of single advanced HESS can save more money for wind turbines owners. Generally the income would be the same for most of methods because the wind does not change and maximum power point tracking can be applied to most systems. On the other hand, the cost is the key point. For short term and small wind turbine, the BESS is the cheapest and applicable method while for large scale wind turbines and wind farms the application of advanced HESS would be the best method to reduce the power fluctuation. The key outcomes of this project include a new intelligent controller that can reduce energy exchanged between the battery and DC-link, reduce charging/discharging cycles, reduce depth of discharge and increase time interval between charge/discharge, and lower battery temperature. This improves the overall lifetime of battery energy storages. Additionally, a new design method based on probability help optimize the
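
    The record above does not give the controller's internals; the sketch below shows one common, simpler approach to the same goal, a low-pass-filter power split in which the battery follows the slow component of the power request and the supercapacitor absorbs the fast residual, thereby reducing battery cycling. The time constant and the synthetic request signal are illustrative assumptions.

```python
import numpy as np

def split_power(p_request, dt=1.0, tau=300.0):
    """Split a fluctuating power request between battery and supercapacitor.

    A first-order low-pass filter assigns the slow component to the battery
    and the fast residual to the supercapacitor, which reduces battery
    charge/discharge cycling and depth of discharge."""
    alpha = dt / (tau + dt)
    p_batt = np.zeros_like(p_request)
    for k in range(1, len(p_request)):
        p_batt[k] = p_batt[k - 1] + alpha * (p_request[k] - p_batt[k - 1])
    return p_batt, p_request - p_batt

# Synthetic smoothing request derived from wind-power deviations (per-unit)
rng = np.random.default_rng(0)
t = np.arange(0, 3600, 1.0)
p_req = 0.2 * np.sin(2 * np.pi * t / 1800) + 0.05 * rng.standard_normal(t.size)
p_batt, p_sc = split_power(p_req)
print(np.abs(np.diff(p_batt)).sum(), np.abs(np.diff(p_req)).sum())  # battery ramping << request ramping
```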

  11. Inferring Large-Scale Terrestrial Water Storage Through GRACE and GPS Data Fusion in Cloud Computing Environments

    Science.gov (United States)

    Rude, C. M.; Li, J. D.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Surface subsidence due to depletion of groundwater can lead to permanent compaction of aquifers and damaged infrastructure. However, studies of such effects on a large scale are challenging and compute intensive because they involve fusing a variety of data sets beyond direct measurements from groundwater wells, such as gravity change measurements from the Gravity Recovery and Climate Experiment (GRACE) or surface displacements measured by GPS receivers. Our work therefore leverages Amazon cloud computing to enable these types of analyses spanning the entire continental US. Changes in groundwater storage are inferred from surface displacements measured by GPS receivers stationed throughout the country. Receivers located on bedrock are anti-correlated with changes in water levels from elastic deformation due to loading, while stations on aquifers correlate with groundwater changes due to poroelastic expansion and compaction. Correlating linearly detrended equivalent water thickness measurements from GRACE with linearly detrended and Kalman filtered vertical displacements of GPS stations located throughout the United States helps compensate for the spatial and temporal limitations of GRACE. Our results show that the majority of GPS stations are negatively correlated with GRACE in a statistically relevant way, as most GPS stations are located on bedrock in order to provide stable reference locations and measure geophysical processes such as tectonic deformations. Additionally, stations located on the Central Valley California aquifer show statistically significant positive correlations. Through the identification of positive and negative correlations, deformation phenomena can be classified as loading or poroelastic expansion due to changes in groundwater. This method facilitates further studies of terrestrial water storage on a global scale. This work is supported by NASA AIST-NNX15AG84G (PI: V. Pankratius) and Amazon.
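
    A bare-bones version of the correlation analysis described above might look as follows; the Kalman filtering of the GPS series is omitted, and the synthetic monthly series are placeholders used only to show the detrend-and-correlate step.

```python
import numpy as np
from scipy.signal import detrend
from scipy.stats import pearsonr

def grace_gps_correlation(gps_vertical, grace_ewt):
    """Pearson correlation between linearly detrended GPS vertical
    displacements and linearly detrended GRACE equivalent-water-thickness
    anomalies sampled at the same epochs. Negative values are expected for
    bedrock stations (elastic loading), positive values for stations on
    compacting/expanding aquifers (poroelastic response)."""
    r, p_value = pearsonr(detrend(gps_vertical), detrend(grace_ewt))
    return r, p_value

# Synthetic monthly series over ten years (placeholders, not real data)
rng = np.random.default_rng(0)
months = np.arange(120)
grace = 5.0 * np.sin(2 * np.pi * months / 12) + 0.02 * months      # cm of EWT
gps_bedrock = -0.3 * grace + rng.standard_normal(120)              # mm, loading response
print(grace_gps_correlation(gps_bedrock, grace))
```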

  12. Experimental investigation of the dynamic behavior of a large-scale refrigeration – PCM energy storage system. Validation of a complete model

    International Nuclear Information System (INIS)

    Wu, Jing; Tremeac, Brice; Terrier, Marie-France; Charni, Mehdi; Gagnière, Emilie; Couenne, Françoise; Hamroun, Boussad; Jallut, Christian

    2016-01-01

    In the area of buildings refrigeration, the use of thermal energy storages coupled with heat pumps is a significant way for reducing the operating costs and optimizing the design of equipment. In this paper, a prototype of large-scale refrigeration - PCM (Phase Change Material) energy storage system is described, from which experimental results on transient behavior are obtained. A dynamic model for transient simulation of the coupled system is presented. The fluid flows through the heat exchangers and the storage tank are represented by a cascade of Continuous Stirred Tank Reactors (CSTRs). Switching procedures between different model configurations associated to phase transitions within heat exchangers and PCM storage tank are mathematically performed by matrix operations. The compressor, the expansion valve and the pressure drop across the evaporator are represented by static models based on empirical correlations. A PI controller for the expansion valve opening is integrated in the heat pump model to maintain the superheat at evaporator exit. The model is validated by a complete and detailed comparison between simulation and experimental results. - Highlights: • Experimental investigation of a refrigeration-PCM storage system is presented. • A detailed dynamic model for the coupled system is proposed. • Fluid flows in heat exchangers and PCM storage are represented by a cascade of CSTRs. • Phase transitions events according to time and space within heat exchangers and PCM storage are considered in the model. • Complete comparisons between experimental and simulation results are carried out.

  13. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and the complexity of biomedical data collected from various sources. This planet-size data brings serious challenges to storage and computing technologies. Cloud computing is an alternative to crack the nut because it addresses both storage and high-performance computing on large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research by making the vast amount of diverse data meaningful and usable.

  14. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...

  15. Wind power impacts and electricity storage - a time scale perspective

    DEFF Research Database (Denmark)

    Hedegaard, Karsten; Meibom, Peter

    2012-01-01

    Integrating large amounts of wind power in energy systems poses balancing challenges due to the variable and only partly predictable nature of wind. The challenges cover different time scales from intra-hour, intra-day/day-ahead to several days and seasonal level. Along with flexible electricity demand options, various electricity storage technologies are being discussed as candidates for contributing to large-scale wind power integration and these also differ in terms of the time scales at which they can operate. In this paper, using the case of Western Denmark in 2025 with an expected 57% wind power penetration, wind power impacts on different time scales are analysed. Results show consecutive negative and high net load period lengths indicating a significant potential for flexibility measures capable of charging/activating demand and discharging/inactivating demand in periods of 1 h to one...
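
    The consecutive-period statistic mentioned above can be computed directly from an hourly net-load series, as in the sketch below; the thresholds and the synthetic series are illustrative assumptions, not the Western Denmark data.

```python
import numpy as np

def period_lengths(series, condition):
    """Lengths (in hours) of consecutive runs where `condition` holds,
    e.g. negative net load (excess wind) or net load above a threshold."""
    lengths, run = [], 0
    for flag in condition(series):
        if flag:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

# Synthetic hourly net load (demand minus wind) for one year, in MW
rng = np.random.default_rng(0)
net_load = 300 + 800 * rng.standard_normal(8760)
negative_runs = period_lengths(net_load, lambda x: x < 0)       # charging opportunities
high_runs = period_lengths(net_load, lambda x: x > 1500)        # discharging needs
print(max(negative_runs), round(float(np.mean(negative_runs)), 1), len(high_runs))
```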

  16. Multi-scale theoretical investigation of hydrogen storage in covalent organic frameworks.

    Science.gov (United States)

    Tylianakis, Emmanuel; Klontzas, Emmanouel; Froudakis, George E

    2011-03-01

    The quest for efficient hydrogen storage materials has been the limiting step towards the commercialization of hydrogen as an energy carrier and has attracted a lot of attention from the scientific community. Sophisticated multi-scale theoretical techniques have been considered a valuable tool for the prediction of materials' storage properties. Such techniques have also been used for the investigation of hydrogen storage in a novel category of porous materials known as Covalent Organic Frameworks (COFs). These framework materials consist of light elements and are characterized by exceptional physicochemical properties such as large surface areas and pore volumes. Combinations of ab initio, Molecular Dynamics (MD) and Grand Canonical Monte-Carlo (GCMC) calculations have been performed to investigate the hydrogen adsorption in these ultra-light materials. The purpose of the present review is to summarize the theoretical hydrogen storage studies that have been published after the discovery of COFs. Experimental and theoretical studies have proven that COFs have comparable or better hydrogen storage abilities than other competitive materials such as MOFs. The key factors that can lead to the improvement of the hydrogen storage properties of COFs are highlighted, accompanied by some recently presented theoretical multi-scale studies concerning these factors.

  17. Estimating restorable wetland water storage at landscape scales

    Science.gov (United States)

    Jones, Charles Nathan; Evenson, Grey R.; McLaughlin, Daniel L.; Vanderhoof, Melanie; Lang, Megan W.; McCarty, Greg W.; Golden, Heather E.; Lane, Charles R.; Alexander, Laurie C.

    2018-01-01

    Globally, hydrologic modifications such as ditching and subsurface drainage have significantly reduced wetland water storage capacity (i.e., volume of surface water a wetland can retain) and consequent wetland functions. While wetland area has been well documented across many landscapes and used to guide restoration efforts, few studies have directly quantified the associated wetland storage capacity. Here, we present a novel raster-based approach to quantify both contemporary and potential (i.e., restorable) storage capacities of individual depressional basins across landscapes. We demonstrate the utility of this method by applying it to the Delmarva Peninsula, a region punctuated by both depressional wetlands and drainage ditches. Across the entire peninsula, we estimated that restoration (i.e., plugging ditches) could increase storage capacity by 80%. Focusing on an individual watershed, we found that over 59% of restorable storage capacity occurs within 20 m of the drainage network, and that 93% occurs within 1 m elevation of the drainage network. Our demonstration highlights widespread ditching in this landscape, spatial patterns of both contemporary and potential storage capacities, and clear opportunities for hydrologic restoration. In Delmarva and more broadly, our novel approach can inform targeted landscape-scale conservation and restoration efforts to optimize hydrologically mediated wetland functions.
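
    In the spirit of the raster-based approach described above (though greatly simplified relative to the paper's method), the sketch below integrates water depth over a small DEM to compare storage capacity below a ditch-controlled spill elevation with that below a natural spill point; the elevations and cell size are illustrative.

```python
import numpy as np

def basin_storage(dem, spill_elevation, cell_area):
    """Storage capacity (m^3) below a spill elevation: water depth summed
    over all raster cells of the depression, times the cell area."""
    depth = np.clip(spill_elevation - dem, 0.0, None)
    return float(depth.sum() * cell_area)

# Illustrative 1 m resolution DEM of a single small depression (elevations in m)
dem = np.array([[2.0, 1.8, 1.9],
                [1.7, 1.2, 1.6],
                [1.9, 1.5, 2.0]])

contemporary = basin_storage(dem, spill_elevation=1.6, cell_area=1.0)  # drained: ditch invert at 1.6 m
potential = basin_storage(dem, spill_elevation=1.9, cell_area=1.0)     # restored: natural spill at 1.9 m
print(contemporary, potential, potential - contemporary)               # restorable storage
```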

  18. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval.

    Science.gov (United States)

    Stefano, George B; Wang, Fuzhou; Kream, Richard M

    2018-02-26

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon "chips" and "cloud" storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA's great potential for large data storage in a 'smaller' space.

  19. Development of Best Practices for Large-scale Data Management Infrastructure

    NARCIS (Netherlands)

    S. Stadtmüller; H.F. Mühleisen (Hannes); C. Bizer; M.L. Kersten (Martin); J.A. de Rijke (Arjen); F.E. Groffen (Fabian); Y. Zhang (Ying); G. Ladwig; A. Harth; M Trampus

    2012-01-01

    The amount of available data for processing is constantly increasing and becoming more diverse. We collect our experiences on deploying large-scale data management tools on local-area clusters or cloud infrastructures and provide guidance on using these computing and storage

  20. Economic analysis of a new class of vanadium redox-flow battery for medium- and large-scale energy storage in commercial applications with renewable energy

    International Nuclear Information System (INIS)

    Li, Ming-Jia; Zhao, Wei; Chen, Xi; Tao, Wen-Quan

    2017-01-01

    Highlights: • A new class of the vanadium redox-flow battery (VRB) is developed. • The new class of VRB is more economic; it uses a simple process and is easy to scale up. • There are three levels of cell stacks and electrolytes with different qualities. • An economic analysis of the VRB system for renewable energy bases is carried out. • Related policies and suggestions based on the results are provided. - Abstract: Interest in the implementation of the vanadium redox-flow battery (VRB) for energy storage is growing; it is widely applicable to large-scale renewable energy (e.g. wind energy and solar photovoltaics), supports distributed generation, and helps lower imbalance and increase the usage of electricity. However, a comprehensive economic analysis of the VRB for energy storage across various commercial applications has been lacking, even though such an analysis is fundamental for implementation of the VRB in commercial electricity markets. In this study, based on a new class of the VRB developed by our team, a comprehensive economic analysis of the VRB for large-scale energy storage is carried out. The results illustrate the economics of VRB applications for three typical energy systems: (1) a VRB storage system replacing the normal lead-acid battery as the uninterruptible power supply (UPS) battery for office buildings and hospitals; (2) application of the vanadium battery in household distributed photovoltaic power generation systems; (3) wind power and solar power stations equipped with VRB storage systems. The economic perspectives and cost-benefit analysis of the VRB storage systems may underpin optimisation for maximum profitability. Two findings are drawn. First, with a fixed power capacity or a fixed discharging time, a greater profit ratio is obtained with a longer discharging time or a larger power capacity. Second, when the profit ratio, discharging time and power capacity are all variables, it is necessary to find the best optimisation
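
    As a back-of-the-envelope illustration of how a profit ratio for an arbitrage-operated storage system depends on power capacity and discharging time, the sketch below uses purely illustrative numbers; it is not the cost model or the data used in the study.

```python
def storage_profit_ratio(power_mw, hours, cycles_per_year, price_spread,
                         efficiency, capital_cost, annual_om, years):
    """Back-of-the-envelope profit ratio for an arbitrage-operated storage
    system: undiscounted lifetime revenue divided by total cost."""
    energy_per_cycle = power_mw * hours                       # MWh per full cycle
    annual_revenue = energy_per_cycle * cycles_per_year * price_spread * efficiency
    total_cost = capital_cost + annual_om * years
    return annual_revenue * years / total_cost

# Purely illustrative numbers (not taken from the study)
ratio = storage_profit_ratio(power_mw=10, hours=4, cycles_per_year=300,
                             price_spread=60.0, efficiency=0.75,
                             capital_cost=12e6, annual_om=2e5, years=15)
print(round(ratio, 2))   # > 1 would indicate lifetime revenue exceeding cost
```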

  1. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3

  2. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, F.; Donadei, S.

    2011-05-15

    Fossil energy sources require some 20% of the annual consumption to be stored to secure emergency cover, cold winter supply, peak shaving, seasonal swing, load management and energy trading. Today the electric power industry benefits from the extremely high energy density of fossil and nuclear fuels. This is one important reason why, for example, the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06%, compared with 20% for natural gas. Along with the changeover to renewable wind-based and, to a lesser extent, PV-based electricity production, this 'outsourcing' of storage services to fossil and nuclear fuels will decline. One important way out will be grid scale energy storage in geological formations. The present discussion, research projects and plans for balancing short term wind and solar power fluctuations focus primarily on the installation of Compressed Air Energy Storage (CAES) where the capacity of existing pumped hydro plants cannot be expanded, e.g. because of environmental issues or lack of suitable topography. Because of their small energy density, these storage options are, however, generally less suitable for balancing longer term fluctuations in the case of larger amounts of excess wind power, wind lulls or even seasonal fluctuations. One important way out is large underground hydrogen storage, which provides a much higher energy density because of the chemical energy bond. Underground hydrogen storage has been state of the art for many years in Great Britain and in the USA for the (petro-)chemical industry. (Author)

  3. Small Form Factor Information Storage Devices for Mobile Applications in Korea

    Science.gov (United States)

    Park, Young-Pil; Park, No-Cheol; Kim, Chul-Jin

    Recently, the ubiquitous environment in which anybody can access large amounts of information without limitations of place and time has become an important social issue. Two basic requirements in the field of information storage devices have to be satisfied: the first is the demand for improved memory capacity to manage the increasing volume of data for personal and professional purposes; the second is the demand for new information storage devices small enough to be applied to mobile multimedia digital electronics, including digital cameras, PDAs and mobile phones. In short, for mobile applications it is necessary to develop information storage devices that simultaneously offer a large capacity and a small size. Korea possesses the necessary infrastructure for developing such small-sized information storage devices: a strong digital market, major digital companies, and various research institutes. Nowadays, many companies, research institutes and universities cooperate in research on small-sized information storage devices. Thus, it is expected that small form factor optical disk drives will be commercialized in the very near future in Korea.

  4. Energy storage for electrical systems in the USA

    Directory of Open Access Journals (Sweden)

    Eugene Freeman

    2016-10-01

    Full Text Available Energy storage is becoming increasingly important as renewable generation sources such as wind turbines and photovoltaic solar are added to the mix in electrical power generation and distribution systems. The paper discusses the basic drivers for energy storage and provides brief descriptions of the various energy storage technologies available. The information summarizes current technical tradeoffs among the different storage approaches and identifies issues surrounding the deployment of large scale energy storage systems.

  5. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    Science.gov (United States)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS), which combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes of storage. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements ranging from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
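
    As a loose illustration of the "analyze the data where it sits" idea, the sketch below computes a spatial mean per time step from a NetCDF file in a map-then-reduce style using only the netCDF4 package and Python multiprocessing. The file name and variable name are hypothetical placeholders, and this is not the NCCS DASS software itself.

        # Minimal map/reduce-style pass over a NetCDF file (illustrative only).
        # "merra2_t2m.nc" and the variable name "T2M" are hypothetical placeholders.
        from multiprocessing import Pool
        import numpy as np
        from netCDF4 import Dataset

        PATH, VAR = "merra2_t2m.nc", "T2M"

        def mapper(bounds):
            """Map step: open the file in the worker and reduce one chunk of time steps."""
            start, stop = bounds
            with Dataset(PATH) as ds:
                block = ds.variables[VAR][start:stop, :, :]        # (t, lat, lon)
                return np.asarray(block).mean(axis=(1, 2))         # spatial mean per step

        if __name__ == "__main__":
            with Dataset(PATH) as ds:
                n_time = ds.variables[VAR].shape[0]
            chunks = [(i, min(i + 64, n_time)) for i in range(0, n_time, 64)]
            with Pool(4) as pool:
                partial_means = pool.map(mapper, chunks)           # map over time chunks
            series = np.concatenate(partial_means)                 # reduce: stitch results
            print(series[:5])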

  6. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    Science.gov (United States)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation Systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate the adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.

  7. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA to LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  8. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, and large scale hydrogen production plants will need to be installed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, a state-of-the-art review of the electrolysis modules currently available was compiled, and a review of the large scale electrolysis plants that have been installed around the world was also carried out. The main projects related to large scale electrolysis were listed, and the economics of large scale electrolysers was discussed. The influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  9. Battery Energy Storage Market: Commercial Scale, Lithium-ion Projects in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    McLaren, Joyce; Gagnon, Pieter; Anderson, Kate; Elgqvist, Emma; Fu, Ran; Remo, Tim

    2016-10-01

    This slide deck presents current market data on the commercial scale li-ion battery storage projects in the U.S. It includes existing project locations, cost data and project cost breakdown, a map of demand charges across the U.S. and information about how the ITC and MACRS apply to energy storage projects that are paired with solar PV technology.

  10. ASDF: A New Adaptable Data Format for Seismology Suitable for Large-Scale Workflows

    Science.gov (United States)

    Krischer, L.; Smith, J. A.; Spinuso, A.; Tromp, J.

    2014-12-01

    Increases in the amount of available data as well as in computational power open the possibility to tackle ever larger and more complex problems. This comes with a slew of new problems, two of which are the need for a more efficient use of available resources and a sensible organization and storage of the data. Both need to be satisfied in order to properly scale a problem, and both are frequent bottlenecks in large seismic inversions using ambient noise or more traditional techniques. We present recent developments and ideas regarding a new data format, named ASDF (Adaptable Seismic Data Format), for all branches of seismology, aiding with the aforementioned problems. The key idea is to store all information necessary to fully understand a set of data in a single file. This enables the construction of self-explaining and exchangeable data sets, facilitating collaboration on large-scale problems. We incorporate the existing metadata standards FDSN StationXML and QuakeML together with waveform and auxiliary data into a common container based on the HDF5 standard. A further critical component of the format is the storage of provenance information as an extension of W3C PROV, meaning information about the history of the data, assisting with the general problem of reproducibility. Applications of the proposed new format are numerous. In the context of seismic tomography it enables the full description and storage of synthetic waveforms, including information about the model used, the solver, the parameters, and other variables that influenced the final waveforms. Furthermore, intermediate products like adjoint sources, cross correlations, and receiver functions can be described and, most importantly, exchanged with others. Usability and tool support are crucial for any new format to gain acceptance, and we additionally present a fully functional implementation of this format based on Python and ObsPy. It offers a convenient way to discover and analyze data sets as well as making
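
    The Python/ObsPy implementation mentioned above is distributed as the pyasdf package; the short sketch below shows the basic pattern of bundling events, station metadata and waveforms into one HDF5 container. The file names are hypothetical, and the exact method names should be checked against the pyasdf documentation.

        # Sketch: assembling an ASDF container with pyasdf (file names are placeholders).
        import obspy
        import pyasdf

        ds = pyasdf.ASDFDataSet("example_dataset.h5")

        # QuakeML event metadata and FDSN StationXML go in alongside the waveforms.
        events = obspy.read_events("example_events.xml")
        ds.add_quakeml(events)

        inventory = obspy.read_inventory("BW.ALTM.xml")
        ds.add_stationxml(inventory)

        # Waveforms are stored under a user-chosen tag and linked to an event.
        stream = obspy.read("BW.ALTM..EHZ.mseed")
        ds.add_waveforms(stream, tag="raw_recording", event_id=events[0])

        # Access is transparent: stations and tags behave like attributes.
        print(ds.waveforms.BW_ALTM.raw_recording)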

  11. Preparatory study of energy storage systems

    International Nuclear Information System (INIS)

    Stortelder, B.J.M.

    1993-01-01

    Based on a literature survey, information from other institutes and interviews with KEMA-experts a state of the art is given of small-scale, medium-scale and large-scale energy storage systems. The results of the survey can be used to optimize the electric power supply. Attention is paid to the criteria capacity, efficiency, dynamic performance, economic aspects and the environmental impacts

  12. Large-Scale Multifunctional Electrochromic-Energy Storage Device Based on Tungsten Trioxide Monohydrate Nanosheets and Prussian White.

    Science.gov (United States)

    Bi, Zhijie; Li, Xiaomin; Chen, Yongbo; He, Xiaoli; Xu, Xiaoke; Gao, Xiangdong

    2017-09-06

    A high-performance electrochromic-energy storage device (EESD) is developed, which successfully realizes the multifunctional combination of electrochromism and energy storage by constructing tungsten trioxide monohydrate (WO3·H2O) nanosheets and Prussian white (PW) film as asymmetric electrodes. The EESD presents excellent electrochromic properties of broad optical modulation (61.7%), ultrafast response speed (1.84/1.95 s), and great coloration efficiency (139.4 cm2 C-1). In particular, remarkable cyclic stability (sustaining 82.5% of its initial optical modulation after 2500 cycles as an electrochromic device, almost fully maintaining its capacitance after 1000 cycles as an energy storage device) is achieved. The EESD is also able to visually indicate the energy storage level via reversible and fast color changes. Moreover, the EESD can be combined with commercial solar cells to constitute an intelligent operating system in buildings, which would allow the adjustment of indoor sunlight and the improvement of physical comfort entirely through the rational utilization of solar energy, without additional electricity. Besides, a scaled-up EESD (10 × 11 cm2) is further fabricated as a prototype. Such a promising EESD shows huge potential for practical use as electrochromic smart windows and energy storage devices.

  13. Optimal Offering and Operating Strategy for a Large Wind-Storage System as a Price Maker

    DEFF Research Database (Denmark)

    Ding, Huajie; Pinson, Pierre; Hu, Zechun

    2017-01-01

    Wind farms and energy storage systems are playing increasingly important roles in power systems, which makes their offering non-negligible in some markets. From the perspective of wind farm-energy storage systems (WF-ESS), this paper proposes an integrated strategy of day-ahead offering...... and real-time operation policies to maximize their overall profit. As participants with large capacity in electricity markets can influence cleared prices by strategic offering, a large-scale WF-ESS is assumed to be a price maker in day-ahead markets. Correspondingly, the strategy considers influence

  14. Circuit engineering principles for construction of bipolar large-scale integrated circuit storage devices and very large-scale main memory

    Science.gov (United States)

    Neklyudov, A. A.; Savenkov, V. N.; Sergeyez, A. G.

    1984-06-01

    Memories are improved by increasing speed or the memory volume on a single chip. The most effective means for increasing speed in bipolar memories are current control circuits with the lowest extraction times for a specific power consumption (1/4 pJ/bit). The current control circuitry involves multistage current switches and circuits accelerating transient processes in storage elements and links. Circuit principles for the design of bipolar memories with maximum speed for an assigned minimum of circuit topology are analyzed. Two main classes of storage with current control are considered: the ECL type and the super-integrated injection type, with data capacities of N = 1/4 and N 4/16, respectively. The circuits reduce logic voltage differentials and the volumes of lexical and discharge buses and control circuit buses. The limiting speed is determined by the anti-interference requirements of the memory in storage and extraction modes.

  15. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  16. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]

    2017-12-04

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
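
    The sketch below illustrates the general idea of saliency-driven wavelet compression with the PyWavelets package: each block of a 2D field keeps only a fraction of its largest wavelet coefficients, and that fraction is higher for blocks flagged as salient. It is a toy illustration under assumed parameters, not the storage model described in the paper.

        # Toy saliency-driven wavelet compression of a 2D field (illustrative only).
        import numpy as np
        import pywt

        def compress_block(block, keep_fraction, wavelet="db4", level=2):
            """Keep only the largest `keep_fraction` of wavelet coefficients in a block."""
            coeffs = pywt.wavedec2(block, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            k = max(1, int(keep_fraction * arr.size))
            threshold = np.partition(np.abs(arr).ravel(), -k)[-k]
            arr[np.abs(arr) < threshold] = 0.0                    # hard thresholding
            rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                                wavelet)
            return rec[:block.shape[0], :block.shape[1]]

        rng = np.random.default_rng(0)
        field = rng.standard_normal((256, 256))                   # stand-in for a flow field
        blocks_kept = []
        for i in range(0, 256, 64):
            for j in range(0, 256, 64):
                salient = 64 <= i < 192                           # pretend the wake sits here
                keep = 0.50 if salient else 0.05                  # high vs low fidelity
                blocks_kept.append(compress_block(field[i:i+64, j:j+64], keep))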

  17. Analysis of an HTS coil for large scale superconducting magnetic energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ji Young; Lee, Se Yeon; Choi, Kyeong Dal; Park, Sang Ho; Hong, Gye Won; Kim, Sung Soo; Kim, Woo Seok [Korea Polytechnic University, Siheung (Korea, Republic of); Lee, Ji Kwang [Woosuk University, Wanju (Korea, Republic of)

    2015-06-15

    It is well known that a toroid is the inevitable shape for a high temperature superconducting (HTS) coil as a component of a large scale superconducting magnetic energy storage system (SMES), because it is the best option to minimize the magnetic field intensity applied perpendicularly to the HTS wires. Even though a perfect toroid coil does not have a perpendicular magnetic field, for a practical toroid coil composed of many HTS pancake coils some perpendicular magnetic field cannot be avoided, which is a major cause of degradation of the HTS wires. In order to suggest an optimum design solution for an HTS SMES system, we need an accurate, fast, and effective calculation of the magnetic field, mechanical stresses, and stored energy. As a calculation method for these criteria, a numerical calculation such as the finite element method (FEM) has usually been adopted. However, a 3-dimensional FEM can involve complicated calculations and can be relatively time consuming, which leads to very inefficient iterations in an optimal design process. In this paper, we suggest an intuitive and effective way to determine the maximum magnetic field intensity in the HTS coil by using an analytic and statistical calculation method. We were able to achieve a remarkable reduction of the calculation time by using this method. The calculation results using this method for sample model coils were compared with those obtained by the conventional numerical method to verify the accuracy and availability of the proposed method. After the successful substitution of this calculation method into the proposed design program, a similar method for determining the maximum mechanical stress in the HTS coil will also be studied as future work.
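
    For orientation, the peak field in an ideal toroid occurs at the inner radius and follows the textbook expression B = mu0*N*I / (2*pi*r); the snippet below evaluates it for assumed coil parameters. This is only the ideal-toroid estimate, not the analytic-statistical method proposed in the paper, which additionally accounts for the perpendicular field between pancake coils.

        # Ideal-toroid peak field estimate (assumed parameters, not the paper's design).
        from math import pi

        MU0 = 4e-7 * pi      # vacuum permeability, H/m
        N_TURNS = 20000      # total number of turns in the toroid (assumed)
        CURRENT = 200.0      # operating current, A (assumed)
        R_INNER = 1.5        # inner major radius of the toroid, m (assumed)

        b_max = MU0 * N_TURNS * CURRENT / (2 * pi * R_INNER)
        print(f"ideal-toroid peak field at the inner radius: {b_max:.2f} T")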

  18. Analysis of an HTS coil for large scale superconducting magnetic energy storage

    International Nuclear Information System (INIS)

    Lee, Ji Young; Lee, Se Yeon; Choi, Kyeong Dal; Park, Sang Ho; Hong, Gye Won; Kim, Sung Soo; Kim, Woo Seok; Lee, Ji Kwang

    2015-01-01

    It is well known that a toroid is the inevitable shape for a high temperature superconducting (HTS) coil as a component of a large scale superconducting magnetic energy storage system (SMES), because it is the best option to minimize the magnetic field intensity applied perpendicularly to the HTS wires. Even though a perfect toroid coil does not have a perpendicular magnetic field, for a practical toroid coil composed of many HTS pancake coils some perpendicular magnetic field cannot be avoided, which is a major cause of degradation of the HTS wires. In order to suggest an optimum design solution for an HTS SMES system, we need an accurate, fast, and effective calculation of the magnetic field, mechanical stresses, and stored energy. As a calculation method for these criteria, a numerical calculation such as the finite element method (FEM) has usually been adopted. However, a 3-dimensional FEM can involve complicated calculations and can be relatively time consuming, which leads to very inefficient iterations in an optimal design process. In this paper, we suggest an intuitive and effective way to determine the maximum magnetic field intensity in the HTS coil by using an analytic and statistical calculation method. We were able to achieve a remarkable reduction of the calculation time by using this method. The calculation results using this method for sample model coils were compared with those obtained by the conventional numerical method to verify the accuracy and availability of the proposed method. After the successful substitution of this calculation method into the proposed design program, a similar method for determining the maximum mechanical stress in the HTS coil will also be studied as future work.

  19. A manganese-hydrogen battery with potential for grid-scale energy storage

    Science.gov (United States)

    Chen, Wei; Li, Guodong; Pei, Allen; Li, Yuzhang; Liao, Lei; Wang, Hongxia; Wan, Jiayu; Liang, Zheng; Chen, Guangxu; Zhang, Hao; Wang, Jiangyan; Cui, Yi

    2018-05-01

    Batteries including lithium-ion, lead-acid, redox-flow and liquid-metal batteries show promise for grid-scale storage, but they are still far from meeting the grid's storage needs such as low cost, long cycle life, reliable safety and reasonable energy density for cost and footprint reduction. Here, we report a rechargeable manganese-hydrogen battery, where the cathode is cycled between soluble Mn2+ and solid MnO2 with a two-electron reaction, and the anode is cycled between H2 gas and H2O through well-known catalytic reactions of hydrogen evolution and oxidation. This battery chemistry exhibits a discharge voltage of 1.3 V, a rate capability of 100 mA cm-2 (36 s of discharge) and a lifetime of more than 10,000 cycles without decay. We achieve a gravimetric energy density of 139 Wh kg-1 (volumetric energy density of 210 Wh l-1), with the theoretical gravimetric energy density of 174 Wh kg-1 (volumetric energy density of 263 Wh l-1) in a 4 M MnSO4 electrolyte. The manganese-hydrogen battery involves low-cost abundant materials and has the potential to be scaled up for large-scale energy storage.

  20. Impacts of compressed air energy storage plant on an electricity market with a large renewable energy portfolio

    International Nuclear Information System (INIS)

    Foley, A.; Díaz Lobera, I.

    2013-01-01

    Renewable energy generation is expected to continue to increase globally due to renewable energy targets and obligations to reduce greenhouse gas emissions. Some renewable energy sources are variable power sources, for example wind, wave and solar. Energy storage technologies can manage the issues associated with variable renewable generation and align non-dispatchable renewable energy generation with load demands. Energy storage technologies can play different roles at each step of the electric power supply chain. Moreover, large scale energy storage systems can act as renewable energy integrators by smoothing the variability. Compressed air energy storage is one such technology. This paper examines the impacts of a compressed air energy storage facility in a pool based wholesale electricity market in a power system with a large renewable energy portfolio

  1. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    The present trend of investing in renewable electricity generation to the detriment of conventional fossil fuel-based plants will lead to a point where these plants have to provide ancillary services and contribute to overall grid stability. Photovoltaic (PV) power has the fastest...... growth among all renewable energies and has managed to reach high penetration levels, creating instabilities which at the moment are corrected by conventional generation. This paradigm will change in future scenarios where most of the power is supplied by large scale renewable plants and parts...... of the ancillary services have to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should

  2. Regenesys utility scale energy storage. Project summary

    International Nuclear Information System (INIS)

    2004-01-01

    This report summarises the work to date, the current situation and the future direction of a project carried out by Regenesys Technology Ltd. (RGN) to investigate the benefits of electrochemical energy storage for power generators using renewable energy sources, focussing on wind energy. The background to the study is traced, covering the progress of the Regenesys energy storage technology, the milestones achieved and the lessons learnt. Details are given of the planned renewable-store-market interface to allow renewable generators to optimise revenue under the New Electricity Trading Arrangements (NETA) and to help in connecting the renewable energy to the electric grid system. The four integrated work programmes of the project are described and involve a system study examining market penetration of renewable generators, a technical study into the connection of renewable generators and energy storage, a small scale demonstration, and a pilot scale energy storage plant at Little Barton in Cambridgeshire. Problems leading to the closure of the project are discussed

  3. Regenesys utility scale energy storage. Project summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This report summarises the work to date, the current situation and the future direction of a project carried out by Regenesys Technology Ltd. (RGN) to investigate the benefits of electrochemical energy storage for power generators using renewable energy sources, focussing on wind energy. The background to the study is traced, covering the progress of the Regenesys energy storage technology, the milestones achieved and the lessons learnt. Details are given of the planned renewable-store-market interface to allow renewable generators to optimise revenue under the New Electricity Trading Arrangements (NETA) and to help in connecting the renewable energy to the electric grid system. The four integrated work programmes of the project are described and involve a system study examining market penetration of renewable generators, a technical study into the connection of renewable generators and energy storage, a small scale demonstration, and a pilot scale energy storage plant at Little Barton in Cambridgeshire. Problems leading to the closure of the project are discussed.

  4. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping 'cells' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce 'kernels' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows of PanSTARRS+SDSS data (220GB) in less than 15 minutes on a dual CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbits/sec (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.

  5. Understanding I/O workload characteristics of a Peta-scale storage system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Youngjae [ORNL; Gunasekaran, Raghul [ORNL

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this petascale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
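
    The paper models request inter-arrival times and bandwidth usage as Pareto distributed; a minimal sketch of fitting such a model with SciPy is shown below, using synthetic data in place of the Spider traces, which are not reproduced here.

        # Fit a Pareto distribution to (synthetic) request inter-arrival times.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Synthetic stand-in for measured inter-arrival times (seconds); shape b=1.8 assumed.
        inter_arrivals = stats.pareto.rvs(b=1.8, scale=0.01, size=10_000, random_state=rng)

        # Fix the location at 0 so only the shape and scale are estimated.
        b_hat, loc_hat, scale_hat = stats.pareto.fit(inter_arrivals, floc=0)
        print(f"shape={b_hat:.2f}, scale={scale_hat:.4f}")

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
        ks = stats.kstest(inter_arrivals, "pareto", args=(b_hat, loc_hat, scale_hat))
        print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")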

  6. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    International Nuclear Information System (INIS)

    Yang, J M

    2007-01-01

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, "Solid Waste Drum Array Fire Performance," Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, "Fire Protection Guide for Waste Drum Storage Array," September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246: seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another; hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows: • Trash expulsion was negligible. • Flame impingement was identified as the main cause of failure. • The range of drum temperatures at failure was 600 °C to 800 °C, which is above the yield strength temperature for steel, approximately 540 °C (1,000 °F). • The critical heat flux required for failure is above 45 kW/m2. • Fire propagation from one drum to the next was not observed. The statistical evaluation of the test results using, for example, Student's t-distribution will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, a safety analysis utilizing the general criteria described in the five bullets above will lead to a technically robust and defensible product that bounds the potential consequences from postulated
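
    As an illustration of the statistical evaluation mentioned above, the snippet below computes a one-sided lower confidence bound on the mean drum failure temperature from a small sample using Student's t-distribution. The sample values are invented for illustration; only the 600-800 °C failure range comes from the test report.

        # One-sided lower confidence bound on mean failure temperature (illustrative).
        import numpy as np
        from scipy import stats

        # Hypothetical failure temperatures (deg C) within the reported 600-800 range.
        temps = np.array([610.0, 655.0, 690.0, 720.0, 745.0, 770.0, 795.0])

        n = temps.size
        mean, sd = temps.mean(), temps.std(ddof=1)
        t_crit = stats.t.ppf(0.95, df=n - 1)            # 95% one-sided critical value
        lower_bound = mean - t_crit * sd / np.sqrt(n)

        print(f"mean = {mean:.0f} C, 95% lower bound on mean failure temp = {lower_bound:.0f} C")
        # A failure criterion set well below this bound (e.g. the 540 C yield-strength
        # temperature) is conservative relative to the observed failures.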

  7. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  8. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  9. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  10. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and increasing in complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in some common 3D display software, such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.

  11. Large storage operations under climate change: expanding uncertainties and evolving tradeoffs

    Science.gov (United States)

    Giuliani, Matteo; Anghileri, Daniela; Castelletti, Andrea; Vu, Phuong Nam; Soncini-Sessa, Rodolfo

    2016-03-01

    In a changing climate and society, large storage systems can play a key role in securing water, energy, and food, and in rebalancing their cross-dependencies. In this letter, we study the role of large storage operations as flexible means of adaptation to climate change. In particular, we explore the impacts of different climate projections for different future time horizons on the multi-purpose operations of the existing system of large dams in the Red River basin (China-Laos-Vietnam). We identify the main vulnerabilities of current system operations, understand the risk of failure across sectors by exploring the evolution of the system tradeoffs, quantify how the uncertainty associated with climate scenarios is expanded by the storage operations, and assess the expected costs if no adaptation is implemented. Results show that, depending on the climate scenario and the time horizon considered, the existing operations are predicted to change on average from -7 to +5% in hydropower production, +35 to +520% in flood damages, and +15 to +160% in water supply deficit. These negative impacts can be partially mitigated by adapting the existing operations to the future climate, reducing the loss of hydropower to 5% and potentially saving around 34.4 million US$ per year at the national scale. Since the Red River is paradigmatic of many river basins across South East Asia, where new large dams are under construction or planned to support fast growing economies, our results can support policy makers in prioritizing responses and adaptation strategies to the changing climate.

  12. Ensuring Adequate Health and Safety Information for Decision Makers during Large-Scale Chemical Releases

    Science.gov (United States)

    Petropoulos, Z.; Clavin, C.; Zuckerman, B.

    2015-12-01

    The 2014 4-Methylcyclohexanemethanol (MCHM) spill in the Elk River of West Virginia highlighted existing gaps in emergency planning for, and response to, large-scale chemical releases in the United States. The Emergency Planning and Community Right-to-Know Act requires that facilities with hazardous substances provide Material Safety Data Sheets (MSDSs), which contain health and safety information on the hazardous substances. The MSDS produced by Eastman Chemical Company, the manufacturer of MCHM, listed "no data available" for various human toxicity subcategories, such as reproductive toxicity and carcinogenicity. As a result of incomplete toxicity data, the public and media received conflicting messages on the safety of the contaminated water from government officials, industry, and the public health community. Two days after the governor lifted the ban on water use, the health department partially retracted the ban by warning pregnant women to continue avoiding the contaminated water, which the Centers for Disease Control and Prevention deemed safe three weeks later. The response in West Virginia represents a failure in risk communication and calls into question whether government officials have sufficient information to support evidence-based decisions during future incidents. Research capabilities, like National Science Foundation RAPID funding, can provide a solution to some of the data gaps, such as information on environmental fate in the case of the MCHM spill. In order to inform policy discussions on this issue, a methodology for assessing the outcomes of RAPID and similar National Institutes of Health grants in the context of emergency response is employed to examine the efficacy of research-based capabilities in enhancing public health decision-making capacity. The results of this assessment highlight potential roles rapid scientific research can fill in ensuring adequate health and safety data are readily available for decision makers during large-scale

  13. Hybrid dual gate ferroelectric memory for multilevel information storage

    KAUST Repository

    Khan, Yasser

    2015-01-01

    Here, we report hybrid organic/inorganic ferroelectric memory with multilevel information storage using transparent p-type SnO semiconductor and ferroelectric P(VDF-TrFE) polymer. The dual gate devices include a top ferroelectric field-effect transistor (FeFET) and a bottom thin-film transistor (TFT). The devices are all fabricated at low temperatures (∼200°C), and demonstrate excellent performance with high hole mobility of 2.7 cm2 V-1 s-1, large memory window of ∼18 V, and a low sub-threshold swing ∼-4 V dec-1. The channel conductance of the bottom-TFT and the top-FeFET can be controlled independently by the bottom and top gates, respectively. The results demonstrate multilevel nonvolatile information storage using ferroelectric memory devices with good retention characteristics.

  14. An advanced joint inversion system for CO2 storage modeling with large data sets for characterization and real-time monitoring - enhancing storage performance and reducing failure risks under uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kitanidis, Peter [Stanford Univ., CA (United States)

    2016-04-30

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  15. Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research]

    2017-11-03

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.

  16. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  17. Ecological research in the large-scale biosphere-atmosphere experiment in Amazonia: early results

    NARCIS (Netherlands)

    Keller, M.; Alencar, A.; Asner, G.P.; Braswell, B.; Bustamante, M.; Davidson, E.; Feldpausch, T.; Fernandes, E.; Goulden, M.; Kabat, P.; Kruijt, B.; Luizão, F.; Miller, S.; Markewitz, D.; Nobre, A.D.; Nobre, C.A.; Priante Filho, N.; Rocha, da H.; Silva Dias, P.; Randow, von C.; Vourlitis, G.L.

    2004-01-01

    The Large-scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multinational, interdisciplinary research program led by Brazil. Ecological studies in LBA focus on how tropical forest conversion, regrowth, and selective logging influence carbon storage, nutrient dynamics, trace gas fluxes,

  18. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
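
    The core of such a siting tool is a user-weighted overlay of criterion rasters; the sketch below shows that step with NumPy for three made-up criteria. The layers, weights and exclusion mask are placeholders, not the tool's actual inputs or algorithm.

        # Weighted multi-criteria suitability overlay (toy example, not the NREL tool).
        import numpy as np

        rng = np.random.default_rng(1)
        shape = (200, 200)                                # toy raster grid

        # Criterion layers normalised to [0, 1]; higher is better (all synthetic).
        solar_resource = rng.uniform(0.4, 1.0, shape)     # e.g. normalised solar resource
        slope_penalty = 1.0 - rng.uniform(0.0, 0.5, shape)
        grid_proximity = rng.uniform(0.0, 1.0, shape)     # closer to transmission is better

        weights = {"solar": 0.5, "slope": 0.2, "grid": 0.3}    # user-defined priorities

        suitability = (weights["solar"] * solar_resource
                       + weights["slope"] * slope_penalty
                       + weights["grid"] * grid_proximity)

        # Hard exclusions (protected areas, water bodies, ...) are masked out entirely.
        excluded = rng.random(shape) < 0.05
        suitability[excluded] = np.nan

        best = np.unravel_index(np.nanargmax(suitability), shape)
        print("best cell:", best, "score:", round(float(suitability[best]), 3))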

  19. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  20. Large scale renewable power generation advances in technologies for generation, transmission and storage

    CERN Document Server

    Hossain, Jahangir

    2014-01-01

    This book focuses on the issues of integrating large-scale renewable power generation into existing grids. It includes a new protection technique for renewable generators along with the inclusion of current status of smart grid.

  1. Sorption heat storage for long-term low-temperature applications: A review on the advancements at material and prototype scale

    NARCIS (Netherlands)

    Scapino, L.; Zondag, H.A.; Van Bael, J.; Diriken, J.; Rindt, C.C.M.

    2017-01-01

    Sorption heat storage has the potential to store large amounts of thermal energy from renewables and other distributed energy sources. This article provides an overview of the recent advancements in long-term sorption heat storage at the material and prototype scales. The focus is on applications

  2. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image conditions, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
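
    A bare-bones version of the latent-category idea, latent semantic analysis over a region-by-visual-word count matrix, can be written with scikit-learn as below. The matrix here is random toy data; in the paper the columns are visual-word counts from object proposals and the learned components are subsequently screened for discriminativeness.

        # Latent semantic analysis over a (region x visual word) count matrix (toy data).
        import numpy as np
        from sklearn.decomposition import TruncatedSVD
        from sklearn.preprocessing import normalize

        rng = np.random.default_rng(0)
        n_regions, n_visual_words, n_latent = 500, 1000, 20

        # Stand-in for visual-word histograms of candidate regions (real data would come
        # from object proposals pooled over images of one class, e.g. "aeroplane").
        counts = rng.poisson(lam=0.3, size=(n_regions, n_visual_words)).astype(float)

        svd = TruncatedSVD(n_components=n_latent, random_state=0)
        region_topics = svd.fit_transform(counts)          # each row: mixture over latent cats
        region_topics = normalize(region_topics)

        # Assign every region to its dominant latent category; one of these categories
        # should correspond to the object, the rest to background context (e.g. sky).
        assignments = region_topics.argmax(axis=1)
        print(np.bincount(assignments, minlength=n_latent))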

  3. The impact of large-scale energy storage requirements on the choice between electricity and hydrogen as the major energy carrier in a non-fossil renewables-only scenario

    International Nuclear Information System (INIS)

    Converse, Alvin O.

    2006-01-01

    The need for large-scale storage, when the energy source is subject to periods of low-energy generation, as it would be in a direct solar or wind energy system, could be the factor which justifies the choice of hydrogen, rather than electricity, as the principal energy carrier. It could also be the 'Achilles heel' of a solar-based sustainable energy system, tipping the choice to a nuclear breeder system

  4. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  5. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, Fritz; Donadei, Sabine [KBB Underground Technologies GmbH, Hannover (Germany)

    2009-07-01

    Fossil energy sources require some 20% of the annual consumption to be stored to secure emergency cover, peak shaving, seasonal balancing, etc. Today the electric power industry benefits from the extremely high energy density of fossil fuels. This is one important reason why the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06% compared to natural gas. Along with the changeover to renewable, wind-based electricity production, this "outsourcing" of storage services to fossil fuels will decline. One important way out will be grid-scale energy storage. The present discussion on balancing short-term wind and solar power fluctuations focuses primarily on the installation of Compressed Air Energy Storage (CAES) plants in addition to existing pumped hydro plants. Because of their small energy density, these storage options are, however, generally not suitable for balancing longer-term fluctuations in the case of larger amounts of excess wind power or even seasonal fluctuations. Underground hydrogen storage, however, provides a much higher energy density because of the chemical energy bond - standard practice for many years. The first part of the article describes the present status and performance of grid-scale energy storage in geological formations, mainly salt caverns. It is followed by a compilation of generally suitable locations in Europe and particularly Germany. The second part deals with first results of preliminary investigations into the possibilities and limits of offshore CAES power stations. (orig.)

  6. A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverses Osmosis System

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

    Full Text Available In this work, an efficient strategy was proposed for the solution of the dynamic optimization model of a SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, a simultaneous strategy based on collocation on finite elements was used to transform the dynamic optimization problem into a large-scale nonlinear programming problem named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial point for the optimal solution of the SWRO system. Finally, in order to accelerate the computation and at the same time keep sufficient accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimization problem can be successfully solved within tens of iterations and several minutes when the load and other operating parameters fluctuate.
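
    The simultaneous (discretize-then-optimize) idea above can be illustrated with a much smaller toy problem than the SWRO plant: a single storage tank whose dynamics are discretized by implicit Euler on finite elements and solved as one nonlinear program. The tank model, demand profile and bounds below are hypothetical illustrations, not the paper's Opt2 formulation.

```python
# Toy illustration of a "simultaneous" (discretize-then-optimize) strategy:
# a single storage-tank level h(t) with controllable feed u(t), discretized by
# implicit Euler on N finite elements and solved as one NLP with SciPy.
# The tank model and targets are hypothetical, not the SWRO plant of the paper.
import numpy as np
from scipy.optimize import minimize

N, dt = 24, 1.0                    # 24 one-hour elements
demand = 2.0 + np.sin(np.linspace(0, 2*np.pi, N))   # hypothetical water demand
h0, h_ref = 5.0, 5.0               # initial and target tank level

def unpack(z):
    return z[:N], z[N:]            # levels h_1..h_N, feeds u_1..u_N

def objective(z):
    h, u = unpack(z)
    return np.sum(u**2) + 10.0*np.sum((h - h_ref)**2)   # pumping effort + level tracking

def dynamics(z):
    h, u = unpack(z)
    h_prev = np.concatenate(([h0], h[:-1]))
    return h - h_prev - dt*(u - demand)   # implicit Euler residual, one row per element

z0 = np.concatenate((np.full(N, h0), demand.copy()))
res = minimize(objective, z0,
               constraints={"type": "eq", "fun": dynamics},
               bounds=[(0.0, 10.0)]*N + [(0.0, 5.0)]*N,
               method="SLSQP")
print(res.success, res.fun)
```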

  7. Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis.

    Science.gov (United States)

    Deetjen, Ulrike; Powell, John A

    2016-05-01

    This research examines the extent to which informational and emotional elements are employed in online support forums for 14 purposively sampled chronic medical conditions and the factors that influence whether posts are of a more informational or emotional nature. Large-scale qualitative data were obtained from Dailystrength.org. Based on a hand-coded training dataset, all posts were classified into informational or emotional using a Bayesian classification algorithm to generalize the findings. Posts that could not be classified with a probability of at least 75% were excluded. The overall tendency toward emotional posts differs by condition: mental health (depression, schizophrenia) and Alzheimer's disease consist of more emotional posts, while informational posts relate more to nonterminal physical conditions (irritable bowel syndrome, diabetes, asthma). There is no gender difference across conditions, although prostate cancer forums are oriented toward informational support, whereas breast cancer forums rather feature emotional support. Across diseases, the best predictors for emotional content are lower age and a higher number of overall posts by the support group member. The results are in line with previous empirical research and unify empirical findings from single/2-condition research. Limitations include the analytical restriction to predefined categories (informational, emotional) through the chosen machine-learning approach. Our findings provide an empirical foundation for building theory on informational versus emotional support across conditions, give insights for practitioners to better understand the role of online support groups for different patients, and show the usefulness of machine-learning approaches to analyze large-scale qualitative health data from online settings. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
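
    As a rough illustration of the classification step described above (not the authors' exact pipeline), the sketch below trains a naive Bayes text classifier on a hand-labelled set of posts and discards predictions below the 75% probability cut-off mentioned in the abstract; the example posts and labels are hypothetical.

```python
# Minimal sketch of a Bayesian post classifier with a 75% probability cut-off.
# The training posts and labels below are hypothetical stand-ins for the
# hand-coded training set described in the abstract.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "how do I adjust my insulin dose before exercise",
    "what dose of metformin do you take with breakfast",
    "feeling so alone since the diagnosis, just need to vent",
    "scared and overwhelmed today, thanks for listening",
]
train_labels = ["informational", "informational", "emotional", "emotional"]

vectorizer = CountVectorizer(stop_words="english")
clf = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

def classify(post, threshold=0.75):
    """Return the predicted class, or None if no class reaches the threshold."""
    probs = clf.predict_proba(vectorizer.transform([post]))[0]
    best = probs.argmax()
    return clf.classes_[best] if probs[best] >= threshold else None

print(classify("any advice on insulin dose timing?"))
```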

  8. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have set out a detailed view of the large-scale communications architecture of the IOT. In fact, the non-uniform technologies between IPv6 and access points have led to a lack of broad principles for large-scale communications architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  9. Are large-scale flow experiments informing the science and management of freshwater ecosystems?

    Science.gov (United States)

    Olden, Julian D.; Konrad, Christopher P.; Melis, Theodore S.; Kennard, Mark J.; Freeman, Mary C.; Mims, Meryl C.; Bray, Erin N.; Gido, Keith B.; Hemphill, Nina P.; Lytle, David A.; McMullen, Laura E.; Pyron, Mark; Robinson, Christopher T.; Schmidt, John C.; Williams, John G.

    2013-01-01

    Greater scientific knowledge, changing societal values, and legislative mandates have emphasized the importance of implementing large-scale flow experiments (FEs) downstream of dams. We provide the first global assessment of FEs to evaluate their success in advancing science and informing management decisions. Systematic review of 113 FEs across 20 countries revealed that clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam-operating policies. Furthermore, changes to dam operations were three times less likely when FEs were conducted primarily for scientific purposes. Despite the recognized importance of riverine flow regimes, four-fifths of FEs involved only discrete flow events. Over three-quarters of FEs documented both abiotic and biotic outcomes, but only one-third examined multiple taxonomic responses, thus limiting how FE results can inform holistic dam management. Future FEs will present new opportunities to advance scientifically credible water policies.

  10. Deployment of Wireless Sensor Networks in Crop Storages

    DEFF Research Database (Denmark)

    Juul, Jakob Pilegaard; Green, Ole; Jacobsen, Rune Hylsberg

    2015-01-01

    of a wireless sensor network based system that provides continuous, automatic, and up-to-date information on a crop storage, while presenting the data in an easily accessible manner, is also described. The design decisions, challenges, and practical experiences from real-world large scale deployment...

  11. Large mass storage facility

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, Arnold M.

    1978-08-01

    This is the final report of a study group organized to investigate questions surrounding the acquisition of a large mass storage facility. The programmatic justification for such a system at Brookhaven is reviewed. Several candidate commercial products are identified and discussed. A draft of a procurement specification is developed. Some thoughts on possible new directions for computing at Brookhaven are also offered, although this topic was addressed outside of the context of the group's deliberations. 2 figures, 3 tables.

  12. Assessment of Future Whole-System Value of Large-Scale Pumped Storage Plants in Europe

    Directory of Open Access Journals (Sweden)

    Fei Teng

    2018-01-01

    Full Text Available This paper analyses the impacts and benefits of the pumped storage plant (PSP) and its upgrade to variable speed on generation and transmission capacity requirements, capital costs, system operating costs and carbon emissions in the future European electricity system. The combination of a deterministic system planning tool, Whole-electricity System Investment Model (WeSIM), and a stochastic system operation optimisation tool, Advanced Stochastic Unit Commitment (ASUC), is used to analyse the whole-system value of PSP technology and to quantify the impact of European balancing market integration and other competing flexible technologies on the value of the PSP. Case studies on the Pan-European system demonstrate that PSPs can reduce the total system cost by up to €13 billion per annum by 2050 in a scenario with a high share of renewables. Upgrading the PSP to variable-speed drive enhances its long-term benefits by 10–20%. On the other hand, balancing market integration across Europe may potentially reduce the overall value of the variable-speed PSP, although the effect can vary across different European regions. The results also suggest that large-scale deployment of demand-side response (DSR) leads to a significant reduction in the value of PSPs, while the value of PSPs increases by circa 18% when the total European interconnection capacity is halved. The benefit of PSPs in reducing emissions is relatively negligible by 2030 but constitutes around 6–10% of total annual carbon emissions from the European power sector by 2050.

  13. Risk Management of Large RC Structures within Spatial Information System

    DEFF Research Database (Denmark)

    Qin, Jianjun; Faber, Michael Havbro

    2012-01-01

    Abstract: The present article addresses the development of a spatial information system (SIS), which aims to facilitate risk management of large-scale concrete structures. The formulation of the SIS is based on ideas developed in the context of indicator-based risk modeling for concrete structures subject to corrosion and geographical information system based risk modeling concerning large-scale risk management. The term "risk management" here refers in particular to the process of condition assessment and optimization of the inspection and repair activities. The SIS facilitates the storage and handling of all information relevant to the risk management. The probabilistic modeling utilized in the condition assessment is based on a Bayesian hierarchical modeling philosophy. It facilitates the updating of risks as well as optimizing inspection plans whenever new information about the condition...

  14. Emergent Semantics Interoperability in Large-Scale Decentralized Information Systems

    CERN Document Server

    Cudré-Mauroux, Philippe

    2008-01-01

    Peer-to-peer systems are evolving with new information-system architectures, leading to the idea that the principles of decentralization and self-organization will offer new approaches in informatics, especially for systems that scale with the number of users or for which central authorities do not prevail. This book describes a new way of building global agreements (semantic interoperability) based only on decentralized, self-organizing interactions.

  15. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials science projects.
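
    A minimal sketch of the kind of machine-learning surrogate described above, under the assumption that each MOF is represented by a small vector of structural descriptors and the target is a gas-uptake value; the descriptors and uptake model here are synthetic, so the numbers only illustrate how prediction accuracy grows with training-set size.

```python
# Sketch of a machine-learning surrogate for MOF gas-storage screening.
# Descriptors and uptake values are synthetic; real work would use computed or
# experimental descriptors (pore volume, surface area, functional groups, ...).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=(n, 4))          # toy descriptors per MOF
y = 3*X[:, 0] + 2*X[:, 1]**2 - X[:, 2] + 0.1*rng.normal(size=n)  # toy uptake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for m in (50, 200, 800, len(X_tr)):   # accuracy improves with sample size
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:m], y_tr[:m])
    print(m, round(r2_score(y_te, model.predict(X_te)), 3))
```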

  16. Information contained within the large scale gas injection test (Lasgit) dataset exposed using a bespoke data analysis tool-kit

    International Nuclear Information System (INIS)

    Bennett, D.P.; Thomas, H.R.; Cuss, R.J.; Harrington, J.F.; Vardon, P.J.

    2012-01-01

    Document available in extended abstract form only. The Large Scale Gas Injection Test (Lasgit) is a field scale experiment run by the British Geological Survey (BGS) and is located approximately 420 m underground at SKB's Aespoe Hard Rock Laboratory (HRL) in Sweden. It has been designed to study the impact on safety of gas build-up within a KBS-3V concept high level radioactive waste repository. Lasgit has been in almost continuous operation for approximately seven years and is still underway. An analysis of the dataset arising from the Lasgit experiment, with particular attention to the smaller scale features and phenomena recorded, has been undertaken in parallel to the macro scale analysis performed by the BGS. Lasgit is a highly instrumented, frequently sampled and long-lived experiment, leading to a substantial dataset containing in excess of 14.7 million data points. The data are anticipated to include a wealth of information, regarding both overall processes and smaller scale or 'second order' features. Due to the size of the dataset, coupled with the detailed analysis required and the reduction in subjectivity associated with measurement compared to observation, computational analysis is essential. Moreover, due to the length of operation and complexity of experimental activity, the Lasgit dataset is not typically suited to 'out of the box' time series analysis algorithms. In particular, the features that are not suited to standard algorithms include non-uniformities due to (deliberate) changes in sample rate at various points in the experimental history and missing data due to hardware malfunction/failure causing interruption of logging cycles. To address these features, a computational tool-kit capable of performing an Exploratory Data Analysis (EDA) on long-term, large-scale datasets with non-uniformities has been developed. Particular tool-kit abilities include: the parameterization of signal variation in the dataset

  17. Development of large scale production of Nd-doped phosphate glasses for megajoule-scale laser systems

    International Nuclear Information System (INIS)

    Ficini, G.; Campbell, J.H.

    1996-01-01

    Nd-doped phosphate glasses are the preferred gain medium for high-peak-power lasers used for Inertial Confinement Fusion research because they have excellent energy storage and extraction characteristics. In addition, these glasses can be manufactured defect-free in large sizes and at relatively low cost. To meet the requirements of the future mega-joule size lasers, advanced laser glass manufacturing methods are being developed that would enable laser glass to be continuously produced at the rate of several thousand large (790 x 440 x 44 mm³) plates of glass per year. This represents more than a 10 to 100-fold improvement in the scale of the present manufacturing technology.

  18. Large-scale Health Information Database and Privacy Protection

    OpenAIRE

    YAMAMOTO, Ryuichi

    2016-01-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law...

  19. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  20. A self-scaling, distributed information architecture for public health, research, and clinical care.

    Science.gov (United States)

    McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D

    2007-01-01

    This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.

  1. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  2. Information Storage and Management Storing, Managing, and Protecting Digital Information

    CERN Document Server

    EMC

    2009-01-01

    The spiraling growth of digital information makes the ISM book a "must have" addition to your IT reference library. This exponential growth has driven information management technology to new levels of sophistication and complexity, exposing a skills gap that challenges IT managers and professionals alike. The ISM book, written by storage professionals from EMC Corporation, takes an 'open' approach to teaching information storage and management, focusing on concepts and principles – rather than product specifics – that can be applied in all IT environments. The book enables existing

  3. Battery energy storage systems: Assessment for small-scale renewable energy integration

    Energy Technology Data Exchange (ETDEWEB)

    Nair, Nirmal-Kumar C.; Garimella, Niraj [Power Systems Group, Department of Electrical and Computer Engineering, The University of Auckland, 38 Princes Street, Science Centre, Auckland 1142 (New Zealand)

    2010-11-15

    Concerns arising due to the variability and intermittency of renewable energy sources while integrating with the power grid can be mitigated to an extent by incorporating a storage element within the renewable energy harnessing system. Thus, battery energy storage systems (BESS) are likely to have a significant impact on the small-scale integration of renewable energy sources into commercial buildings and residential dwellings. These storage technologies not only enable improvements in consumption levels from renewable energy sources but also provide a range of technical and monetary benefits. This paper provides a modelling framework to quantify the associated benefits of renewable resource integration, followed by an overview of various small-scale energy storage technologies. A simple, practical and comprehensive assessment of battery energy storage technologies for small-scale renewable applications based on their technical merit and economic feasibility is presented. Software packages such as Simulink and HOMER provide the platforms for the technical and economic assessments of the battery technologies, respectively. (author)

  4. A family of conjugate gradient methods for large-scale nonlinear equations.

    Science.gov (United States)

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs only low storage and the subproblem can be easily solved. Compared with existing solution methods for this problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
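
    The following sketch shows the generic derivative-free conjugate-gradient projection framework that methods of this family typically follow (a CG-type search direction, a backtracking line search, and a hyperplane projection step); the parameter choices, the Fletcher-Reeves-type coefficient and the monotone test mapping are illustrative assumptions rather than the exact family proposed in the paper.

```python
# Sketch of a derivative-free conjugate-gradient projection iteration for a
# monotone system F(x) = 0.  The direction / line-search / projection structure
# is the generic hyperplane-projection framework; the parameters, the
# Fletcher-Reeves-type beta and the test mapping are illustrative assumptions.
import numpy as np

def F(x):
    return x + np.sin(x)         # a simple monotone test mapping with root x = 0

def cg_projection(F, x, tol=1e-8, max_iter=500, rho=0.5, sigma=1e-4):
    Fx = F(x)
    d = -Fx                      # first direction: steepest-descent-like
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        alpha = 1.0              # backtrack until -F(z)^T d >= sigma*alpha*||d||^2
        while True:
            z = x + alpha * d
            if -F(z) @ d >= sigma * alpha * (d @ d) or alpha < 1e-12:
                break
            alpha *= rho
        Fz = F(z)
        # project x onto the hyperplane through z with normal F(z), which
        # separates x from the solution set of the monotone equation
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        Fx_new = F(x)
        beta = (Fx_new @ Fx_new) / (Fx @ Fx)     # Fletcher-Reeves-type coefficient
        d = -Fx_new + beta * d                   # only vectors stored: low memory
        Fx = Fx_new
    return x

x0 = np.full(1000, 2.0)
print(np.linalg.norm(F(cg_projection(F, x0))))
```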

  5. Energy storage

    Science.gov (United States)

    Kaier, U.

    1981-04-01

    Developments in the area of energy storage are characterized, in theory and in the laboratory, by the emergence of novel concepts and technologies for storing electric energy and heat. However, there are no new commercial devices on the market. New storage batteries as a basis for a wider introduction of electric cars, and latent heat storage devices with satisfactory performance as an aid for solar technology applications, are not yet commercially available. Devices for the intermediate storage of electric energy for solar electric-energy systems, and for satisfying peak-load current demands of public utility companies, are considered. In spite of many promising novel developments, there is as yet no practical alternative to the lead-acid storage battery. Attention is given to central heat storage for heat-transporting systems, small-scale heat storage installations, and large-scale technical energy-storage systems.

  6. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    Science.gov (United States)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy mostly used in current clouds incurs huge storage consumption, leading to a large storage cost, specifically for applications within the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the data reliability of large cloud datasets with a level of replication that can also serve as a cost-effective benchmark for replication. Compared to the standard three-replica approach, PRCR can reduce consumption to only a fraction of the cloud storage, starting from one-third of the storage, hence considerably minimizing the cloud storage cost.
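
    A back-of-the-envelope calculation illustrates the storage/reliability trade-off the abstract refers to. Assuming independent replica losses with probability p per period (a simplification, not PRCR's actual reliability model), k replicas survive with probability 1 - p^k, so the minimum replica count for a given reliability target can be well below the fixed three-replica scheme.

```python
# Back-of-the-envelope view of the replication/storage trade-off: with
# independent replica loss probability p per period, k replicas survive with
# probability 1 - p**k.  Numbers are hypothetical, not PRCR's actual model.
def min_replicas(p_loss, target):
    k = 1
    while 1 - p_loss**k < target:
        k += 1
    return k

p = 1e-3                        # hypothetical per-replica loss probability
for target in (0.99, 0.99999, 0.99999999):
    k = min_replicas(p, target)
    print(f"reliability {target}: {k} replica(s), i.e. {k/3:.2f}x the storage of 3 replicas")
```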

  7. Mountaineer Commerical Scale Carbon Capture and Storage (CCS) Project

    Energy Technology Data Exchange (ETDEWEB)

    Deanna Gilliland; Matthew Usher

    2011-12-31

    The Final Technical Report documents all work performed during the award period on the Mountaineer Commercial Scale Carbon Capture & Storage project. This report presents the findings and conclusions produced as a consequence of this work. As identified in the Cooperative Agreement DE-FE0002673, AEP's objective of the Mountaineer Commercial Scale Carbon Capture and Storage (MT CCS II) project is to design, build and operate a commercial scale carbon capture and storage (CCS) system capable of treating a nominal 235 MWe slip stream of flue gas from the outlet duct of the Flue Gas Desulfurization (FGD) system at AEP's Mountaineer Power Plant (Mountaineer Plant), a 1300 MWe coal-fired generating station in New Haven, WV. The CCS system is designed to capture 90% of the CO2 from the incoming flue gas using the Alstom Chilled Ammonia Process (CAP) and compress, transport, inject and store 1.5 million tonnes per year of the captured CO2 in deep saline reservoirs. Specific Project Objectives include: (1) Achieve a minimum of 90% carbon capture efficiency during steady-state operations; (2) Demonstrate progress toward capture and storage at less than a 35% increase in cost of electricity (COE); (3) Store CO2 at a rate of 1.5 million tonnes per year in deep saline reservoirs; and (4) Demonstrate commercial technology readiness of the integrated CO2 capture and storage system.

  8. Impact of small-scale storage systems on the photovoltaic penetration potential at the municipal scale

    Science.gov (United States)

    Ramirez Camargo, Luis; Dorner, Wolfgang

    2016-04-01

    The yearly cumulated technical energy generation potential of grid-connected roof-top photovoltaic power plants is significantly larger than the demand of domestic buildings in sparsely populated municipalities in central Europe. However, an energy balance with cumulated annual values does not deliver the right picture of the actual potential for photovoltaics, since these run on a highly variable energy source such as solar radiation. The mismatch between the periods of generation and demand creates hard limitations for the deployment of the theoretical energy generation potential of roof-top photovoltaics. The actual penetration of roof-top photovoltaics is restricted by the energy quality requirements of the grid and/or the available storage capacity for the electricity production beyond the coverage of own demands. In this study we evaluate to what extent small-scale storage systems can contribute to increasing the grid-connected roof-top photovoltaic penetration in domestic buildings at a municipal scale. To accomplish this, we calculate, in a first step, the total technical roof-top photovoltaic energy generation potential of a municipality in high spatiotemporal resolution using a procedure that relies on geographic information systems. Subsequently, we constrain the set of potential photovoltaic plants to the ones that would be necessary to cover the total yearly demand of the municipality. We assume that the photovoltaic plants with the highest yearly yield are the ones that should be installed. For this sub-set of photovoltaic plants we consider five scenarios: 1) no storage; 2) one 7 kWh battery is installed in every building with a roof-top photovoltaic plant; 3) one 10 kWh battery is installed in every building with a roof-top photovoltaic plant; 4) one 7 kWh battery is installed in every domestic building in the municipality; 5) one 10 kWh battery is installed in every domestic building in the municipality. Afterwards we evaluate the energy balance of the
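
    The scenario comparison above boils down to an hourly energy balance per building. The sketch below runs such a balance for one hypothetical building with synthetic generation and demand profiles, reporting the share of PV output consumed on site without storage and with a 7 kWh or 10 kWh battery; the profiles, efficiency and sizes are illustrative assumptions only.

```python
# Hourly energy balance for one building: roof-top PV vs. demand, with and
# without a small battery.  Profiles and efficiency are synthetic stand-ins
# for the measured/simulated data used in the study.
import numpy as np

hours = np.arange(24)
pv = np.clip(5.0*np.sin((hours - 6)/12*np.pi), 0, None)    # kW, daylight only
demand = 0.5 + 0.8*np.exp(-((hours - 19.0)**2)/8.0)        # kW, evening peak

def self_consumption(pv, demand, capacity_kwh=0.0, eff=0.9):
    soc, used_onsite = 0.0, 0.0
    for g, d in zip(pv, demand):                # 1-hour steps, so kW ~ kWh
        direct = min(g, d)
        surplus, deficit = g - direct, d - direct
        charge = min(surplus, capacity_kwh - soc)           # store surplus
        soc += charge*eff
        discharge = min(deficit, soc)                       # cover deficit from battery
        soc -= discharge
        used_onsite += direct + discharge
    return used_onsite / pv.sum()

for cap in (0.0, 7.0, 10.0):
    print(f"battery {cap:4.1f} kWh: self-consumption {self_consumption(pv, demand, cap):.0%}")
```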

  9. ECOLOGICAL RESEARCH IN THE LARGE-SCALE BIOSPHERE–ATMOSPHERE EXPERIMENT IN AMAZONIA: EARLY RESULTS.

    Science.gov (United States)

    M. Keller; A. Alencar; G. P. Asner; B. Braswell; M. Bustamente; E. Davidson; T. Feldpausch; E. Fernandes; M. Goulden; P. Kabat; B. Kruijt; F. Luizao; S. Miller; D. Markewitz; A. D. Nobre; C. A. Nobre; N. Priante Filho; H. Rocha; P. Silva Dias; C. von Randow; G. L. Vourlitis

    2004-01-01

    The Large-scale Biosphere–Atmosphere Experiment in Amazonia (LBA) is a multinational, interdisciplinary research program led by Brazil. Ecological studies in LBA focus on how tropical forest conversion, regrowth, and selective logging influence carbon storage, nutrient dynamics, trace gas fluxes, and the prospect for sustainable land use in the Amazon region. Early...

  10. NOSQL FOR STORAGE AND RETRIEVAL OF LARGE LIDAR DATA COLLECTIONS

    Directory of Open Access Journals (Sweden)

    J. Boehm

    2015-08-01

    Full Text Available Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning, by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.
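
    A minimal sketch of the file-centric pattern described above: each LiDAR tile becomes one MongoDB document holding its metadata and bounding box (as GeoJSON), the file payload goes into GridFS, and a 2dsphere index enables the bounding-box queries. Host, database, collection and field names are assumptions for illustration, not the authors' schema.

```python
# Sketch of the tile-per-document pattern for LiDAR collections in MongoDB:
# tile metadata (including the bounding box as GeoJSON) lives in a document,
# the LAS/LAZ payload goes into GridFS.  Host, database and field names are
# illustrative assumptions.
from pymongo import MongoClient, GEOSPHERE
import gridfs

client = MongoClient("mongodb://localhost:27017")
db = client["lidar"]
fs = gridfs.GridFS(db)                      # distributed file store for payloads
tiles = db["tiles"]
tiles.create_index([("bbox", GEOSPHERE)])   # enables spatial queries on bounding boxes

def insert_tile(path, bbox_polygon, n_points):
    with open(path, "rb") as f:
        file_id = fs.put(f, filename=path)  # store the raw tile file
    tiles.insert_one({"filename": path,
                      "points": n_points,
                      "bbox": {"type": "Polygon", "coordinates": [bbox_polygon]},
                      "gridfs_id": file_id})

def tiles_intersecting(region_polygon):
    query = {"bbox": {"$geoIntersects": {"$geometry":
             {"type": "Polygon", "coordinates": [region_polygon]}}}}
    return list(tiles.find(query, {"filename": 1, "gridfs_id": 1}))
```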

  11. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    Science.gov (United States)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning, by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.

  12. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

    A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide...
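
    For reference, the large-scale trust-region subproblem that LSTRS targets has the standard form below (written here from the general literature, not copied from the paper):

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^{\top} H x + g^{\top} x
\quad \text{subject to} \quad \lVert x \rVert_2 \le \Delta ,
```

    where H is the (possibly indefinite) Hessian, g the gradient and Δ the trust-region radius; the matrix-free character of LSTRS comes from accessing H only through products Hv.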

  13. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
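
    A schematic version of the scale separation, heavily simplified from the paper's derivation: assume the small-scale model is the ecological diffusion equation with motility μ(x) varying on the 10-100 m scale. Writing v = μu and averaging over the fast scale suggests a large-scale equation governed by a harmonic-type mean of the motility; the exact averaging used in the paper may differ.

```latex
\frac{\partial u}{\partial t} = \nabla^{2}\!\left[\mu(x)\,u\right],
\qquad v = \mu u
\;\;\Rightarrow\;\;
\frac{\partial v}{\partial t} \approx \bar{\mu}\,\nabla^{2} v,
\qquad \bar{\mu} = \big\langle \mu^{-1} \big\rangle^{-1},
```

    so that, on the large scale, populations spread with an effective motility set by a harmonic mean while densities concentrate where the local motility is low (u = v/μ).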

  14. Distributed system for large-scale remote research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2002-01-01

    In advanced photon research, large-scale simulations and high-resolution observations are powerful tools. In numerical and real experiments, real-time visualization and steering systems are considered a promising method of data analysis. This approach is valid for typical one-off analyses or for low-cost experiments and simulations. In research on an unknown problem, however, the output data must be analyzed many times, because a conclusive analysis is difficult to reach in a single pass. Consequently, output data should be filed so that they can be referred to and analyzed at any time. To support such research, automatic functions are needed for transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be a functionally distributed system. (author)

  15. A family of conjugate gradient methods for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Dexiang Feng

    2017-09-01

    Full Text Available Abstract In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.

  16. Large scale use of brazing and high temperature brazing for the fabrication of the 6.4 km long vacuum system of the HERA electron storage ring

    International Nuclear Information System (INIS)

    Ballion, R.; Boster, J.; Giesske, W.; Hartwig, H.; Jagnow, D.; Kouptsidis, J.; Pape, R.; Prohl, W.; Schumann, G.; Schwartz, M.; Iversen, K.; Mucklenbeck, J.

    1989-01-01

    The 6.4 km long vacuum system for electrons in the large storage ring HERA at Hamburg consists of about 1,400 components with lengths between 0.14 and 12 m. The vacuum components are mainly made from variously shaped tubes of the copper alloy CuSn2. This alloy combines sufficient mechanical strength with the high thermal conductivity needed to remove the 6 MW of power dissipated by the synchrotron light. The vacuum components additionally contain parts made from stainless steel, such as flanges, chambers for pumps, beam monitors, etc. All of these parts are connected in a vacuum-tight manner, and on a large scale, by using brazing and high temperature brazing, either in a vacuum or in a reducing gas atmosphere. (orig.)

  17. Mountaineer Commercial Scale Carbon Capture and Storage Project Topical Report: Preliminary Public Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Guy Cerimele

    2011-09-30

    This Preliminary Public Design Report consolidates for public use nonproprietary design information on the Mountaineer Commercial Scale Carbon Capture & Storage project. The report is based on the preliminary design information developed during the Phase I - Project Definition Phase, spanning the time period of February 1, 2010 through September 30, 2011. The report includes descriptions and/or discussions for: (1) DOE's Clean Coal Power Initiative, overall project & Phase I objectives, and the historical evolution of DOE and American Electric Power (AEP) sponsored projects leading to the current project; (2) Alstom's Chilled Ammonia Process (CAP) carbon capture retrofit technology and the carbon storage and monitoring system; (3) AEP's retrofit approach in terms of plant operational and integration philosophy; (4) The process island equipment and balance of plant systems for the CAP technology; (5) The carbon storage system, addressing injection wells, monitoring wells, system monitoring and controls logic philosophy; (6) Overall project estimate that includes the overnight cost estimate, cost escalation for future year expenditures, and major project risks that factored into the development of the risk based contingency; and (7) AEP's decision to suspend further work on the project at the end of Phase I, notwithstanding its assessment that the Alstom CAP technology is ready for commercial demonstration at the intended scale.

  18. Analysis of Large- Capacity Water Heaters in Electric Thermal Storage Programs

    Energy Technology Data Exchange (ETDEWEB)

    Cooke, Alan L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Anderson, David M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Winiarski, David W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carmichael, Robert T. [Cadeo Group, Washington D. C. (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fisher, Andrew R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-17

    This report documents a national impact analysis of large tank heat pump water heaters (HPWH) in electric thermal storage (ETS) programs and conveys the findings related to concerns raised by utilities regarding the ability of large-tank heat pump water heaters to provide electric thermal storage services.

  19. Carbon dioxide emissions effects of grid-scale electricity storage in a decarbonizing power system

    Science.gov (United States)

    Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias

    2018-01-01

    While grid-scale electricity storage (hereafter ‘storage’) could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electricity Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.
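
    The sign of the storage emissions effect can be illustrated with a toy marginal-emissions calculation (hypothetical emission factors, not the paper's ERCOT dispatch model): storage draws extra energy to cover round-trip losses at the charging-hour marginal emission rate and displaces generation at the discharging-hour marginal rate.

```python
# Toy arithmetic for the sign of the storage emissions effect: storage charges
# from the marginal generator off-peak and displaces the marginal generator at
# peak; round-trip losses require extra generation.  Emission factors (tCO2/MWh)
# are hypothetical.
def storage_emission_delta(charge_ef, discharge_ef, energy_mwh=1.0, rt_eff=0.8):
    """tCO2 change from shifting `energy_mwh` of delivered energy through storage."""
    charged = energy_mwh / rt_eff                 # extra energy drawn to cover losses
    return charged * charge_ef - energy_mwh * discharge_ef

print(storage_emission_delta(charge_ef=1.0, discharge_ef=0.4))   # charge on coal, displace gas: emissions rise
print(storage_emission_delta(charge_ef=0.0, discharge_ef=0.4))   # charge on curtailed wind, displace gas: emissions fall
```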

  20. Information handbook on independent spent fuel storage installations

    International Nuclear Information System (INIS)

    Raddatz, M.G.; Waters, M.D.

    1996-12-01

    In this information handbook, the staff of the U.S. Nuclear Regulatory Commission describes (1) background information regarding the licensing and history of independent spent fuel storage installations (ISFSIs), (2) a discussion of the licensing process, (3) a description of all currently approved or certified models of dry cask storage systems (DCSSs), and (4) a description of sites currently storing spent fuel in an ISFSI. Storage of spent fuel at ISFSIs must be in accordance with the provisions of 10 CFR Part 72. The staff has provided this handbook for information purposes only. The accuracy of any information herein is not guaranteed. For verification or for more details, the reader should refer to the respective docket files for each DCSS and ISFSI site. The information in this handbook is current as of September 1, 1996

  1. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  2. Long term storage of finished gasolines in large salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, J.W.J. [German Strategic Petroleum Reserve, Hamburg (Germany)

    1995-05-01

    Strategic oil stocking requires large, low-cost storage facilities. Crude oil has been held in very large salt mines and/or artificially made salt caverns for many years, notably in Europe and the USA. Following crude oil, gasoils and refinery light feedstocks have also been tried. Military organisations have tried jet fuel, and early cases of underground aviation gasoline storage in steel tanks have been reported.

  3. Large underground radioactive waste storage tanks successfully cleaned at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Billingsley, K.; Burks, B.L.; Johnson, M.; Mims, C.; Powell, J.; Hoesen, D. van

    1998-05-01

    Waste retrieval operations were successfully completed in two large underground radioactive waste storage tanks in 1997. The US Department of Energy (DOE) and the Gunite Tanks Team worked cooperatively during two 10-week waste removal campaigns and removed approximately 58,300 gallons of waste from the tanks. About 100 gallons of a sludge and liquid heel remain in each of the 42,500 gallon tanks. These tanks are 25 ft. in diameter and 11 ft. deep, and are located in the North Tank Farm in the center of Oak Ridge National Laboratory. Less than 2% of the radioactive contaminants remain in the tanks, proving the effectiveness of the Radioactive Tank Cleaning System, and accomplishing the first field-scale cleaning of contaminated underground storage tanks with a robotic system in the DOE complex

  4. National Waste Terminal Storage Program Information Management Plan. Volume I. Management summary

    International Nuclear Information System (INIS)

    1977-05-01

    A comprehensive information management plan is needed for the processing of the large amount of documentation that will accumulate in the National Waste Terminal Storage program over the next decade. The plan will apply to all documentation from OWI contractors, subcontractors, and suppliers, and to external documentation from OWI organizations

  5. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  6. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  7. Optical information storage

    International Nuclear Information System (INIS)

    Woike, T.

    1996-01-01

    In order to increase storage capacity and data transfer velocity by about three orders of magnitude compared to CD or magnetic disc it is necessary to work with optical techniques, especially with holography. About 100 TByte can be stored in a wafer of an area of 50 cm² via holograms, which corresponds to a density of 2×10⁹ Byte/mm². Every hologram contains data of 1 MByte, so that parallel processing is possible for read-out. Using high-speed CCD arrays a read-out velocity of 1 MByte/μs can be reached. Further, holographic techniques are very important in solid state physics. We will discuss the existence of a space charge field in Sr1-xBaxNb2O6 doped with cerium and the physical properties of metastable states, which are suited for information storage. (author) 19 figs., 9 refs

  8. Third generation participatory design in health informatics--making user participation applicable to large-scale information system projects.

    Science.gov (United States)

    Pilemalm, Sofie; Timpka, Toomas

    2008-04-01

    Participatory Design (PD) methods in the field of health informatics have mainly been applied to the development of small-scale systems with homogeneous user groups in local settings. Meanwhile, health service organizations are becoming increasingly large and complex in character, making it necessary to extend the scope of the systems that are used for managing data, information and knowledge. This study reports participatory action research on the development of a PD framework for large-scale system design. The research was conducted in a public health informatics project aimed at developing a system for 175,000 users. A renewed PD framework was developed in response to six major limitations experienced to be associated with the existing methods. The resulting framework preserves the theoretical grounding, but extends the toolbox to suit applications in networked health service organizations. Future research should involve evaluations of the framework in other health service settings where comprehensive HISs are developed.

  9. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  10. SCALE6.1 Hybrid Shielding Methodology For The Spent Fuel Dry Storage

    International Nuclear Information System (INIS)

    Matijevic, M.; Pevec, D.; Trontl, K.

    2015-01-01

    The SCALE6.1/MAVRIC hybrid deterministic-stochastic shielding methodology was used for dose rate calculations of a generic spent fuel dry storage installation. The neutron-gamma dose rates around the cask array were calculated over a large problem domain in order to determine the boundary of the controlled area. The FW-CADIS methodology, based on deterministic forward and adjoint solutions over the phase space, was used to obtain optimized, global Monte Carlo results over the mesh tally. The cask inventory was modeled as a homogenized material corresponding to 20 fuel assemblies from a standard mid-sized PWR reactor. The global simulation model was an array of 32 casks in 2 rows with concrete foundations and external air, which makes a large spatial domain for shielding calculations. The dose rates around the casks were determined using the FW-CADIS method with a weighted adjoint source and a mesh tally covering the portion of the spatial domain of interest. The conservatively obtained dose rates give an upper bound, since the reduction of source activity once sequential filling of the dry storage begins was not taken into account. The effective area of the dry storage installation can be additionally reduced by lowering the concrete foundation below ground level, raising embankments, and adding extra concrete walls, which would further lower the dominant gamma dose rates. (author).
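
    For orientation, the CADIS relations that this hybrid approach builds on are summarised below in their textbook form (not a SCALE/MAVRIC input; FW-CADIS additionally weights the adjoint source using a forward deterministic solve so that relative errors are balanced over the whole mesh tally):

```latex
R = \int \phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)\, \mathrm{d}V\, \mathrm{d}E,
\qquad
\hat{q}(\vec{r},E) = \frac{\phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)}{R},
\qquad
\bar{w}(\vec{r},E) = \frac{R}{\phi^{\dagger}(\vec{r},E)},
```

    where φ† is the adjoint (importance) flux from the deterministic solve, q the physical source, q̂ the biased source and w̄ the weight-window target values used to steer the Monte Carlo particles toward the tally region.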

  11. Optical information storage

    Energy Technology Data Exchange (ETDEWEB)

    Woike, T [Koeln Univ., Inst. fuer Kristallography, Koeln (Germany)

    1996-11-01

    In order to increase storage capacity and data transfer velocity by about three orders of magnitude compared to CD or magnetic disc it is necessary to work with optical techniques, especially with holography. About 100 TByte can be stored in a waver of an area of 50 cm{sup 2} via holograms which corresponds to a density of 2.10{sup 9} Byte/mm{sup 2}. Every hologram contains data of 1 MByte, so that parallel-processing is possible for read-out. Using high-speed CCD-arrays a read-out velocity of 1 MByte/{mu}sec can be reached. Further, holographic technics are very important in solid state physics. We will discuss the existence of a space charge field in Sr{sub 1-x}Ba{sub x}Nb{sub 2}O{sub 6} doped with cerium and the physical properties of metastable states, which are suited for information storage. (author) 19 figs., 9 refs.

  12. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
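    The hierarchical idea above can be made concrete with a small sketch. The following is our own minimal illustration, not the authors' implementation: a dictionary is learnt for a parent node and for one child node with scikit-learn, and the child's descriptors are sparse-coded over the concatenation of both, so the lower layer sees multi-scale atoms. All sizes and parameters are arbitrary placeholders.

      # Minimal sketch (not the paper's method): per-node dictionaries on a two-level
      # category hierarchy; a child is coded over its own atoms plus its parent's.
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

      rng = np.random.default_rng(0)
      X_root = rng.standard_normal((500, 64))          # all training descriptors (synthetic)
      X_leaf = X_root[:250]                            # descriptors of one child category group

      def learn_dictionary(X, n_atoms):
          model = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
          model.fit(X)
          return model.components_                     # shape (n_atoms, n_features)

      D_root = learn_dictionary(X_root, 32)            # coarse-scale atoms shared by all classes
      D_leaf = learn_dictionary(X_leaf, 32)            # finer atoms for this branch
      D_joint = np.vstack([D_root, D_leaf])            # child inherits the parent dictionary

      codes = SparseCoder(dictionary=D_joint,
                          transform_algorithm="lasso_lars",
                          transform_alpha=0.5).transform(X_leaf)
      print(codes.shape)                               # (samples, root+leaf atoms) -> classifier input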

  13. Rank Order Coding: a Retinal Information Decoding Strategy Revealed by Large-Scale Multielectrode Array Retinal Recordings.

    Science.gov (United States)

    Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne

    2016-01-01

    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency- based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
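    As a rough illustration of a rank-based population code (our own simulated data, not the recordings described above), the sketch below converts first-spike latencies into rank vectors and shows that the rank code is invariant to the absolute latency scale, in contrast to a latency code.

      # Illustrative sketch only: rank-order code from simulated first-spike latencies.
      import numpy as np

      rng = np.random.default_rng(1)
      n_cells, n_trials = 200, 20

      # hypothetical first-spike latencies (ms) and spike counts per RGC and trial
      latencies = rng.gamma(shape=5.0, scale=10.0, size=(n_trials, n_cells))
      counts = rng.poisson(lam=3.0, size=(n_trials, n_cells))

      def rank_code(latency_row):
          """Relative activity: the order in which cells fire, not when or how much."""
          order = np.argsort(latency_row)              # earliest-firing cell first
          ranks = np.empty_like(order)
          ranks[order] = np.arange(len(latency_row))   # rank of each cell in the wave
          return ranks

      rank_codes = np.array([rank_code(trial) for trial in latencies])

      # a decoder would correlate rank vectors across trials/stimuli; here we only
      # show that the rank code discards the absolute latency scale:
      assert np.array_equal(rank_code(latencies[0]), rank_code(latencies[0] * 2.0))
      print(rank_codes.shape, counts.shape)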

  14. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
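    A hedged sketch of the prototype idea, using a Nystrom-style low-rank approximation built on k-means centres rather than the authors' own selection criterion; all sizes and kernel parameters are placeholders.

      # Sketch under stated assumptions (not the PVM code itself): k-means centres act as
      # prototypes, giving a low-rank stand-in for the full n-by-n graph/kernel matrix.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      X = rng.standard_normal((2000, 10))              # mostly unlabelled data (synthetic)
      m = 50                                           # number of prototype vectors

      prototypes = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_

      K_nm = rbf_kernel(X, prototypes, gamma=0.1)      # n x m cross-kernel
      K_mm = rbf_kernel(prototypes, prototypes, gamma=0.1)

      # Low-rank approximation K ~ K_nm @ pinv(K_mm) @ K_nm.T; the n x n matrix is never formed.
      W = np.linalg.pinv(K_mm)

      def apply_K(v):
          """Multiply the approximate kernel matrix by a vector in O(n*m)."""
          return K_nm @ (W @ (K_nm.T @ v))

      v = rng.standard_normal(X.shape[0])
      print(apply_K(v)[:5])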

  15. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series

  16. Licensing of spent fuel dry storage and consolidated rod storage

    International Nuclear Information System (INIS)

    Bailey, W.J.

    1990-02-01

    The results of this study, performed by Pacific Northwest Laboratory (PNL) and sponsored by the US Department of Energy (DOE), respond to the nuclear industry's recommendation that a report be prepared that collects and describes the licensing issues (and their resolutions) that confront a new applicant requesting approval from the US Nuclear Regulatory Commission (NRC) for dry storage of spent fuel or for large-scale storage of consolidated spent fuel rods in pools. The issues are identified in comments, questions, and requests from the NRC during its review of applicants' submittals. Included in the report are discussions of (1) the 18 topical reports on cask and module designs for dry storage fuel that have been submitted to the NRC, (2) the three license applications for dry storage of spent fuel at independent spent fuel storage installations (ISFSIs) that have been submitted to the NRC, and (3) the three applications (one of which was later withdrawn) for large-scale storage of consolidated fuel rods in existing spent fuel storage pools at reactors that were submitted to the NRC. For each of the applications submitted, examples of some of the issues (and suggestions for their resolutions) are described. The issues and their resolutions are also covered in detail in an example in each of the three subject areas: (1) the application for the CASTOR V/21 dry spent fuel storage cask, (2) the application for the ISFSI for dry storage of spent fuel at Surry, and (3) the application for full-scale wet storage of consolidated spent fuel at Millstone-2. The conclusions in the report include examples of major issues that applicants have encountered. Recommendations for future applicants to follow are listed. 401 refs., 26 tabs

  17. Biogas infrastructure from farm-scale to regional scale, line-pack storage in biogas grids

    NARCIS (Netherlands)

    Hengeveld, Evert Jan

    2016-01-01

    Biogas infrastructure from farm-scale to regional scale, line-pack storage in biogas grids. The number of local and regional initiatives encouraging the production and use of regionally produced energy is growing. In these new developments biogas can play a role, as a producer of energy, but also in

  18. Development of Seasonal Storage in Denmark

    DEFF Research Database (Denmark)

    Heller, Alfred

    2000-01-01

    National survey on seasonal (thermal, large-scale) storage activities in Denmark. A storage programme under the Danish Energy Agency. Programme background, objectives, activities, projects and results.Technologies presented: Pit water storage, gravel water storage with pipe heat exchangers, lining...... materials for pit and lid designs....

  19. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    Science.gov (United States)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data, is creating never imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field-scales.
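    The workflow described above can be sketched, in heavily simplified form, with the Earth Engine Python API. The snippet below is not the SSEBop implementation; it only shows the general pattern of filtering a Landsat collection and reducing it over a region in the cloud. It assumes an authenticated Earth Engine account, and the collection ID, band name and study rectangle are placeholders.

      # Hedged illustration of the cloud-computing pattern, not the SSEBop model.
      import ee

      ee.Initialize()

      region = ee.Geometry.Rectangle([-87.5, 40.0, -86.5, 41.0])   # hypothetical study area

      # Annual Landsat 8 surface-reflectance composite over the region.
      landsat = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterDate("2020-01-01", "2021-01-01")
                 .filterBounds(region)
                 .median())

      # A real ET model would combine thermal bands with gridded weather data here;
      # as a stand-in we simply reduce one band over the region at 30 m resolution.
      stats = landsat.select("ST_B10").reduceRegion(
          reducer=ee.Reducer.mean(), geometry=region, scale=30, maxPixels=1e9)
      print(stats.getInfo())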

  20. High-fidelity polarization storage in a gigahertz bandwidth quantum memory

    International Nuclear Information System (INIS)

    England, D G; Michelberger, P S; Champion, T F M; Reim, K F; Lee, K C; Sprague, M R; Jin, X-M; Langford, N K; Kolthammer, W S; Nunn, J; Walmsley, I A

    2012-01-01

    We demonstrate a dual-rail optical Raman memory inside a polarization interferometer; this enables us to store polarization-encoded information at GHz bandwidths in a room-temperature atomic ensemble. By performing full process tomography on the system, we measure up to 97 ± 1% process fidelity for the storage and retrieval process. At longer storage times, the process fidelity remains high, despite a loss of efficiency. The fidelity is 86 ± 4% for 1.5 μs storage time, which is 5000 times the pulse duration. Hence, high fidelity is combined with a large time-bandwidth product. This high performance, with an experimentally simple setup, demonstrates the suitability of the Raman memory for integration into large-scale quantum networks. (paper)

  1. An Improved GRACE Terrestrial Water Storage Assimilation System For Estimating Large-Scale Soil Moisture and Shallow Groundwater

    Science.gov (United States)

    Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.

    2015-12-01

    The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model, in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even if they are characterized by a coarse spatial and temporal resolution, the observed column integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.
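    A minimal sketch of an ensemble analysis increment of the kind alluded to above, under our own simplified assumptions (one coarse observation, an ensemble Kalman-style update, synthetic numbers), illustrating how a basin-mean TWS observation can adjust finer-scale storage states.

      # Toy ensemble analysis increment; not the GEOS-5 Catchment assimilation code.
      import numpy as np

      rng = np.random.default_rng(0)
      n_state, n_ens = 100, 24                         # fine-grid storage states, ensemble size

      ens = rng.normal(loc=50.0, scale=10.0, size=(n_state, n_ens))   # forecast ensemble (mm)
      H = np.full((1, n_state), 1.0 / n_state)         # observation operator: grid-cell mean TWS
      y_obs = np.array([45.0])                         # one GRACE-like TWS observation (mm)
      R = np.array([[4.0]])                            # observation-error variance (mm^2)

      x_mean = ens.mean(axis=1, keepdims=True)
      A = ens - x_mean                                 # ensemble anomalies
      PHt = A @ (H @ A).T / (n_ens - 1)                # cross covariance P H^T
      HPHt = (H @ A) @ (H @ A).T / (n_ens - 1)         # projected covariance H P H^T
      K = PHt @ np.linalg.inv(HPHt + R)                # Kalman gain

      increment = K @ (y_obs.reshape(-1, 1) - H @ ens) # per-member analysis increments
      ens_analysis = ens + increment
      print(ens_analysis.mean(axis=1)[:5])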

  2. Long-Time Data Storage: Relevant Time Scales

    Directory of Open Access Journals (Sweden)

    Miko C. Elwenspoek

    2011-02-01

    Dynamic processes relevant for long-time storage of information about humankind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and that the remaining time that the earth is habitable for complex life is about half a billion years. A system retrieved within the next million years will be read by beings very closely related to Homo sapiens. During this time the surface of the earth will change, making it risky to place a small number of large memory systems on earth; the option to place them on the moon might be more favorable. For much longer timescales, neither option seems feasible because of geological processes on the earth and the flux of small meteorites to the moon.

  3. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  4. Comparison of Dry Gas Seasonal Storage with CO2 Storage and Re-Use Potential

    OpenAIRE

    Killerud, Marie

    2013-01-01

    To make large-scale CO2 storage economic, many groups have proposed using CO2 in EOR projects to create value for CO2 storage. However, CO2 EOR projects generally require a large and variable supply of CO2 and consequently may require temporary storage of CO2 in geological formations. In order to store CO2 at offshore sites as a source for CO2 EOR projects, the CO2 needs to be extracted from a storage site to a certain extent. Alternatively, CO2 EOR projects may be developed alongside saline aquife...

  5. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  6. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    Science.gov (United States)

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
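    A toy version of the "unit cell" reorganisation, under our own assumptions rather than the paper's data layout: particles are bucketed by the Eulerian cell that contains them, so a single cell query returns both the field value and the co-located Lagrangian particles.

      # Simplified sketch of cell-based bucketing of Lagrangian particles on an Eulerian grid.
      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(0)
      nx, ny = 64, 64                                      # Eulerian grid resolution
      field = rng.standard_normal((nx, ny))                # some Eulerian field (e.g. density)
      particles = rng.uniform(0.0, 1.0, size=(100000, 2))  # Lagrangian positions in [0,1)^2

      # map each particle to its containing cell index
      ij = np.floor(particles * [nx, ny]).astype(int)

      cell_to_particles = defaultdict(list)
      for p_idx, (i, j) in enumerate(ij):
          cell_to_particles[(i, j)].append(p_idx)

      def query_cell(i, j):
          """Return the Eulerian value and the Lagrangian particle ids in one unit cell."""
          return field[i, j], cell_to_particles.get((i, j), [])

      value, ids = query_cell(10, 20)
      print(value, len(ids))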

  7. The role of reservoir storage in large-scale surface water availability analysis for Europe

    Science.gov (United States)

    Garrote, L. M.; Granados, A.; Martin-Carrasco, F.; Iglesias, A.

    2017-12-01

    A regional assessment of current and future water availability in Europe is presented in this study. The assessment was made using the Water Availability and Adaptation Policy Analysis (WAAPA) model. The model was built on the river network derived from the Hydro1K digital elevation maps, including all major river basins of Europe. Reservoir storage volume was taken from the World Register of Dams of ICOLD, including all dams with storage capacity over 5 hm3. Potential Water Availability is defined as the maximum amount of water that could be supplied at a certain point of the river network to satisfy a regular demand under pre-specified reliability requirements. Water availability is the combined result of hydrological processes, which determine streamflow in natural conditions, and human intervention, which determines the available hydraulic infrastructure to manage water and establishes water supply conditions through operating rules. The WAAPA algorithm estimates the maximum demand that can be supplied at every node of the river network accounting for the regulation capacity of reservoirs under different management scenarios. The model was run for a set of hydrologic scenarios taken from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), where the PCRGLOBWB hydrological model was forced with results from five global climate models. Model results allow the estimation of potential water stress by comparing water availability to projections of water abstractions along the river network under different management alternatives. The set of sensitivity analyses performed showed the effect of policy alternatives on water availability and highlighted the large uncertainties linked to hydrological and anthropological processes.
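    The notion of potential water availability can be illustrated with a toy single-reservoir calculation (our simplification, not the WAAPA code): bisection on a constant demand level, with a monthly mass balance and a reliability target deciding whether the demand is feasible.

      # Toy illustration of "potential water availability" for one reservoir (synthetic inflows).
      import numpy as np

      def reliability(inflow, capacity, demand):
          """Fraction of months in which the full demand can be supplied."""
          storage, supplied = capacity / 2.0, 0
          for q in inflow:
              water = storage + q
              release = min(demand, water)
              supplied += release >= demand
              storage = min(water - release, capacity)   # spill anything above capacity
          return supplied / len(inflow)

      def potential_availability(inflow, capacity, target=0.98, iters=40):
          lo, hi = 0.0, float(inflow.mean()) * 2.0
          for _ in range(iters):                         # bisection on the demand level
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if reliability(inflow, capacity, mid) >= target else (lo, mid)
          return lo

      rng = np.random.default_rng(0)
      monthly_inflow = rng.gamma(shape=2.0, scale=30.0, size=600)   # hm3/month, synthetic
      print(potential_availability(monthly_inflow, capacity=200.0))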

  8. Are the traditional large-scale drought indices suitable for shallow water wetlands? An example in the Everglades.

    Science.gov (United States)

    Zhao, Dehua; Wang, Penghe; Zuo, Jie; Zhang, Hui; An, Shuqing; Ramesh, Reddy K

    2017-08-01

    Numerous drought indices have been developed over the past several decades. However, few studies have focused on the suitability of indices for studies of ephemeral wetlands. The objective is to answer the following question: can the traditional large-scale drought indices characterize drought severity in shallow water wetlands such as the Everglades? The question was approached from two perspectives: the available water quantity and the response of wetland ecosystems to drought. The results showed the unsuitability of traditional large-scale drought indices for characterizing the actual available water quantity based on two findings. (1) Large spatial variations in precipitation (P), potential evapotranspiration (PE), water table depth (WTD) and the monthly water storage change (SC) were observed in the Everglades; notably, the spatial variation in SC, which reflects the monthly water balance, was 1.86 and 1.62 times larger than the temporal variation between seasons and between years, respectively. (2) The large-scale water balance measured based on the water storage variation had an average indicating efficiency (IE) of only 60.01% due to the redistribution of interior water. The spatial distribution of variations in the Normalized Different Vegetation Index (NDVI) in the 2011 dry season showed significantly positive, significantly negative and weak correlations with the minimum WTD in wet prairies, graminoid prairies and sawgrass wetlands, respectively. The significant and opposite correlations imply the unsuitability of the traditional large-scale drought indices in evaluating the effect of drought on shallow water wetlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges/opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
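    The search step of such a pipeline can be prototyped in a few lines before any approximate-nearest-neighbour index is added; the sketch below (synthetic descriptors, arbitrary sizes) performs a cosine-similarity top-k lookup over a feature matrix.

      # Bare-bones retrieval sketch: feature extraction and indexing are assumed upstream.
      import numpy as np

      rng = np.random.default_rng(0)
      database = rng.standard_normal((100000, 256)).astype(np.float32)   # image descriptors
      database /= np.linalg.norm(database, axis=1, keepdims=True)        # L2-normalise once

      def retrieve(query, k=10):
          """Return indices of the k most similar database images to a query descriptor."""
          q = query / np.linalg.norm(query)
          scores = database @ q                          # cosine similarity for all images
          top = np.argpartition(-scores, k)[:k]          # unordered top-k, then sort them
          return top[np.argsort(-scores[top])]

      query_descriptor = rng.standard_normal(256).astype(np.float32)
      print(retrieve(query_descriptor, k=5))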

  10. Quantum information storage using tunable flux qubits

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Matthias; Brito, Frederico; DiVincenzo, David; Farinelli, Matthew; Keefe, George; Ketchen, Mark; Kumar, Shwetank; Milliken, Frank; Rothwell, Mary Beth; Rozen, Jim; Koch, Roger H, E-mail: msteffe@us.ibm.co [IBM Watson Research Center, Yorktown Heights, NY 10598 (United States)

    2010-02-10

    We present details and results for a superconducting quantum bit (qubit) design in which a tunable flux qubit is coupled strongly to a transmission line. Quantum information storage in the transmission line is demonstrated with a dephasing time of T{sub 2} approx 2.5 {mu}s. However, energy lifetimes of the qubit are found to be short (approx 10 ns) and not consistent with predictions. Several design and material changes do not affect qubit coherence times. In order to determine the cause of these short coherence times, we fabricated standard flux qubits based on a design which was previously successfully used by others. Initial results show significantly improved coherence times, possibly implicating losses associated with the large size of our qubit. (topical review)

  11. Regenesys utility scale energy storage. Overview report of combined energy storage and renewable generation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    The first part of the paper briefly discusses the advantages and disadvantages of various forms of renewable energy sources with respect to the United Kingdom. It discusses the intermittent nature of wind and solar power and the less intermittent nature of hydro power and energy from biomass. The need to store energy generated, particularly from the intermittent sources, is discussed with special reference to electric batteries and pumped storage. If the energy cannot be stored and delivered when required, then the commercial viability of the source will be adversely affected - the economics and how this fits with NETA are discussed briefly. The second part of the paper is an overview of some relevant literature discussing (a) how the problems of fluctuating supplies may be managed, (b) an analytical assessment of the contribution from wind farms, (c) how fluctuations in wind power can be smoothed using sodium-sulfur batteries, (d) how small generators can get together and reduce trading costs and imbalance exposure under NETA, (e) the benefits of large-scale energy storage to network management and embedded generation, (f) distribution networks, (g) embedded generation and network management issues and (h) costs and benefits of embedded generation. The work was carried out as part of the DTI New and Renewable Energy Programme managed by Future Energy Solutions.

  12. Regenesys utility scale energy storage. Overview report of combined energy storage and renewable generation

    International Nuclear Information System (INIS)

    2004-01-01

    The first part of the paper briefly discusses the advantages and disadvantages of various forms of renewable energy sources with respect to the United Kingdom. It discusses the intermittent nature of wind and solar power and the less intermittent nature of hydro power and energy from biomass. The need to store energy generated, particularly from the intermittent sources, is discussed with special reference to electric batteries and pumped storage. If the energy cannot be stored and delivered when required, then the commercial viability of the source will be adversely affected - the economics and how this fits with NETA are discussed briefly. The second part of the paper is an overview of some relevant literature discussing (a) how the problems of fluctuating supplies may be managed, (b) an analytical assessment of the contribution from wind farms, (c) how fluctuations in wind power can be smoothed using sodium-sulfur batteries, (d) how small generators can get together and reduce trading costs and imbalance exposure under NETA, (e) the benefits of large-scale energy storage to network management and embedded generation, (f) distribution networks, (g) embedded generation and network management issues and (h) costs and benefits of embedded generation. The work was carried out as part of the DTI New and Renewable Energy Programme managed by Future Energy Solutions

  13. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    OpenAIRE

    L. Gong; S. Halldin; C.-Y. Xu

    2010-01-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms...
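    A VIC-type saturation-excess scheme of the kind referred to above can be sketched as follows; this is our own simplified rendering of the standard variable-infiltration-capacity curve, not the parameterisation proposed in the paper.

      # Illustrative VIC/Xinanjiang-style fast-runoff partitioning (all numbers are placeholders).
      def vic_fast_runoff(precip, w, w_max, b):
          """
          precip : rainfall depth for the time step (mm)
          w      : current basin-average soil moisture storage (mm)
          w_max  : maximum basin-average storage capacity (mm)
          b      : shape parameter of the capacity distribution
          Returns (fast_runoff, new_storage), both in mm.
          """
          # point capacity level corresponding to the current basin-average storage
          i_max = w_max * (1.0 + b)
          i0 = i_max * (1.0 - (1.0 - w / w_max) ** (1.0 / (1.0 + b)))
          if precip + i0 >= i_max:
              runoff = precip - (w_max - w)              # whole basin saturates
          else:
              runoff = precip - (w_max - w) + w_max * (1.0 - (precip + i0) / i_max) ** (1.0 + b)
          runoff = max(runoff, 0.0)
          new_w = min(w + precip - runoff, w_max)
          return runoff, new_w

      print(vic_fast_runoff(precip=20.0, w=150.0, w_max=300.0, b=0.3))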

  14. A generic library for large scale solution of PDEs on modern heterogeneous architectures

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter

    2012-01-01

    Adapting to new programming models for modern multi- and many-core architectures requires code-rewriting and changing algorithms and data structures, in order to achieve good efficiency and scalability. We present a generic library for solving large scale partial differential equations (PDEs......), capable of utilizing heterogeneous CPU/GPU environments. The library can be used for fast proto-typing of PDE solvers, based on finite difference approximations of spatial derivatives in one, two, or three dimensions. In order to efficiently solve large scale problems, we keep memory consumption...... and memory access low, using a low-storage implementation of flexible-order finite difference operators. We will illustrate the use of library components by assembling such matrix-free operators to be used with one of the supported iterative solvers, such as GMRES, CG, Multigrid or Defect Correction...
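    The matrix-free idea can be illustrated in plain NumPy/SciPy (the library itself targets heterogeneous CPU/GPU back-ends): a finite-difference stencil is applied on the fly and wrapped as a LinearOperator, so an iterative Krylov solver never needs an assembled matrix. The problem size below is a placeholder.

      # Matrix-free finite-difference operator used with an iterative solver (sketch only).
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      n = 4000                                           # grid points in a 1-D Poisson problem
      h = 1.0 / (n + 1)

      def apply_laplacian(u):
          """Second-order finite-difference Laplacian with homogeneous Dirichlet BCs,
          applied on the fly -- the matrix itself is never assembled or stored."""
          out = 2.0 * u
          out[1:] -= u[:-1]
          out[:-1] -= u[1:]
          return out / h**2

      A = LinearOperator((n, n), matvec=apply_laplacian)
      b = np.ones(n)                                     # right-hand side f(x) = 1

      u, info = cg(A, b, maxiter=n)
      print(info, float(u[n // 2]))                      # info == 0 means converged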

  15. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  16. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, and far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they are productive to produce new scientific outcomes. The software for these systems depends on computational infrastructure, software interconnected systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, system software stacks and libraries, and the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability by which both previous outcomes are still relevant and can be updated for the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  17. [Wound information management system: a standardized scheme for acquisition, storage and management of wound information].

    Science.gov (United States)

    Liu, Hu; Su, Rong-jia; Wu, Min-jie; Zhang, Yi; Qiu, Xiang-jun; Feng, Jian-gang; Xie, Ting; Lu, Shu-liang

    2012-06-01

    To form a wound information management scheme with objectivity, standardization, and convenience by means of a wound information management system. A wound information management system was set up with an acquisition terminal, a defined wound description, a data bank, and related software. The efficacy of this system was evaluated in clinical practice. The acquisition terminal was composed of a third-generation mobile phone and the software. It was feasible to get access to the wound information, including description, image, and therapeutic plan, from the data bank by mobile phone. During 4 months, a total of 232 wound treatment records were entered, and standardized data for 38 patients were formed from them automatically. This system can provide standardized wound information management through standardized techniques for the acquisition, transmission, and storage of wound information. It can be used widely in hospitals, especially primary medical institutions. The data resource of the system makes it possible to conduct epidemiological studies with large sample sizes in the future.

  18. Electricity Storage. Technology Brief

    Energy Technology Data Exchange (ETDEWEB)

    Simbolotti, G. [Italian National Agency for New Technologies, Energy and Sustainable Economic Development ENEA, Rome (Italy); Kempener, R. [International Renewable Energy Agency IRENA, Bonn (Germany)

    2012-04-15

    Electricity storage is a key technology for electricity systems with a high share of renewables as it allows electricity to be generated when renewable sources (i.e. wind, sunlight) are available and to be consumed on demand. It is expected that the increasing price of fossil fuels and peak-load electricity and the growing share of renewables will result in electricity storage to grow rapidly and become more cost effective. However, electricity storage is technically challenging because electricity can only be stored after conversion into other forms of energy, and this involves expensive equipment and energy losses. At present, the only commercial storage option is pumped hydro power where surplus electricity (e.g. electricity produced overnight by base-load coal or nuclear power) is used to pump water from a lower to an upper reservoir. The stored energy is then used to produce hydropower during daily high-demand periods. Pumped hydro plants are large-scale storage systems with a typical efficiency between 70% and 80%, which means that a quarter of the energy is lost in the process. Other storage technologies with different characteristics (i.e. storage process and capacity, conversion back to electricity and response to power demand, energy losses and costs) are currently in demonstration or pre-commercial stages and discussed in this brief report: Compressed air energy storage (CAES) systems, Flywheels; Electrical batteries; Supercapacitors; Superconducting magnetic storage; and Thermal energy storage. No single electricity storage technology scores high in all dimensions. The technology of choice often depends on the size of the system, the specific service, the electricity sources and the marginal cost of peak electricity. Pumped hydro currently accounts for 95% of the global storage capacity and still offers a considerable expansion potential but does not suit residential or small-size applications. CAES expansion is limited due to the lack of suitable
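    A back-of-the-envelope check of the pumped-hydro round-trip figure quoted above, with illustrative numbers chosen by us (volume, head and component efficiencies are assumptions, not values from the brief):

      # Pumped-hydro round-trip arithmetic with assumed, illustrative values.
      rho, g = 1000.0, 9.81                  # water density (kg/m3), gravity (m/s2)
      volume_m3 = 1.0e6                      # water moved between reservoirs
      head_m = 300.0                         # height difference between reservoirs
      pump_eff, turbine_eff = 0.90, 0.85     # component efficiencies (assumed)

      energy_stored_j = rho * g * volume_m3 * head_m          # potential energy of the water
      energy_in_j = energy_stored_j / pump_eff                 # electricity used to pump
      energy_out_j = energy_stored_j * turbine_eff             # electricity recovered

      round_trip = energy_out_j / energy_in_j                  # ~0.77, within the 70-80% range
      print(f"{energy_out_j / 3.6e9:.1f} MWh recovered, round-trip efficiency {round_trip:.0%}")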

  19. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  20. Storage process of large solid radioactive wastes

    International Nuclear Information System (INIS)

    Morin, Bruno; Thiery, Daniel.

    1976-01-01

    Process for the storage of large size solid radioactive waste, consisting of contaminated objects such as cartridge filters, metal swarf, tools, etc, whereby such waste is incorporated in a thermohardening resin at room temperature, after prior addition of at least one inert charge to the resin. Cross-linking of the resin is then brought about [fr

  1. Stability Analysis of Buffer Storage Large Basket and Temporary Storage Pre-packaging Basket Used in the Type B Radwaste Process Area

    International Nuclear Information System (INIS)

    Kim, Sung Kyun; Lee, Kune Woo; Moon, Jei Kwon

    2011-01-01

    The ITER radioactive waste (radwaste) treatment and storage systems are currently being designed to manage Type B, Type A and dust radwastes generated during the ITER machine operation. The Type B management system is to be in the hot cell building basement with temporary storage and the modular type storages outside the hot cell building for the pre-packed Type B radwaste during the ITER operation of 20 years. In order to store Type B radwaste components in onsite storage, the waste treatment chain process for Type B radwastes was developed as follows. First, Type B full components filled in a large basket are imported from the Tokamak to the hot cell basement and they are stored in the buffer storage before treatment. Second, they are cut properly with a laser cutting machine or band saw machine and the sliced waste parts are placed in a pre-packaging basket. Third, sampling of the Type B components is performed, the tritium removal treatment is then done in an oven to remove tritium from the waste surfaces, and the sampling is performed again. Fourth, the characterization is performed using gamma spectrometry. Fifth, the pre-packaging operation is done to ensure the final packaging of the radwaste. Sixth, the pre-packaging baskets are stored in the temporary storage for 6 months and then they are sent to the extension storage and stored until export to the host country. One of the issues in the waste treatment scheme is to analyze the stacking stability of a stack of large baskets and pre-packaging baskets in the storage system. The baseline plan is to stack the large baskets in two layers in the buffer storage and to stack the pre-packaging baskets in three layers in the temporary storage and extension storage. In this study, the stacking stability analysis for the buffer storage large basket and temporary storage pre-packaging basket was performed for various stack failure modes

  2. Self-oscillations in large storages of highly mineralized brines

    Science.gov (United States)

    Lyubimova, Tatyana; Lepikhin, Anatoly; Tsiberkin, Kirill; Parshakova, Yanina

    2014-05-01

    -oscillations. Numerical simulation of the dynamics of suspended sediment in the storage is performed within the framework of a two-dimensional unsteady approach, taking into account the temperature jumps due to the water evaporation from the free surface and the radiation heating of the sediments. The dynamics of sediment in a rectangular cavity of length 500 m and depth 10 m is considered. Initially, the water is assumed to be motionless and nonuniformly heated. The calculations show that in the first stage of the process flows arise near the boundaries of the heated areas. Next, large-scale vortices with a characteristic size equal to the depth of the storage are formed. The sediment located at the bottom is set into motion, and only some portion of the sediment near the bottom remains motionless. Over several hours the mass fraction of the suspended particles in water increases, then the flow decays and the sedimentation of particles is observed. This work was supported by RFBR and the Perm Region Government (grant 13-01-96040) and by the President of the Russian Federation (grant 4022.2014.1 for the support of Leading Scientific Schools).

  3. Communication of technical information to lay audiences. [National Waste Terminal Storage (NWTS) program

    Energy Technology Data Exchange (ETDEWEB)

    Bowes, J.E.; Stamm, K.R.; Jackson, K.M.; Moore, J.

    1978-05-01

    One of the objectives of the National Waste Terminal Storage (NWTS) Program is to provide terminal storage facilities for commercial radioactive wastes in various geologic formations at multiple locations in the United States. The activities performed under the NWTS Program will affect regional, state, and local areas, and widespread public interest in this program is expected. Since a large part of the NWTS Program deals with technical information it was considered desirable to initiate a study dealing with possible methods of effectively transmitting this technical information to the general public. This study has the objective of preparing a state-of-the-art report on the communication of technical information to lay audiences. The particular task of communicating information about the NWTS Program to the public is discussed where appropriate. The results of this study will aid the NWTS Program in presenting to the public the quite diverse technical information generated within the program so that a widespread, thorough public understanding of the NWTS Program might be achieved. An annotated bibliography is included.

  4. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and the rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting has neither the decision-making competence assigned to the political system, nor can it judge successfully by the critical standards of established social science, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting has consequences in three points that support the thesis that this is a new form of institutionalization of science: 1) external control, 2) the organizational form, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  5. Partitioned based approach for very large scale database in Indian nuclear power plants

    International Nuclear Information System (INIS)

    Tiwari, Sachin; Upadhyay, Pushp; Sengupta, Nabarun; Bhandarkar, S.G.; Agilandaeswari

    2012-01-01

    This paper presents a partition-based approach for handling very large tables with sizes running from gigabytes to terabytes. The scheme is developed from our experience in handling the large signal storage which is required in various computer-based data acquisition and control room operator information systems such as the Distribution Recording System (DRS) and Computerised Operator Information System (COIS). Whenever there is a disturbance in an operating nuclear power plant, it triggers an action in which a large volume of data from multiple sources is generated, and this data needs to be stored. Concurrency issues, since the data come from multiple sources, and the very large amount of data are the problems addressed in this paper by applying a partition-based approach. Advantages of the partition-based approach over other techniques are discussed. (author)
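    The routing logic of a partition-based scheme can be sketched generically (this is our own illustration, not the plant software): each incoming signal record is directed to a time-based partition so that no single table grows without bound and writers for different periods touch different tables.

      # Conceptual sketch of time-based partition routing for signal records.
      import sqlite3
      from datetime import datetime, timezone

      def partition_name(timestamp: datetime) -> str:
          """One partition (here: one table) per day of disturbance data."""
          return "signals_" + timestamp.strftime("%Y%m%d")

      def insert_record(conn, source_id, timestamp, value):
          table = partition_name(timestamp)
          conn.execute(
              f"CREATE TABLE IF NOT EXISTS {table} (source_id TEXT, ts TEXT, value REAL)")
          conn.execute(
              f"INSERT INTO {table} VALUES (?, ?, ?)",
              (source_id, timestamp.isoformat(), value))

      conn = sqlite3.connect(":memory:")
      now = datetime.now(timezone.utc)
      for i in range(1000):                               # records arriving from many sources
          insert_record(conn, source_id=f"sensor-{i % 16}", timestamp=now, value=float(i))
      conn.commit()
      print(conn.execute(f"SELECT COUNT(*) FROM {partition_name(now)}").fetchone())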

  6. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of river bathymetry incorporation is more significant in the 2D model as compared to the 1D model.

  7. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  8. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate the scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  9. Sub-surface laser nanostructuring in stratified metal/dielectric media: a versatile platform towards flexible, durable and large-scale plasmonic writing

    International Nuclear Information System (INIS)

    Siozios, A; Bellas, D V; Lidorikis, E; Patsalas, P; Kalfagiannis, N; Cranton, W M; Koutsogeorgis, D C; Bazioti, C; Dimitrakopulos, G P; Vourlias, G

    2015-01-01

    Laser nanostructuring of pure ultrathin metal layers or ceramic/metal composite thin films has emerged as a promising route for the fabrication of plasmonic patterns with applications in information storage, cryptography, and security tagging. However, the environmental sensitivity of pure Ag layers and the complexity of ceramic/metal composite film growth hinder the implementation of this technology to large-scale production, as well as its combination with flexible substrates. In the present work we investigate an alternative pathway, namely, starting from non-plasmonic multilayer metal/dielectric layers, whose growth is compatible with large scale production such as in-line sputtering and roll-to-roll deposition, which are then transformed into plasmonic templates by single-shot UV-laser annealing (LA). This entirely cold, large-scale process leads to a subsurface nanoconstruction involving plasmonic Ag nanoparticles (NPs) embedded in a hard and inert dielectric matrix on top of both rigid and flexible substrates. The subsurface encapsulation of Ag NPs provides durability and long-term stability, while the cold character of LA suits the use of sensitive flexible substrates. The morphology of the final composite film depends primarily on the nanocrystalline character of the dielectric host and its thermal conductivity. We demonstrate the emergence of a localized surface plasmon resonance, and its tunability depending on the applied fluence and environmental pressure. The results are well explained by theoretical photothermal modeling. Overall, our findings qualify the proposed process as an excellent candidate for versatile, large-scale optical encoding applications. (paper)

  10. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding Large Scale System integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld; not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize a large scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large scale system integration.

  11. Direction of information flow in large-scale resting-state networks is frequency-dependent

    NARCIS (Netherlands)

    Hillebrand, Arjan; Tewarie, Prejaas; Van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A.; Van Straaten, Elisabeth C W; Stam, Cornelis J.

    2016-01-01

    Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these

  12. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocess in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m(3) bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Report of the LASCAR forum: Large scale reprocessing plant safeguards

    International Nuclear Information System (INIS)

    1992-01-01

    This report has been prepared to provide information on the studies which were carried out from 1988 to 1992 under the auspices of the multinational forum known as Large Scale Reprocessing Plant Safeguards (LASCAR) on safeguards for four large scale reprocessing plants operated or planned to be operated in the 1990s. The report summarizes all of the essential results of these studies. The participants in LASCAR were from France, Germany, Japan, the United Kingdom, the United States of America, the Commission of the European Communities - Euratom, and the International Atomic Energy Agency

  14. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
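    One common way to realise such a decomposition is dual decomposition, sketched below under our own assumptions (quadratic local costs, a single shared balance constraint); an aggregator iterates a price, each unit solves its small local problem, and the imbalance is driven towards zero.

      # Toy dual-decomposition sketch for power balancing; not the paper's formulation.
      import numpy as np

      rng = np.random.default_rng(0)
      n_units = 50
      a = rng.uniform(0.5, 2.0, n_units)                 # local cost: a_i * p_i^2
      p_min, p_max = -1.0, 1.0                           # per-unit power limits (flexibility)
      target = 10.0                                      # total power the aggregator must balance

      price = 0.0
      for _ in range(200):                               # dual ascent on the balance constraint
          # each unit minimises a_i p_i^2 - price * p_i  =>  p_i = price / (2 a_i), clipped
          p = np.clip(price / (2.0 * a), p_min, p_max)
          imbalance = target - p.sum()
          price += 0.05 * imbalance                      # aggregator price update
      print(round(p.sum(), 3), round(price, 3))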

  15. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    Science.gov (United States)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot, which seriously attacked southern Taiwan, awakened public awareness of large-scale landslide disasters. Large-scale landslide disasters produce large quantities of sediment, with negative effects on the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation / disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and videos will not only help people make appropriate decisions, but also raise the major concern of how to process them and add value. The study tried to define some basic data formats / standards for the various types of data collected about these reservoirs and then to provide a management platform based on these formats / standards. Meanwhile, in order to satisfy practicality and convenience, the large-scale landslide disaster database system is built with the ability both to provide and to receive information, so that users can use it on different types of devices. IT technology progresses extremely quickly, and even the most modern system might be out of date at any time. In order to provide long-term service, the system reserves the possibility of user-defined data formats / standards and a user-defined system structure. The system established by this study is based on the HTML5 standard language and uses responsive web design technology, which makes it easy for users to handle and further develop the large-scale landslide disaster database system.

  16. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)
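
    The storage-versus-recomputation trade described in this record can be illustrated with a small sketch. This is not the MCG code; all function and variable names below are stand-ins chosen for illustration, and the "physics" is reduced to trivial arithmetic.

        # Toy illustration (not the MCG solver) of trading tracking-file storage
        # for on-the-fly recomputation of characteristic tracks at each iteration.
        import math

        def trace_track(track_id, n_segments=100):
            """Stand-in for ray tracing: returns segment lengths of one track."""
            return [abs(math.sin(track_id + 0.1 * s)) for s in range(n_segments)]

        def sweep_track(segments, flux):
            """Stand-in for the transport sweep along one track."""
            return flux + 1e-6 * sum(segments)

        def solve_with_stored_tracks(n_tracks, n_iter):
            tracks = [trace_track(t) for t in range(n_tracks)]   # large storage/I-O footprint
            flux = 0.0
            for _ in range(n_iter):
                for segments in tracks:
                    flux = sweep_track(segments, flux)
            return flux

        def solve_on_the_fly(n_tracks, n_iter):
            flux = 0.0
            for _ in range(n_iter):
                for t in range(n_tracks):                        # trivially distributed over CPUs
                    flux = sweep_track(trace_track(t), flux)     # recomputed, never stored
            return flux

        print(solve_with_stored_tracks(1000, 5), solve_on_the_fly(1000, 5))

    Both variants produce the same result; the second simply pays in CPU time what the first pays in storage and I/O, which is the essence of the scheme described above.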

  17. Design of a large remote seismic exploration data acquisition system, with the architecture of a distributed storage area network

    International Nuclear Information System (INIS)

    Cao, Ping; Song, Ke-zhu; Yang, Jun-feng; Ruan, Fu-ming

    2011-01-01

    Nowadays, seismic exploration data acquisition (DAQ) systems have developed into remote forms covering large-scale areas. Several features of this kind of application must be mentioned. Firstly, many sensors are placed remotely. Secondly, the total data throughput is high. Thirdly, optical fibres are not suitable everywhere because of cost control, harsh running environments, etc. Fourthly, expansibility and upgradability are a must for this kind of application. Designing this kind of remote DAQ (rDAQ) is a challenge: data transmission, clock synchronization, data storage, etc. must be considered carefully. A four-level hierarchical model of the rDAQ is proposed, in which the rDAQ is divided into four different functional levels. From this model, a simple and clear architecture based on a distributed storage area network is proposed. rDAQs with this architecture have the advantages of flexible configuration, expansibility and stability. This architecture can be applied to the design and realization of systems ranging from simple single-cable systems to large-scale exploration DAQs.

  18. Advanced I/O for large-scale scientific applications

    International Nuclear Information System (INIS)

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F. II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while
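
    The staging idea described in this record — letting the compute loop hand data off asynchronously so the slow interaction with storage happens in the background — can be sketched generically as a producer/consumer pair. This is a minimal illustration, not the API of ADIOS or any particular I/O middleware; the buffer size and the write_to_storage stand-in are assumptions.

        # Minimal sketch of asynchronous output staging: the simulation loop puts data
        # on an in-memory queue and continues, while a background thread drains the
        # queue to storage. Back-pressure appears naturally if storage cannot keep up.
        import queue
        import threading
        import time

        stage = queue.Queue(maxsize=8)          # bounded staging buffer

        def write_to_storage(step, payload):
            """Stand-in for the (slow) synchronous interaction with the file system."""
            time.sleep(0.05)                    # simulate storage latency

        def drain():
            while True:
                item = stage.get()
                if item is None:                # sentinel: no more output
                    break
                write_to_storage(*item)
                stage.task_done()

        writer = threading.Thread(target=drain, daemon=True)
        writer.start()

        for step in range(20):                  # "simulation" loop
            payload = [step] * 1000             # data produced this step
            stage.put((step, payload))          # returns quickly unless the buffer is full

        stage.put(None)                         # signal completion
        writer.join()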

  19. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  20. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Large-scale straw supplies to existing coal-fired power stations

    International Nuclear Information System (INIS)

    Gylling, M.; Parsby, M.; Thellesen, H.Z.; Keller, P.

    1992-08-01

    It is considered that large-scale supply of straw to power stations and decentralized cogeneration plants could open up new economical systems and methods of organization of straw supply in Denmark. This thesis is elucidated and the constraints involved are pointed out. The aim is to describe to what extent large-scale straw supply is interesting with regard to monetary savings and available resources. Analyses of models, systems and techniques described in a foregoing project are carried out. It is reckoned that the annual total amount of surplus straw in Denmark is 3.6 million tons. At present, non-agricultural use of straw is limited to district heating plants with an annual consumption of 2-12 thousand tons. A prerequisite for a significant increase in the use of straw is an annual consumption by power and cogeneration plants of more than 100,000 tons. All aspects of straw management are examined in detail, also in relation to two actual Danish coal-fired plants. The reliability of straw supply is considered. It is concluded that very significant resources of straw are available in Denmark but there remain a number of constraints. Price competitiveness must be considered in relation to other fuels. It is suggested that the use of corn harvests with whole stems attached (handled as large bales or in the same way as sliced straw alone) as fuel would result in significant monetary savings, especially in transport and storage. Giving whole-harvested corn equal status with other forms of biomass fuels, with corresponding changes in taxes and subsidies, could possibly reduce constraints on large-scale straw fuel supply. (AB) (13 refs.)

  2. Future hydrogen markets for large-scale hydrogen production systems

    International Nuclear Information System (INIS)

    Forsberg, Charles W.

    2007-01-01

    The cost of delivered hydrogen includes production, storage, and distribution. For equal production costs, large users (>10⁶ m³/day) will favor high-volume centralized hydrogen production technologies to avoid collection costs for hydrogen from widely distributed sources. Potential hydrogen markets were examined to identify and characterize those markets that will favor large-scale hydrogen production technologies. The two high-volume centralized hydrogen production technologies are nuclear energy and fossil energy with carbon dioxide sequestration. The potential markets for these technologies are: (1) production of liquid fuels (gasoline, diesel and jet) including liquid fuels with no net greenhouse gas emissions and (2) peak electricity production. The development of high-volume centralized hydrogen production technologies requires an understanding of the markets to (1) define hydrogen production requirements (purity, pressure, volumes, need for co-product oxygen, etc.); (2) define and develop technologies to use the hydrogen; and (3) create the industrial partnerships to commercialize such technologies. (author)

  3. Potential for large-scale solar collector system to offset carbon-based heating in the Ontario greenhouse sector

    Science.gov (United States)

    Semple, Lucas M.; Carriveau, Rupp; Ting, David S.-K.

    2018-04-01

    In the Ontario greenhouse sector the misalignment of available solar radiation during the summer months and large heating demand during the winter months makes solar thermal collector systems an unviable option without some form of seasonal energy storage. Information obtained from Ontario greenhouse operators has shown that over 20% of annual natural gas usage occurs during the summer months for greenhouse pre-heating prior to sunrise. A transient model of the greenhouse microclimate and indoor conditioning systems is carried out using TRNSYS software and validated with actual natural gas usage data. A large-scale solar thermal collector system is then incorporated and found to reduce the annual heating energy demand by approximately 35%. The inclusion of the collector system correlates to a reduction of about 120 tonnes of CO2 equivalent emissions per acre of greenhouse per year. System payback period is discussed considering the benefits of a future Ontario carbon tax.

  4. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Nusser, Adi [Physics Department and the Asher Space Science Institute-Technion, Haifa 32000 (Israel); Branchini, Enzo [Department of Physics, Universita Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy); Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu [Departments of Astronomy and Physics, University of California, Berkeley, CA 94720 (United States)

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
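
    For context, the standard conversion from a measured proper motion to the implied transverse peculiar velocity (a textbook relation, not quoted in the abstract) is, in LaTeX notation:

        % transverse velocity from proper motion \mu and distance d
        % (4.74 km/s corresponds to 1 AU/yr)
        v_t\,[\mathrm{km\,s^{-1}}] \simeq 4.74\;\mu\,[\mu\mathrm{as\,yr^{-1}}]\;d\,[\mathrm{Mpc}]

    At the ~100 Mpc distances relevant for large-scale flows, peculiar velocities of a few hundred km/s thus correspond to proper motions of order a microarcsecond per year, which is why high-precision astrometric missions such as Gaia are of interest here.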

  5. Analysis of large concrete storage tank under seismic response

    Energy Technology Data Exchange (ETDEWEB)

    Le, Jingyuan; Cui, Hongcheng; He, Qiang; Ju, Jinsan [China Agricultural University, Beijing (China); You, Xiaochuan [Tsinghua University, Beijing (China)

    2015-01-15

    This study adopted the finite element software ABAQUS to trace the dynamic response history of a large reinforced concrete storage tank during different seismic excitations. The dynamic characteristics and failure modes of the tank's structure were investigated by considering the rebar's effect. Calculation results show that the large concrete storage tank remains in safe working conditions under a seismic acceleration of 55 cm/s². The joint of the concrete wall and dome begins to crack when the seismic acceleration reaches 250 cm/s². As the earthquake continues, cracks spread until the top of the wall completely fails and stops working. The maximum displacement of the concrete tank is proportional to the seismic acceleration. Peak displacement and stress of the tank always appear after the maximum acceleration.

  6. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  7. Monitoring of large-scale federated data storage: XRootD and beyond

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Arias, D Diguez; Giordano, D; Saiz, P; Tuckett, D; Belov, S; Oleynik, D; Petrosyan, A; Tadel, M; Vukotic, I

    2014-01-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage, which provides seamless access to data files independently of their location and dramatically improves recovery through fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis require complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiency and to provide hints for resource optimization and effective data distribution policy. Data mining of the collected monitoring data provides a deep insight into new usage patterns. In the WLCG context, there are several federations currently based on the XRootD technology. This paper will focus on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted to other technologies, such as HTTP/WebDAV dynamic federations.

  8. Application of Cloud Storage on BIM Life-Cycle Management

    Directory of Open Access Journals (Sweden)

    Lieyun Ding

    2014-08-01

    Because of its high information intensity, strong consistency and convenient visualization features, building information modelling (BIM) has received widespread attention in the fields of construction and project management. However, due to large amounts of information, high integration, the need for resource sharing between various departments, and the long time-span of the BIM application, challenges relating to data interoperability, security and cost all slow down the adoption of BIM. This paper constructs a BIM cloud storage concept system using cloud storage, an advanced computer technology, to solve the problems of mass data processing, information security and cost in the existing application of BIM to full life-cycle management. This system takes full advantage of the cloud storage technique. Achievements are reached in four areas of BIM information management, involving security and licensing management, file management, work process management and collaborative management. The system expands the time and space scales, improves the level of participation, and reduces the cost of BIM. The construction of the BIM cloud storage system is one of the most important directions of the development of BIM, which benefits the promotion and further development of BIM to better serve construction and engineering project management.

  9. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless functioning of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article includes a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless functioning of LSI and VLSI. The main part describes a proposed algorithm and program for analysis of the fault rate in LSI and VLSI circuits.

  10. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite some studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort of carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also result in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  11. Lab-scale experiment of a closed thermochemical heat storage system including honeycomb heat exchanger

    International Nuclear Information System (INIS)

    Fopah-Lele, Armand; Rohde, Christian; Neumann, Karsten; Tietjen, Theo; Rönnebeck, Thomas; N'Tsoukpoe, Kokouvi Edem; Osterland, Thomas; Opel, Oliver

    2016-01-01

    A lab-scale thermochemical heat storage reactor was developed in the European project “thermal battery” to obtain information on the characteristics of a closed heat storage system, based on thermochemical reactions. The present type of storage is capable of re-using waste heat from cogeneration system to produce useful heat for space heating. The storage material used was SrBr₂·6H₂O. Due to agglomeration or gel-like problems, a structural element was introduced to enhance vapour and heat transfer. Honeycomb heat exchanger was designed and tested. 13 dehydration-hydration cycles were studied under low-temperature conditions (material temperatures < 100 °C) for storage. Discharging was realized at water vapour pressure of about 42 mbar. Temperature evolution inside the reactor at different times and positions, chemical conversion, thermal power and overall efficiency were analysed for the selected cycles. Experimental system thermal capacity and efficiency of 65 kWh and 0.77 are respectively obtained with about 1 kg of SrBr₂·6H₂O. Heat transfer fluid recovers heat at a short span of about 43 °C with an average of 22 °C during about 4 h, acceptable temperature for the human comfort (20 °C on day and 16 °C at night). System performances were obtained for a salt bed energy density of 213 kWh·m⁻³. The overall heat transfer coefficient of the honeycomb heat exchanger has an average value of 147 W·m⁻²·K⁻¹. Though promising results have been obtained, ameliorations need to be made, in order to make the closed thermochemical heat storage system competitive for space heating. - Highlights: • Lab-scale thermochemical heat storage is designed, constructed and tested. • The use of honeycomb heat exchanger as a heat and vapour process enhancement. • Closed system (1 kg SrBr₂·6H₂O) able to give back 3/4 of initial thermal waste energy. • System storage capacity and thermal efficiency are respectively 65 kWh and 0.77.

  12. How the Internet Will Help Large-Scale Assessment Reinvent Itself

    Directory of Open Access Journals (Sweden)

    Randy Elliot Bennett

    2001-02-01

    Large-scale assessment in the United States is undergoing enormous pressure to change. That pressure stems from many causes. Depending upon the type of test, the issues precipitating change include an outmoded cognitive-scientific basis for test design; a mismatch with curriculum; the differential performance of population groups; a lack of information to help individuals improve; and inefficiency. These issues provide a strong motivation to reconceptualize both the substance and the business of large-scale assessment. At the same time, advances in technology, measurement, and cognitive science are providing the means to make that reconceptualization a reality. The thesis of this paper is that the largest facilitating factor will be technological, in particular the Internet. In the same way that it is already helping to revolutionize commerce, education, and even social interaction, the Internet will help revolutionize the business and substance of large-scale assessment.

  13. Maps on large-scale air quality concentrations in the Netherlands

    International Nuclear Information System (INIS)

    Velders, G.J.M.; Aben, J.M.M.; Beck, J.P.; Blom, W.F.; Van Dam, J.D.; Elzenga, H.E.; Geilenkirchen, G.P.; Hoen, A.; Jimmink, B.A.; Matthijsen, J.; Peek, C.J.; Van Velze, K.; Visser, H.; De Vries, W.J.

    2007-01-01

    Every year MNP produces maps showing large-scale concentrations of several air quality components in the Netherlands for which there are European regulations. The concentration maps are based on a combination of model calculations and measurements. These maps (called GCN maps) show the large-scale contribution of these components in air in the Netherlands for both past and future years. Local, provincial and other authorities use these maps for reporting exceedances in the framework of the EU Air Quality Directive and for planning. The report gives the underlying assumptions applied to the GCN-maps in this 2007 report. The Dutch Ministry of Housing, Spatial Planning and the Environment (VROM) is legally responsible for selecting the scenario to be used in the GCN maps. The Ministry has chosen to base the current maps of nitrogen dioxide, particulate matter (PM10) and sulphur dioxide for 2010 up to 2020 on standing and proposed Dutch and European policies. That means that the Netherlands and other European countries will meet their National Emissions Ceilings (NEC) by 2010 and the emissions according to the ambitions of the Thematic Strategy on Air Pollution of the European Commission up to 2020, as assumed in the calculations. The large-scale concentrations of NO2 and PM10, presented by the GCN maps, are in 2006 and for the 2010-2020 period, below the European limit value of yearly averaged 40 μg/m³ everywhere in the Netherlands. The large-scale concentration exceeds the European limit value for the daily average of PM10 of maximally 35 days above 50 μg/m³ in some locations in 2006. This applies close to the harbours of Amsterdam and Rotterdam and is associated with storage and handling of dry bulk material. The large-scale concentration of PM10 is below the European limit value for the daily average everywhere in 2010-2020. Several changes have been implemented, in addition to the changes in the GCN maps of last year (report March 2006). New insights into

  14. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
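
    The scaling exponent referred to above comes from Detrended Fluctuation Analysis (DFA). The following is a minimal, self-contained sketch of the standard DFA procedure, not the authors' code; the synthetic input signal stands in for a real GC-content track.

        # Minimal Detrended Fluctuation Analysis (DFA): the slope of log F(n) versus
        # log n is the scaling exponent whose local variations are used above to
        # detect isochore-like, large-scale patchiness. Illustrative only.
        import numpy as np

        def dfa_exponent(x, window_sizes):
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())                  # integrated profile
            fluct = []
            for n in window_sizes:
                n_windows = len(y) // n
                rms = []
                for w in range(n_windows):
                    seg = y[w * n:(w + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                fluct.append(np.mean(rms))
            # scaling exponent = slope of the log-log fit of F(n) against n
            return np.polyfit(np.log(window_sizes), np.log(fluct), 1)[0]

        # Example with a synthetic, uncorrelated "GC-content" track (exponent ~0.5):
        rng = np.random.default_rng(0)
        signal = rng.random(20000)
        print(dfa_exponent(signal, [16, 32, 64, 128, 256, 512]))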

  15. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  16. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  17. Redox Flow Batteries, Hydrogen and Distributed Storage.

    Science.gov (United States)

    Dennison, C R; Vrubel, Heron; Amstutz, Véronique; Peljo, Pekka; Toghill, Kathryn E; Girault, Hubert H

    2015-01-01

    Social, economic, and political pressures are causing a shift in the global energy mix, with a preference toward renewable energy sources. In order to realize widespread implementation of these resources, large-scale storage of renewable energy is needed. Among the proposed energy storage technologies, redox flow batteries offer many unique advantages. The primary limitation of these systems, however, is their limited energy density which necessitates very large installations. In order to enhance the energy storage capacity of these systems, we have developed a unique dual-circuit architecture which enables two levels of energy storage; first in the conventional electrolyte, and then through the formation of hydrogen. Moreover, we have begun a pilot-scale demonstration project to investigate the scalability and technical readiness of this approach. This combination of conventional energy storage and hydrogen production is well aligned with the current trajectory of modern energy and mobility infrastructure. The combination of these two means of energy storage enables the possibility of an energy economy dominated by renewable resources.

  18. Lessons from a large-scale assessment: Results from conceptual inventories

    Directory of Open Access Journals (Sweden)

    Beth Thacker

    2014-07-01

    We report conceptual inventory results of a large-scale assessment project at a large university. We studied the introduction of materials and instructional methods informed by physics education research (PER-informed materials and methods) into a department where most instruction has previously been traditional and a significant number of faculty are hesitant, ambivalent, or even resistant to the introduction of such reforms. Data were collected in all of the sections of both the large algebra- and calculus-based introductory courses for a number of years employing commonly used conceptual inventories. Results from a small PER-informed, inquiry-based, laboratory-based class are also reported. Results suggest that when PER-informed materials are introduced in the labs and recitations, independent of the lecture style, there is an increase in students' conceptual inventory gains. There is also an increase in the results on conceptual inventories if PER-informed instruction is used in the lecture. The highest conceptual inventory gains were achieved by the combination of PER-informed lectures and laboratories in large class settings and by the hands-on, laboratory-based, inquiry-based course taught in a small class setting.

  19. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
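
    As a concrete illustration of the Jacobian-free Newton-Krylov approach surveyed here, the sketch below solves a small discretized nonlinear reaction-diffusion problem with SciPy's newton_krylov solver. The toy problem and all parameter values are assumptions for illustration, not one of the SciDAC applications mentioned above.

        # Jacobian-free Newton-Krylov on a 1-D nonlinear problem u'' + k*exp(u) = 0
        # with u(0) = u(1) = 0: the Krylov solver only needs residual evaluations,
        # never an explicit Jacobian, which is what makes the method attractive
        # for the large implicit systems discussed above.
        import numpy as np
        from scipy.optimize import newton_krylov

        N = 100
        h = 1.0 / (N + 1)
        k = 1.0

        def residual(u):
            # second-difference Laplacian with homogeneous Dirichlet boundaries
            upad = np.concatenate(([0.0], u, [0.0]))
            lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
            return lap + k * np.exp(u)

        u0 = np.zeros(N)
        sol = newton_krylov(residual, u0, method='lgmres', f_tol=1e-8)
        print("max u =", sol.max())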

  20. Designing a cost-effective CO2 storage infrastructure using a GIS based linear optimization energy model

    NARCIS (Netherlands)

    Broek, M. van den; Brederode, E.; Ramírez, A.; Kramers, L.; Kuip, M. van der; Wildenborg, T.; Turkenburg, W.; Faaij, A.

    2010-01-01

    Large-scale deployment of carbon capture and storage needs a dedicated infrastructure. Planning and designing of this infrastructure require incorporation of both temporal and spatial aspects. In this study, a toolbox has been developed that integrates ArcGIS, a geographical information system with

  1. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  2. Long-time data storage: relevant time scales

    NARCIS (Netherlands)

    Elwenspoek, Michael Curt

    2011-01-01

    Dynamic processes relevant for long-time storage of information about human kind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and the remaining time that the earth is

  3. CHEMICAL STORAGE: MYTHS VERSUS REALITY

    International Nuclear Information System (INIS)

    Simmons, F.

    2007-01-01

    A large number of resources explaining proper chemical storage are available. These resources include books, databases/tables, and articles that explain various aspects of chemical storage including compatible chemical storage, signage, and regulatory requirements. Another source is the chemical manufacturer or distributor who provides storage information in the form of icons or color coding schemes on container labels. Despite the availability of these resources, chemical accidents stemming from improper storage, according to recent reports (1) (2), make up almost 25% of all chemical accidents. This relatively high percentage of chemical storage accidents suggests that these publications and color coding schemes although helpful, still provide incomplete information that may not completely mitigate storage risks. This manuscript will explore some ways published storage information may be incomplete, examine the associated risks, and suggest methods to help further eliminate chemical storage risks

  4. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage. For example, the manager has a large workload and must spend much time on the management and maintenance of the system. The nodes in a large-scale cluster system easily become disorganized: thousands of nodes are placed in large rooms, so managers can easily confuse machines. How can accurate management be carried out effectively in a large-scale cluster system? The article introduces ELFms in the large-scale cluster system. Furthermore, an approach to realize automatic management of large-scale cluster systems is proposed. (authors)

  5. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
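
    As a minimal illustration of the Boolean line-of-sight test that underlies such viewshed computations (not the authors' GIS workflow; the synthetic grid, cell size and observer height are assumptions):

        # Boolean line of sight between an observer cell and a target cell on a DEM:
        # the target is visible if its vertical angle is at least as large as that
        # of every intermediate sample (the running local horizon). Illustrative only.
        import numpy as np

        def line_of_sight(dem, cell, obs, tgt, obs_height=1.6):
            (r0, c0), (r1, c1) = obs, tgt
            z0 = dem[r0, c0] + obs_height
            n = int(max(abs(r1 - r0), abs(c1 - c0)))
            horizon = -np.inf
            for i in range(1, n):                          # intermediate samples only
                f = i / n
                r, c = r0 + f * (r1 - r0), c0 + f * (c1 - c0)
                dist = np.hypot(r - r0, c - c0) * cell
                angle = np.arctan2(dem[int(round(r)), int(round(c))] - z0, dist)
                horizon = max(horizon, angle)              # running local horizon
            dist_t = np.hypot(r1 - r0, c1 - c0) * cell
            angle_t = np.arctan2(dem[r1, c1] - z0, dist_t)
            return angle_t >= horizon                      # target above the horizon?

        dem = np.add.outer(np.linspace(0, 5, 50), np.linspace(0, 5, 50))  # tilted plane
        print(line_of_sight(dem, cell=1.0, obs=(0, 0), tgt=(40, 40)))

    Repeating this test from one observer to every cell of the grid yields the Boolean viewshed; the extended viewsheds mentioned above additionally retain quantities such as the angle difference above the local horizon instead of only a visible/not-visible flag.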

  6. Alternatives to electricity for transmission and annual-scale firming - Storage for diverse, stranded, renewable energy resources: hydrogen and ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Leighty, William

    2010-09-15

    The world's richest renewable energy resources 'of large geographic extent and high intensity' are stranded: far from end-users with inadequate or nonexistent gathering and transmission systems to deliver energy. Output of most renewables varies greatly, at time scales of seconds-seasons: energy capture assets operate at low capacity factor; energy delivery is not 'firm'. New electric transmission systems, or fractions thereof, dedicated to renewables, suffer the same low CF: substantial stranded capital assets, increasing the cost of delivered renewable-source energy. Electricity storage cannot affordably firm large renewables at annual scale. Gaseous hydrogen and anhydrous ammonia fuels can: attractive alternatives.

  7. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and of NAND flash is evaluated independently, fixing the ECC strength of the other memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for SCM in hybrid storage with large SCM capacity, because SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.
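
    The trade-off between raw BER and required ECC strength can be made concrete with the usual codeword-failure estimate for a t-error-correcting code. This is a textbook formula, not taken from the paper; the codeword length and BER values below are arbitrary illustrative choices.

        # Probability that a codeword of n bits with raw bit error rate p contains
        # more than t errors, i.e. exceeds the correction capability of a
        # t-error-correcting (BCH/LDPC-like) code. Illustrative values only.
        from math import comb

        def uncorrectable(n, t, p):
            return 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

        for p in (1e-4, 1e-3, 1e-2):           # raw BER growing with W/E cycles, retention
            for t in (1, 4, 8, 40):             # weak (SCM-like) to strong (NAND-like) ECC
                print(f"BER={p:.0e}  t={t:2d}  P_fail={uncorrectable(4096, t, p):.2e}")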

  8. Validating Bayesian truth serum in large-scale online human experiments.

    Science.gov (United States)

    Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature about BTS only focuses on small experiments. Given the prevalence of online survey platforms such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.

  9. Prospects for Large-Scale Energy Storage in Decarbonised Power Grids. Working Paper

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-12-21

    This report describes the development of a simplified algorithm to determine the amount of storage that compensates for short-term net variation of wind power supply and assesses its role in light of a changing future power supply mix. It also examines the range of options available to power generation and transmission operators to deal with variability.
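
    The report's simplified algorithm is not reproduced in the abstract; the sketch below shows one generic way to estimate the storage needed to firm short-term net variation, by sizing the store to the largest cumulative deviation of wind output from its short-term mean. The window length, time step and synthetic wind series are assumptions, not values from the report.

        # Generic sizing sketch: energy capacity needed to absorb deviations of wind
        # output from its rolling short-term mean (a stand-in for "firm" delivery).
        # All numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        wind = np.clip(np.cumsum(rng.normal(0, 5, 24 * 7 * 4)) % 200, 0, 150)  # MW, 15-min steps

        window = 8                                     # 2-hour rolling mean as the firm target
        kernel = np.ones(window) / window
        firm = np.convolve(wind, kernel, mode="same")  # MW the storage tries to deliver

        deviation_mwh = (wind - firm) * 0.25           # MWh per 15-minute step
        state = np.cumsum(deviation_mwh)               # relative storage state of charge
        capacity = state.max() - state.min()           # energy swing the store must cover
        power = np.abs(wind - firm).max()              # charge/discharge rating

        print(f"energy capacity ~ {capacity:.1f} MWh, power rating ~ {power:.1f} MW")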

  10. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  11. Multilevel resistive information storage and retrieval

    Science.gov (United States)

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

  12. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value of occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used and showed an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography with PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed equal performance, although it is simpler, when compared with other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
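
    Reading the PASS definition above literally, the score is simply the count of abnormal findings on the three selected NIHSS items, with a total of 2 or more flagging a likely ELVO. A minimal transcription follows; the field names are my own, not taken from the paper.

        # PASS as described above: one point each for abnormal level of consciousness
        # (month/age questions), gaze palsy/deviation, and arm weakness; a score >= 2
        # suggests emergent large vessel occlusion (ELVO). Field names are illustrative.
        def pass_score(loc_abnormal: bool, gaze_abnormal: bool, arm_weak: bool) -> int:
            return sum([loc_abnormal, gaze_abnormal, arm_weak])

        def suggests_elvo(loc_abnormal, gaze_abnormal, arm_weak, cutoff=2):
            return pass_score(loc_abnormal, gaze_abnormal, arm_weak) >= cutoff

        print(suggests_elvo(True, True, False))   # PASS = 2 -> True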

  13. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  14. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  15. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low speed-side of the mixing layer, and a reduced activity on the high speed-side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  16. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    OpenAIRE

    Gao, Shi

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  17. Irradiation of onions on a large scale

    International Nuclear Information System (INIS)

    Kawashima, Koji; Hayashi, Toru; Uozumi, J.; Sugimoto, Toshio; Aoki, Shohei

    1984-01-01

    A large number of onions of var. Kitamiki and Ohotsuku were irradiated in September followed by storage at 0 deg C or 5 deg C. The onions were shifted from cold-storage facilities to room temperature in mid-March or in mid-April in the following year. Their sprouting, rooting, spoilage characteristics and sugar content were observed during storage at room temperature. Most of the unirradiated onions sprouted either outside or inside bulbs during storage at room temperature, and almost all of the irradiated ones showed small buds with browning inside the bulb in mid-April irrespective of the storage temperature. Rooting and/or expansion of bottom were observed in the unirradiated samples. Although the irradiated materials did not have root, they showed expansion of bottom to some extent. Both the irradiated and unirradiated onions spoiled slightly unless they sprouted, and sprouted onions got easily spoiled. There was no difference in the glucose content between the unirradiated and irradiated onions, but the irradiated ones yielded higher sucrose content when stored at room temperature. Irradiation treatment did not have an obvious effect on the quality of freeze-dried onion slices. (author)

  18. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing...... adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/servicesystems risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we...... identify the following challenges in the current risk management practices that are in line with literature: (1) current methods are not appropriate for the situations dominated by weak knowledge and information; (2) quality of traditional models in such situations is open to debate; (3) quality of input...

  19. Large-scale silviculture experiments of western Oregon and Washington.

    Science.gov (United States)

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  20. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon whereby galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  1. Health information management using optical storage technology: case studies.

    Science.gov (United States)

    Kohn, D

    1992-05-01

    All the health care facilities examined in the case studies addressed several important organizational issues before and during the installation of their systems. All the facilities examined employee commitment. The prudent managers considered how easily their employees adapt to changes in their jobs and work environment. They considered how enthusiastic cooperation can be fostered in the creation of a liberated and reengineered office. This was determined not only by each individual's reaction to change, but also by the health care facility's track record with other system installations. For example, document image, diagnostic image, and coded data processing systems allow the integration of divergent health care information systems within complex institutions. Unfortunately, many institutions are currently struggling with how to create an information management architecture that will integrate their mature systems, such as their patient care and financial systems. Information managers must realize that if optical storage technology-based systems are used in a strategic and planned fashion, these systems can act as focal points for systems integration, not as promises to further confuse the issue. Another issue that needed attention in all the examples was the work environment. The managers considered how the work environment was going to affect the ability to integrate optical image and data systems into the institution. For example, many of these medical centers have created alliances with clinics, HMOs, and large corporate users of medical services. This created a demand for all or part of the health information outside the confines of the original institution. Since the work environment is composed of a handful of factors such as merged medical services, as many work environment factors as possible were addressed before application of the optical storage technology solution in the institutions. And finally, the third critical issue was the organization of work

  2. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Science.gov (United States)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to operate such pits safely. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system which was developed over a number of years, as well as the advantages it provides, illustrated through a specific application. The presented case study illustrates how the high-production slopes of a mine that exceed depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series data correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
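
    The time-series correlation described above can be illustrated with a minimal sketch (hypothetical arrays and lag window, not the system described by the authors): it computes a lagged Pearson correlation between daily precipitation and slope displacement rate to see whether rainfall precedes accelerated movement.

      import numpy as np

      def lagged_correlation(precip, displacement, max_lag_days=30):
          """Pearson correlation of displacement against precipitation shifted by
          0..max_lag_days days (positive lag: rain leads movement)."""
          correlations = {}
          for lag in range(max_lag_days + 1):
              if lag == 0:
                  p, d = precip, displacement
              else:
                  p, d = precip[:-lag], displacement[lag:]
              correlations[lag] = np.corrcoef(p, d)[0, 1]
          return correlations

      # Hypothetical daily records (mm of rain, mm/day of slope movement).
      rng = np.random.default_rng(0)
      rain = rng.gamma(shape=0.8, scale=5.0, size=365)
      movement = 12 + 0.4 * np.convolve(rain, np.ones(7) / 7, mode="same") + rng.normal(0, 1, 365)

      best_lag = max(lagged_correlation(rain, movement).items(), key=lambda kv: kv[1])
      print("strongest correlation at lag (days):", best_lag)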

  3. Diffused holographic information storage and retrieval using photorefractive optical materials

    Science.gov (United States)

    McMillen, Deanna Kay

    Holography offers a tremendous opportunity for dense information storage, theoretically one bit per cubic wavelength of material volume, with rapid retrieval of up to thousands of pages of information simultaneously. However, many factors prevent the theoretical storage limit from being reached, including dynamic range problems and imperfections in recording materials. This research explores new ways of moving closer to practical holographic information storage and retrieval by altering the recording materials, in this case, photorefractive crystals, and by increasing the current storage capacity while improving the information retrieved. As an experimental example of the techniques developed, the information retrieved is the correlation peak from an optical recognition architecture, but the materials and methods developed are applicable to many other holographic information storage systems. Optical correlators can potentially solve any signal or image recognition problem. Military surveillance, fingerprint identification for law enforcement or employee identification, and video games are but a few examples of applications. A major obstacle keeping optical correlators from being universally accepted is the lack of a high quality, thick (high capacity) holographic recording material that operates with red or infrared wavelengths which are available from inexpensive diode lasers. This research addresses the problems from two positions: find a better material for use with diode lasers, and reduce the requirements placed on the material while maintaining an efficient and effective system. This research found that the solutions are new dopants introduced into photorefractive lithium niobate to improve wavelength sensitivities and the use of a novel inexpensive diffuser that reduces the dynamic range and optical element quality requirements (which reduces the cost) while improving performance. A uniquely doped set of 12 lithium niobate crystals was specified and

  4. Solar energy storage researchers information user study

    Energy Technology Data Exchange (ETDEWEB)

    Belew, W.W.; Wood, B.L.; Marle, T.L.; Reinhardt, C.L.

    1981-03-01

    The results of a series of telephone interviews with groups of users of information on solar energy storage are described. In the current study only high-priority groups were examined. Results from 2 groups of researchers are analyzed: DOE-Funded Researchers and Non-DOE-Funded Researchers. The data will be used as input to the determination of information products and services the Solar Energy Research Institute, the Solar Energy Information Data Bank Network, and the entire information outreach community should be preparing and disseminating.

  5. Modeling of information flows in natural gas storage facility

    Science.gov (United States)

    Ranjbari, Leyla; Bahar, Arifah; Aziz, Zainal Abdul

    2013-09-01

    The paper considers natural-gas storage valuation based on the information-based pricing framework of Brody-Hughston-Macrina (BHM). As opposed to many studies in which the associated filtration is considered pre-specified, this work constructs the filtration in terms of the information provided to the market. The value of the storage is given by the sum of the discounted expectations of the cash flows under a risk-neutral measure, conditional on the constructed filtration with a Brownian bridge noise term. In order to model the flow of information about the cash flows, we assume the existence of a fixed pricing kernel within a liquid, homogeneous and incomplete market without arbitrage.

  6. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
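
    The abstract above describes computing model derivatives by "computer calculus" (GRESS and ADGEN instrument Fortran source codes). The sketch below is not those systems, only a minimal illustration of the underlying idea in Python: forward-mode automatic differentiation with dual numbers propagates a sensitivity alongside the model calculation. The model function is a hypothetical stand-in for a large simulation code.

      # Minimal forward-mode automatic differentiation with dual numbers.
      # Illustrative only; GRESS/ADGEN operate on Fortran source, not like this.
      class Dual:
          def __init__(self, value, deriv=0.0):
              self.value, self.deriv = value, deriv

          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.value + other.value, self.deriv + other.deriv)

          __radd__ = __add__

          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.value * other.value,
                          self.deriv * other.value + self.value * other.deriv)

          __rmul__ = __mul__

      def model(k, c):
          # Hypothetical response that depends nonlinearly on two input parameters.
          return k * k * c + 3.0 * k + c

      # Sensitivity of the response to k at (k, c) = (2, 5): seed d/dk = 1.
      out = model(Dual(2.0, 1.0), Dual(5.0, 0.0))
      print(out.value, out.deriv)   # response value and d(response)/dk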

  7. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  8. Mental schemas hamper memory storage of goal-irrelevant information

    NARCIS (Netherlands)

    Sweegers, C.C.G.; Coleman, G.A.; van Poppel, E.A.M.; Cox, R.; Talamini, L.M.

    2015-01-01

    Mental schemas exert top-down control on information processing, for instance by facilitating the storage of schema-related information. However, given capacity-limits and competition in neural network processing, schemas may additionally exert their effects by suppressing information with low

  9. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    Science.gov (United States)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous, localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
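
    A minimal sketch of the two processing steps named above, assembling overlapping images onto a common grid and low-pass filtering to keep only large horizontal scales (array shapes, pixel size and cutoff wavelength are placeholders, not the values used by the authors):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def assemble(images, offsets, mosaic_shape):
          """Place individual all-sky images onto a common grid, averaging overlaps."""
          mosaic = np.zeros(mosaic_shape)
          weight = np.zeros(mosaic_shape)
          for img, (row0, col0) in zip(images, offsets):
              r, c = img.shape
              mosaic[row0:row0 + r, col0:col0 + c] += img
              weight[row0:row0 + r, col0:col0 + c] += 1
          return mosaic / np.maximum(weight, 1)

      def low_pass(mosaic, pixel_km=5.0, cutoff_km=100.0):
          """Suppress structure smaller than ~cutoff_km with a Gaussian filter."""
          sigma_pixels = cutoff_km / pixel_km
          return gaussian_filter(mosaic, sigma=sigma_pixels)

      # Hypothetical inputs: five imagers, each 256x256 pixels, with known offsets.
      rng = np.random.default_rng(1)
      imgs = [rng.normal(size=(256, 256)) for _ in range(5)]
      offs = [(0, 0), (0, 200), (100, 100), (200, 0), (200, 200)]
      large_scale_field = low_pass(assemble(imgs, offs, (456, 456)))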

  10. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  11. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  12. Next-generation digital information storage in DNA.

    Science.gov (United States)

    Church, George M; Gao, Yuan; Kosuri, Sriram

    2012-09-28

    Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
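
    As a rough illustration of the kind of encoding described (not the authors' exact pipeline; the Church et al. scheme is usually described as one bit per base, with 0 mapped to A or C and 1 to G or T, which the sketch below follows), the functions convert bytes to a base sequence and back:

      import random

      ZERO, ONE = "AC", "GT"   # one bit per base; the choice within each pair is free

      def encode(data: bytes) -> str:
          bases = []
          for byte in data:
              for i in range(7, -1, -1):            # most significant bit first
                  bit = (byte >> i) & 1
                  pool = ONE if bit else ZERO
                  base = random.choice(pool)
                  # avoid an immediate repeat of the same base (homopolymer run)
                  if bases and base == bases[-1]:
                      base = pool[0] if base == pool[1] else pool[1]
                  bases.append(base)
          return "".join(bases)

      def decode(seq: str) -> bytes:
          bits = ["1" if b in ONE else "0" for b in seq]
          return bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))

      message = b"DNA storage"
      assert decode(encode(message)) == message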

  13. Licensing of spent fuel dry storage and consolidated rod storage: A Review of Issues and Experiences

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, W.J.

    1990-02-01

    The results of this study, performed by Pacific Northwest Laboratory (PNL) and sponsored by the US Department of Energy (DOE), respond to the nuclear industry's recommendation that a report be prepared that collects and describes the licensing issues (and their resolutions) that confront a new applicant requesting approval from the US Nuclear Regulatory Commission (NRC) for dry storage of spent fuel or for large-scale storage of consolidated spent fuel rods in pools. The issues are identified in comments, questions, and requests from the NRC during its review of applicants' submittals. Included in the report are discussions of (1) the 18 topical reports on cask and module designs for dry storage fuel that have been submitted to the NRC, (2) the three license applications for dry storage of spent fuel at independent spent fuel storage installations (ISFSIs) that have been submitted to the NRC, and (3) the three applications (one of which was later withdrawn) for large-scale storage of consolidated fuel rods in existing spent fuel storage pools at reactors that were submitted to the NRC. For each of the applications submitted, examples of some of the issues (and suggestions for their resolutions) are described. The issues and their resolutions are also covered in detail in an example in each of the three subject areas: (1) the application for the CASTOR V/21 dry spent fuel storage cask, (2) the application for the ISFSI for dry storage of spent fuel at Surry, and (3) the application for full-scale wet storage of consolidated spent fuel at Millstone-2. The conclusions in the report include examples of major issues that applicants have encountered. Recommendations for future applicants to follow are listed. 401 refs., 26 tabs.

  14. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package ''ATLAS'' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations and for solving general eigenvalue problems and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  15. South Louisiana Enhanced Oil Recovery/Sequestration R&D Project Small Scale Field Tests of Geologic Reservoir Classes for Geologic Storage

    Energy Technology Data Exchange (ETDEWEB)

    Hite, Roger [Blackhorse Energy LLC, Houston, TX (United States)

    2016-10-01

    The project site is located in Livingston Parish, Louisiana, approximately 26 miles due east of Baton Rouge. This project proposed to evaluate an early Eocene-aged Wilcox oil reservoir for permanent storage of CO2. Blackhorse Energy, LLC planned to conduct a parallel CO2 oil recovery project in the First Wilcox Sand. The primary focus of this project was to examine and prove the suitability of South Louisiana geologic formations for large-scale geologic sequestration of CO2 in association with enhanced oil recovery applications. This was to be accomplished through the focused demonstration of small-scale, permanent storage of CO2 in the First Wilcox Sand. The project was terminated at the request of Blackhorse Energy LLC on October 22, 2014.

  16. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  17. Compressor-less Hydrogen Transmission Pipelines Deliver Large-scale Stranded Renewable Energy at Competitive Cost

    International Nuclear Information System (INIS)

    W Leighty; J Holloway; R Merer; B Somerday; C San Marchi; G Keith; D White

    2006-01-01

    We assume a transmission-constrained world, where large new wind plants and other renewable energies must pay all transmission costs for delivering their energy to distant markets. We modeled a 1,000 MW (1 GW) nameplate wind plant in the large wind resource of the North American Great Plains, delivering exclusively hydrogen fuel, via a new gaseous hydrogen (GH2) pipeline, to an urban market at least 300 km distant. All renewable electric energy output would be converted, at the source, to hydrogen, via 100 bar output electrolyzers, directly feeding the GH2 transmission pipeline without costly compressor stations at inlet or at midline. The new GH2 pipeline is an alternative to new electric transmission lines. We investigate whether the pipeline would provide valuable energy storage. We present a simple model by which we estimate the cost of wind-source hydrogen fuel delivered to the distant city gate in year 2010, at GW scale. Ammonia, synthetic hydrocarbons, and other substances may also be attractive renewable-source energy carriers, storage media, and fuels; they are not considered in this paper. (authors)

  18. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  19. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using an open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
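
    A minimal sketch of forming a premultiplied spectrum from a sampled signal (a generic one-dimensional example with synthetic data, not the authors' two-dimensional MKE-flux spectra): the energy spectrum E(k) is multiplied by the wavenumber k, so that the area under the curve on a logarithmic k axis reflects each scale's contribution.

      import numpy as np

      def premultiplied_spectrum(signal, dx):
          """Return wavenumbers k and the premultiplied spectrum k*E(k) of a 1D signal."""
          n = signal.size
          fluct = signal - signal.mean()
          spectrum = np.abs(np.fft.rfft(fluct))**2 / n     # energy spectrum (unnormalized)
          k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)         # angular wavenumber
          return k[1:], k[1:] * spectrum[1:]               # drop the k = 0 mode

      # Synthetic flux signal with energy at two length scales.
      x = np.linspace(0, 100.0, 4096, endpoint=False)
      u = np.sin(2 * np.pi * x / 20.0) + 0.3 * np.sin(2 * np.pi * x / 2.0)
      k, kE = premultiplied_spectrum(u, dx=x[1] - x[0])
      print("peak at wavelength ~", 2 * np.pi / k[np.argmax(kE)])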

  20. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai

    2016-01-01

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
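
    A heavily simplified sketch of the two-level idea, assuming a generic time stepper: coarse checkpoints (standing in for disk storage) are kept every disk_interval steps, and within a segment the forward states needed by the backward sweep are recomputed from the nearest checkpoint. The real scheme uses binomial checkpointing in memory rather than plain recomputation, so this is only an illustration of the storage/recomputation trade-off.

      import copy

      def adjoint_two_level(step, adjoint_step, state0, n_steps, disk_interval):
          """Reverse (adjoint) sweep over n_steps using coarse checkpoints only.

          step(state) -> next state; adjoint_step(state, adj) -> previous adjoint."""
          # Forward pass: keep only coarse checkpoints.
          disk = {0: copy.deepcopy(state0)}
          state = state0
          for i in range(n_steps):
              state = step(state)
              if (i + 1) % disk_interval == 0:
                  disk[i + 1] = copy.deepcopy(state)

          # Backward pass: for each step, recompute its input state from the
          # nearest earlier checkpoint, then apply the adjoint of that step.
          adj = 1.0                                 # seed adjoint (scalar example)
          for i in reversed(range(n_steps)):
              base = (i // disk_interval) * disk_interval
              state = copy.deepcopy(disk[base])
              for _ in range(i - base):             # recompute forward to step i
                  state = step(state)
              adj = adjoint_step(state, adj)
          return adj

      # Toy example: x_{k+1} = x_k^2, adjoint multiplies by d(step)/dx = 2*x_k.
      grad = adjoint_two_level(lambda x: x * x,
                               lambda x, a: a * 2 * x,
                               state0=1.1, n_steps=8, disk_interval=4)
      print(grad)   # equals d(x_8)/d(x_0) = product over k of 2*x_k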

  1. Scaling up DNA data storage and random access retrieval

    OpenAIRE

    Gopalan, Parikshit; Ceze, Luis; Nguyen, Bichlien; Takahashi, Christopher; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Seelig, Georg; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Yekhanin, Sergey; Makarychev, Konstantin

    2017-01-01

    Current storage technologies can no longer keep pace with exponentially growing amounts of data. Synthetic DNA offers an attractive alternative due to its potential information density of ~10¹⁸ B/mm³, 10⁷ times denser than magnetic tape, and potential durability of thousands of years. Recent advances in DNA data storage have highlighted technical challenges, in particular, coding and random access, but have stored only modest amounts of data in synthetic DNA. This paper demonstrates an end-to...

  2. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series.

  3. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  4. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10¹⁰ on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ⁶Li in model spaces up to Nmax = 22 and to reveal the ⁴He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
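
    The IR extrapolation mentioned above is commonly performed by fitting energies computed at several effective box lengths L to an exponential form E(L) = E_inf + a*exp(-2*k_inf*L); this functional form is the standard one from the NCSM extrapolation literature and is assumed here, and the data points below are hypothetical. A minimal fit sketch:

      import numpy as np
      from scipy.optimize import curve_fit

      def ir_model(L, E_inf, a, k_inf):
          # Exponential infrared correction: E(L) -> E_inf as the effective box length L grows.
          return E_inf + a * np.exp(-2.0 * k_inf * L)

      # Hypothetical energies (MeV) at effective box lengths (fm) from a sequence of model spaces.
      L_vals = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
      E_vals = np.array([-28.10, -29.60, -30.20, -30.44, -30.53])

      popt, pcov = curve_fit(ir_model, L_vals, E_vals, p0=(-30.5, 100.0, 0.25))
      E_inf, a, k_inf = popt
      print(f"extrapolated energy: {E_inf:.2f} MeV, IR momentum scale k_inf: {k_inf:.2f} fm^-1")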

  5. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  6. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP for primarily external data transfers. This infrastructure provides a data throughput sufficient for transferring data from experiments' data acquisition systems. It also allows access to data in the Grid framework.

  7. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    before 2050. The modelling tools developed by the International Energy Agency (IEA) Implementing Agreement ETSAP include both multi-regional global and long-term energy models till 2100, as well as national or regional models with shorter time horizons. Examples are the EFDA-TIMES model, focusing...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  8. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional as these are reflecting the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources to a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
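
    The dependency score named above is the correlation ratio η² from an analysis of variance. A minimal sketch of computing η² for a single factor (regulator expression binned into groups) against target expression, using hypothetical arrays; the authors' two-way ANOVA over regulator and condition is more involved:

      import numpy as np

      def eta_squared(groups, values):
          """Correlation ratio: fraction of total variance explained by group membership."""
          values = np.asarray(values, dtype=float)
          grand_mean = values.mean()
          ss_total = ((values - grand_mean) ** 2).sum()
          ss_between = 0.0
          for g in np.unique(groups):
              sel = values[np.asarray(groups) == g]
              ss_between += sel.size * (sel.mean() - grand_mean) ** 2
          return ss_between / ss_total

      # Hypothetical data: regulator expression binned low/mid/high, target expression values.
      regulator_bin = np.array(["low"] * 5 + ["mid"] * 5 + ["high"] * 5)
      target_expr = np.concatenate(
          [np.random.default_rng(2).normal(mu, 0.3, 5) for mu in (1.0, 2.0, 3.5)])
      print("eta squared:", round(eta_squared(regulator_bin, target_expr), 3))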

  9. Local, distributed topology control for large-scale wireless ad-hoc networks

    NARCIS (Netherlands)

    Nieberg, T.; Hurink, Johann L.

    In this document, topology control of a large-scale, wireless network by a distributed algorithm that uses only locally available information is presented. Topology control algorithms adjust the transmission power of wireless nodes to create a desired topology. The algorithm, named local power

  10. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system size L observed for the onset of asymptotic scaling (L ≅ 2¹²) explains the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
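
    A minimal sketch of ballistic deposition in (1+1) dimensions and of the interface width whose scaling defines the exponents α and β (the lattice size and particle count here are tiny, for illustration only; the reported results require far larger systems):

      import numpy as np

      def ballistic_deposition(L, n_particles, seed=0):
          """Drop particles onto a 1D substrate of L sites; each sticks at the height
          of the tallest of its own column and its two neighbours (periodic BCs)."""
          rng = np.random.default_rng(seed)
          h = np.zeros(L, dtype=int)
          widths = []
          for n in range(1, n_particles + 1):
              i = rng.integers(L)
              h[i] = max(h[i] + 1, h[(i - 1) % L], h[(i + 1) % L])
              if n % L == 0:                      # record the width once per monolayer
                  widths.append(h.std())
          return np.array(widths)

      # Interface width W(t) ~ t^beta at early times; beta -> 1/3 for large systems.
      w = ballistic_deposition(L=512, n_particles=512 * 200)
      t = np.arange(1, w.size + 1)
      beta_est = np.polyfit(np.log(t[:50]), np.log(w[:50]), 1)[0]
      print("estimated growth exponent beta:", round(beta_est, 3))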

  11. Large-scale climatic anomalies affect marine predator foraging behaviour and demography

    Science.gov (United States)

    Bost, Charles A.; Cotté, Cedric; Terray, Pascal; Barbraud, Christophe; Bon, Cécile; Delord, Karine; Gimenez, Olivier; Handrich, Yves; Naito, Yasuhiko; Guinet, Christophe; Weimerskirch, Henri

    2015-10-01

    Determining the links between the behavioural and population responses of wild species to environmental variations is critical for understanding the impact of climate variability on ecosystems. Using long-term data sets, we show how large-scale climatic anomalies in the Southern Hemisphere affect the foraging behaviour and population dynamics of a key marine predator, the king penguin. When large-scale subtropical dipole events occur simultaneously in both subtropical Southern Indian and Atlantic Oceans, they generate tropical anomalies that shift the foraging zone southward. Consequently the distances that penguins foraged from the colony and their feeding depths increased and the population size decreased. This represents an example of a robust and fast impact of large-scale climatic anomalies affecting a marine predator through changes in its at-sea behaviour and demography, despite lack of information on prey availability. Our results highlight a possible behavioural mechanism through which climate variability may affect population processes.

  12. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are presented from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  13. Pre-commercial scale preservation of garlic by gamma radiation in combination with cold storage

    International Nuclear Information System (INIS)

    Nouchpramool, K.; Charoen, S.; Bunnak, J.

    1997-06-01

    Irradiation of garlic on a pilot scale and storage in a cold room under commercial conditions were carried out in co-operation with a garlic trader in 1986-1987. Garlic bulbs from local cultivars were irradiated seven weeks after harvest with an average dose of 70 Gy and stored for nine months at low (1-7 degrees C) and ambient (25-34 degrees C) temperatures. The treatment proved to be effective in controlling sprouting and in reducing weight loss and rotting. After 9 months of cold storage the weight loss and rotting of irradiated bulbs were reduced by 18 and 13 per cent, respectively. The radio-inhibition process is technically feasible and economically justified, as a profit can be made during the extended storage period. Small-scale marketing trials of irradiated garlic conducted during and after termination of storage revealed no adverse comments from consumers.

  14. Antimicrobial residues in animal waste and water resources proximal to large-scale swine and poultry feeding operations

    Science.gov (United States)

    Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.

    2002-01-01

    Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.

  15. Antimicrobial residues in animal waste and water resources proximal to large-scale swine and poultry feeding operations.

    Science.gov (United States)

    Campagnolo, Enzo R; Johnson, Kammy R; Karpati, Adam; Rubin, Carol S; Kolpin, Dana W; Meyer, Michael T; Esteban, J Emilio; Currier, Russell W; Smith, Kathleen; Thu, Kendall M; McGeehin, Michael

    2002-11-01

    Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of > 100 microg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.

  16. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright

  17. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  18. Design of a Large-scale Three-dimensional Flexible Arrayed Tactile Sensor

    Directory of Open Access Journals (Sweden)

    Junxiang Ding

    2011-01-01

    This paper proposes a new type of large-scale three-dimensional flexible arrayed tactile sensor based on conductive rubber. It can be used to detect three-dimensional force information on the continuous surface of the sensor, which realizes a true skin-type tactile sensor. The widely used method of liquid rubber injection molding (LIMS) is used for "the overall injection molding" sample preparation. The structure details of the staggered nodes and a new decoupling algorithm for force analysis are given. Simulation results show that a sensor based on this structure can achieve flexible measurement with large-scale 3-D tactile sensor arrays.

  19. Scaling experiments on plasma opening switches for inductive energy storage applications

    International Nuclear Information System (INIS)

    Boller, J.R.; Commisso, R.J.; Cooperstein, G.

    1983-01-01

    A new type of fast opening switch for use with pulsed power accelerators is examined. This Plasma Opening Switch (POS) utilizes an injected carbon plasma to conduct large currents (circa 1 MA) for up to 100 ns while a vacuum inductor (circa 100 nH) is charged. The switch is then capable of opening on a short (circa 10 ns) timescale and depositing the stored energy into a load impedance. Output pulse widths and power levels are determined by the storage inductance and the load impedance. The switch operation is studied in detail both analytically and experimentally. Experiments are performed at the 5 kJ stored energy level on the Gamble I generator and at the 50 kJ level on the Gamble II generator. Results of both experiments are reported and the scaling of switch operation is discussed

  20. Improving Wind Farm Dispatchability Using Model Predictive Control for Optimal Operation of Grid-Scale Energy Storage

    Directory of Open Access Journals (Sweden)

    Douglas Halamay

    2014-09-01

    This paper demonstrates the use of model-based predictive control for energy storage systems to improve the dispatchability of wind power plants. Large-scale wind penetration increases the variability of power flow on the grid, thus increasing reserve requirements. Large energy storage systems collocated with wind farms can improve dispatchability of the wind plant by storing energy during generation over-the-schedule and sourcing energy during generation under-the-schedule, essentially providing on-site reserves. Model predictive control (MPC) provides a natural framework for this application. By utilizing an accurate energy storage system model, control actions can be planned in the context of system power and state-of-charge limitations. MPC also enables the inclusion of predicted wind farm performance over a near-term horizon that allows control actions to be planned in anticipation of fast changes, such as wind ramps. This paper demonstrates that model-based predictive control can improve system performance compared with a standard non-predictive, non-model-based control approach. It is also demonstrated that secondary objectives, such as reducing the rate of change of the wind plant output (i.e., ramps), can be considered and successfully implemented within the MPC framework. Specifically, it is shown that scheduling error can be reduced by 81%, reserve requirements can be improved by up to 37%, and the number of ramp events can be reduced by 74%.
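
    A minimal receding-horizon sketch of the idea, assuming a simple battery model and a perfect wind forecast (the parameters and the linear-program formulation are illustrative, not the controller described in the paper): at each step the controller chooses battery power over the horizon to minimize deviation from the delivery schedule, subject to power and state-of-charge limits, and applies only the first move.

      import numpy as np
      from scipy.optimize import linprog

      def mpc_step(wind_forecast, schedule, soc, cap, p_max, dt=1.0):
          """Choose battery power (positive = discharge) so that wind + battery tracks
          the schedule over the horizon; return only the first move (receding horizon)."""
          H = len(wind_forecast)
          # Decision vector x = [p_1..p_H, s_1..s_H]; s_t bounds |wind_t + p_t - schedule_t|.
          c = np.concatenate([np.zeros(H), np.ones(H)])            # minimize sum of deviations
          A_ub, b_ub = [], []
          for t in range(H):
              dev = wind_forecast[t] - schedule[t]
              row_plus = np.zeros(2 * H); row_plus[t] = 1.0; row_plus[H + t] = -1.0
              A_ub.append(row_plus);  b_ub.append(-dev)            #  p_t - s_t <= -dev
              row_minus = np.zeros(2 * H); row_minus[t] = -1.0; row_minus[H + t] = -1.0
              A_ub.append(row_minus); b_ub.append(dev)             # -p_t - s_t <=  dev
              # State of charge after t+1 steps must stay within [0, cap].
              row_soc = np.zeros(2 * H); row_soc[:t + 1] = dt
              A_ub.append(row_soc.copy());  b_ub.append(soc)       # discharge limited by soc
              A_ub.append(-row_soc);        b_ub.append(cap - soc) # charge limited by headroom
          bounds = [(-p_max, p_max)] * H + [(0, None)] * H
          res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                        bounds=bounds, method="highs")
          return res.x[0]                                          # battery power for this step

      # Hypothetical hour-ahead operation over a 4-hour horizon (MW, MWh).
      p = mpc_step(wind_forecast=[80, 60, 95, 70], schedule=[75, 75, 75, 75],
                   soc=20.0, cap=40.0, p_max=15.0)
      print("battery setpoint for the next interval (MW):", round(p, 2))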

  1. Identifying Non-Volatile Data Storage Areas: Unique Notebook Identification Information as Digital Evidence

    Directory of Open Access Journals (Sweden)

    Nikica Budimir

    2007-03-01

    The research reported in this paper introduces new techniques to aid in the identification of recovered notebook computers so they may be returned to the rightful owner. We identify non-volatile data storage areas as a means of facilitating the safe storage of computer identification information. A forensic proof-of-concept tool has been designed to test the feasibility of several storage locations identified within this work to hold the data needed to uniquely identify a computer. The tool was used to create and extract identification information in these locations in order to assess whether the non-volatile storage areas are capable of holding and preserving the data created within them. While the format of the information used to identify the machine itself is important, this research only discusses the insertion, storage and ability to retain such information.

  2. Principal considerations in large energy-storage capacitor banks

    International Nuclear Information System (INIS)

    Kemp, E.L.

    1976-01-01

    Capacitor banks storing one or more megajoules and costing more than one million dollars have unique problems not often found in smaller systems. Two large banks, Scyllac at Los Alamos and Shiva at Livermore, are used as models of large, complex systems. Scyllac is a 10-MJ, 60-kV theta-pinch system while Shiva is a 20-MJ, 20-kV energy system for laser flash lamps. A number of design principles are emphasized for expediting the design and construction of large banks. The sensitive features of the charge system, the storage system layout, the switching system, the transmission system, and the design of the principal bank components are presented. Project management and planning must involve a PERT chart with certain common features for all the activities. The importance of the budget is emphasized

  3. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  4. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  5. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation for our research is that a QN method may perform well on certain types of minimization problems but degrade in efficiency when applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language and runs on a transputer machine.
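
    The storage concern mentioned above is what limited-memory QN methods address. As a hedged illustration (the standard L-BFGS two-loop recursion, not the parallel algorithm of the paper), the sketch below computes a search direction from only the last few curvature pairs instead of a full Hessian approximation:

      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          """Two-loop recursion: approximate -H_k * grad using the stored pairs
          (s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i) without forming H_k."""
          q = grad.copy()
          alphas = []
          for s, y in reversed(list(zip(s_list, y_list))):
              rho = 1.0 / y.dot(s)
              alpha = rho * s.dot(q)
              q -= alpha * y
              alphas.append((rho, alpha, s, y))
          if s_list:                                   # initial Hessian scaling
              s, y = s_list[-1], y_list[-1]
              q *= s.dot(y) / y.dot(y)
          for rho, alpha, s, y in reversed(alphas):
              beta = rho * y.dot(q)
              q += (alpha - beta) * s
          return -q

      # One step on a simple quadratic f(x) = 0.5 * x.T A x, with one stored pair.
      A = np.diag([1.0, 10.0])
      x_prev, x = np.array([1.0, 1.0]), np.array([0.9, 0.5])
      g_prev, g = A @ x_prev, A @ x
      d = lbfgs_direction(g, [x - x_prev], [g - g_prev])
      print("search direction:", d)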

  6. How Did the Information Flow in the #AlphaGo Hashtag Network? A Social Network Analysis of the Large-Scale Information Network on Twitter.

    Science.gov (United States)

    Kim, Jinyoung

    2017-12-01

    As it becomes common for Internet users to use hashtags when posting and searching information on social media, it is important to understand who builds a hashtag network and how information is circulated within the network. This article focused on unlocking the potential of the #AlphaGo hashtag network by addressing the following questions. First, the current study examined whether traditional opinion leadership (i.e., the influentials hypothesis) or grassroot participation by the public (i.e., the interpersonal hypothesis) drove dissemination of information in the hashtag network. Second, several unique patterns of information distribution by key users were identified. Finally, the association between attributes of key users who exerted great influence on information distribution (i.e., the number of followers and follows) and their central status in the network was tested. To answer the proffered research questions, a social network analysis was conducted using a large-scale hashtag network data set from Twitter (n = 21,870). The results showed that the leading actors in the network were actively receiving information from their followers rather than serving as intermediaries between the original information sources and the public. Moreover, the leading actors played several roles (i.e., conversation starters, influencers, and active engagers) in the network. Furthermore, the number of their follows and followers were significantly associated with their central status in the hashtag network. Based on the results, the current research explained how the information was exchanged in the hashtag network by proposing the reciprocal model of information flow.
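
    A minimal sketch of the kind of network construction and centrality measurement used in such analyses (hypothetical retweet/mention edges; networkx is assumed, and the metrics shown are generic illustrations, not the exact measures reported in the study):

      import networkx as nx

      # Directed edges point from the user who retweets/mentions to the user whose
      # content is received, so in-degree reflects how much information a user pulls in.
      edges = [
          ("fan_1", "news_outlet"), ("fan_2", "news_outlet"), ("fan_3", "analyst"),
          ("analyst", "news_outlet"), ("news_outlet", "fan_1"), ("fan_2", "analyst"),
      ]
      G = nx.DiGraph(edges)

      in_centrality = nx.in_degree_centrality(G)        # who receives/aggregates information
      betweenness = nx.betweenness_centrality(G)        # who bridges otherwise separate users

      for user in G.nodes:
          print(f"{user:12s} in-degree {in_centrality[user]:.2f}  betweenness {betweenness[user]:.2f}")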

  7. Hydrogen Storage Technologies for Future Energy Systems.

    Science.gov (United States)

    Preuster, Patrick; Alekseev, Alexander; Wasserscheid, Peter

    2017-06-07

    Future energy systems will be determined by the increasing relevance of solar and wind energy. Crude oil and gas prices are expected to increase in the long run, and penalties for CO2 emissions will become a relevant economic factor. Solar- and wind-powered electricity will become significantly cheaper, such that hydrogen produced from electrolysis will be competitively priced against hydrogen manufactured from natural gas. However, to handle the unsteadiness of system input from fluctuating energy sources, energy storage technologies that cover the full scale of power (in megawatts) and energy storage amounts (in megawatt hours) are required. Hydrogen, in particular, is a promising secondary energy vector for storing, transporting, and distributing large and very large amounts of energy at the gigawatt-hour and terawatt-hour scales. However, we also discuss energy storage at the 120-200-kWh scale, for example, for onboard hydrogen storage in fuel cell vehicles using compressed hydrogen storage. This article focuses on the characteristics and development potential of hydrogen storage technologies in light of such a changing energy system and its related challenges. Technological factors that influence the dynamics, flexibility, and operating costs of unsteady operation are therefore highlighted in particular. Moreover, the potential for using renewable hydrogen in the mobility sector, industrial production, and the heat market is discussed, as this potential may determine to a significant extent the future economic value of hydrogen storage technology as it applies to other industries. This evaluation elucidates known and well-established options for hydrogen storage and may guide the development and direction of newer, less developed technologies.

  8. Fabrication and analysis of small-scale thermal energy storage with conductivity enhancement

    International Nuclear Information System (INIS)

    Thapa, Suvhashis; Chukwu, Sam; Khaliq, Abdul; Weiss, Leland

    2014-01-01

    Highlights: • Useful thermal conductivity envelope established for small scale TES. • Paraffin conductivity enhanced from 0.5 to 3.8 W/m K via low-cost copper insert. • Conductivity increase beyond 5 W/m K shows diminished returns. • Storage with increased conductivity lengthened thermoelectric output up to 247 s. - Abstract: The operation and useful operating parameters of a small-scale Thermal Energy Storage (TES) device that collects and stores heat in a Phase Change Material (PCM) are explored. The PCM utilized is an icosane wax. A physical device is constructed on the millimeter scale to examine specific effects of low-cost thermal conductivity enhancements that include copper foams and other metallic inserts. Numerical methods are utilized to establish the useful operating range of small-scale TES devices in general, and the limits of thermal conductivity enhancement on thermoelectric operation specifically. Specific attention is paid to the manufacturability of the various constructs as well as the resulting thermal conductivity enhancement. A maximum thermal conductivity of 3.8 W/m K is achieved in experimental testing via copper foam enhancement. A simplified copper matrix achieves a conductivity of 3.7 W/m K and allows significantly reduced fabrication effort. These results compare favorably to the baseline wax conductivity of 0.5 W/m K. Power absorption of about 900 W/m² is recorded. Modeling reveals diminishing returns beyond 4–6 W/m K for devices on this scale. Results show the system capable of extending thermoelectric operation by several minutes through the use of thermal energy storage techniques within the effective conductivity ranges.
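
    The diminishing returns reported beyond roughly 4–6 W/m K can be rationalized with a simple series-resistance argument: once the PCM's conduction resistance is small compared with the fixed source-side resistance, further conductivity gains barely change the heat flow. The sketch below illustrates this scaling; the slab thickness, interface coefficient and temperature difference are assumed values, not parameters from the paper's numerical model.

```python
# Illustrative series-resistance estimate of why conductivity enhancement shows
# diminishing returns: once the PCM conduction resistance is small compared with
# the fixed source-side resistance, raising k barely changes the heat flow.
# The geometry, interface coefficient and temperatures below are assumptions.
L = 0.005           # PCM slab thickness, m (millimetre-scale device)
h = 500.0           # fixed source-side heat transfer coefficient, W/(m^2 K) (assumed)
dT = 30.0           # temperature difference across the device, K (assumed)

for k in [0.5, 2.0, 4.0, 6.0, 10.0]:          # effective PCM conductivity, W/(m K)
    R_total = 1.0 / h + L / k                 # per unit area: interface + conduction
    q = dT / R_total                          # heat flux into the PCM, W/m^2
    print(f"k = {k:5.1f} W/(m K)  ->  q = {q:7.1f} W/m^2")
```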

  9. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers, resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  10. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers; resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
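
    Both records above describe a discrete event-based simulation of buffering in a data acquisition chain. The toy sketch below shows the core mechanics of such a simulation for a single buffer drained at a fixed service rate; the arrival rate, service time and simulated duration are invented numbers rather than ATLAS parameters, and the real tool models a far more complex dataflow.

```python
# Toy discrete-event simulation of a single DAQ buffer, in the spirit of the
# event-based model described above (rates and times are made-up numbers).
import heapq, random

random.seed(1)
ARRIVAL_RATE = 900.0      # fragments per second arriving at the buffer (assumed)
SERVICE_TIME = 0.001      # seconds to ship one fragment downstream (assumed)
SIM_TIME = 10.0           # simulated seconds

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
queue_len, busy_until, occupancy_samples = 0, 0.0, []

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":
        queue_len += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy_until <= t:                      # server idle: start service now
            busy_until = t + SERVICE_TIME
            heapq.heappush(events, (busy_until, "departure"))
    else:                                        # departure of a served fragment
        queue_len -= 1
        if queue_len > 0:                        # keep draining the backlog
            busy_until = t + SERVICE_TIME
            heapq.heappush(events, (busy_until, "departure"))
    occupancy_samples.append((t, queue_len))

# event-averaged buffer occupancy over the run
print("mean occupancy:", sum(q for _, q in occupancy_samples) / len(occupancy_samples))
```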

  11. Information report on nuclear safety and radiation protection of the Manche storage Centre - 2012

    International Nuclear Information System (INIS)

    2013-06-01

    After a presentation of the Manche Storage Centre (CSM), the first French centre of surface storage of weakly and moderately radioactive wastes, of its history, its buildings and activities, of the multi-layer cover, of the water management system (installation, controls, sampling), this report describes the measures related to nuclear safety (principles and objectives, prevention measures, technical measures, regulatory plan of control of the Centre and of its environment, control of releases from storage installations, quality organisation, archiving system). It describes measures related to radiation protection: principles, staff dosimetry, and personnel safety. The next part presents the nuclear event scale (INES) and indicates that no incident occurred. The effluents and releases from the Centre are then addressed: origin, locations and results of radiological controls of rainfalls, of risky effluents, of underground waters, of rivers, impacts of the Centre on its environment (releases in the sea, in rivers). The management of conventional and nuclear wastes produced by the Centre is reviewed as well as the actions related to information and transparency. Recommendations of the CHSCT are reported

  12. National Waste Terminal Storage Program: information management plan. Volume II. Plan description

    International Nuclear Information System (INIS)

    1977-05-01

    A comprehensive information management plan to provide for the systematic processing of the large amounts of internally prepared and externally acquired documentation that will accrue to the Office of Waste Isolation (OWI) during the next decade is outlined. The Information Management Plan of the National Waste Terminal Storage (NWTS) Program is based on time-proven procedures developed by government and industry for the requirements determination, acquisition, and administration of documentation. The NWTS Information Management Plan is designed to establish the basis for the planning, development, implementation, operation and maintenance of the NWTS Information Management System. This plan will help assure that documentation meets required quality standards and that each organization's needs are reflected when soliciting documentation from subcontractors. An example would be the Quality Assurance documentation requirement necessary to comply with eventual NRC licensing regulations. The provisions of the NWTS Information Management Plan will apply to all documentation from OWI contractors, subcontractors, and suppliers, and to OWI organizations for documentation prepared periodically for external dissemination.

  13. Large permanent magnet quadrupoles for an electron storage ring

    International Nuclear Information System (INIS)

    Herb, S.W.

    1987-01-01

    We have built large high quality permanent magnet quadrupoles for use as interaction region quadrupoles in the Cornell Electron Storage Ring where they must operate in the 10 kG axial field of the CLEO experimental detector. We describe the construction and the magnetic measurement and tuning procedures used to achieve the required field quality and stability. (orig.)

  14. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  15. Using Large Scale Test Results for Pedagogical Purposes

    DEFF Research Database (Denmark)

    Dolin, Jens

    2012-01-01

    The use and influence of large scale tests (LST), both national and international, has increased dramatically within the last decade. This process has revealed a tension between the legitimate need for information about the performance of the educational system and teachers to inform policy......, and the teachers' and students' use of this information for pedagogical purposes in the classroom. We know well how the policy makers interpret and use the outcomes of such tests, but we know less about how teachers make use of LSTs to inform their pedagogical practice. An important question is whether...... there is a contradiction between the political system's use of LST and teachers' (possible) pedagogical use of LST. And if yes: What is the contradiction based on? This presentation will give some results from a systematic review on how tests have influenced the pedagogical practice. The research revealed many of the fatal...

  16. Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Lei

    2012-01-01

    A new formation framework for large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Then, leaders form the formation's geometric shape using global formation information, and followers track their own virtual leaders to form a line formation using local information. The formation control laws of leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.
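
    The follower control law described above is based on consensus algorithms. The sketch below shows a generic first-order consensus rule driving a handful of agents into a line formation defined by position offsets; the communication graph, gains and offsets are illustrative assumptions rather than the controller from the article.

```python
# Minimal sketch of a consensus-based formation law: each vehicle steers toward
# its neighbours' positions shifted by the desired formation offsets.
# Offsets, gains, graph and initial conditions are illustrative only.
import numpy as np

n = 5
offsets = np.array([[i * 2.0, 0.0] for i in range(n)])      # desired line formation
A = np.zeros((n, n))                                         # adjacency: chain graph
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

pos = np.random.default_rng(0).uniform(-5, 5, size=(n, 2))   # random initial positions
dt, gain = 0.05, 1.0
for _ in range(400):
    err = pos - offsets                                       # formation-frame coordinates
    vel = gain * (A @ err - A.sum(axis=1, keepdims=True) * err)   # Laplacian consensus term
    pos = pos + dt * vel

# rows converge to a common point, i.e. the offsets (the formation) are achieved
print(np.round(pos - offsets, 3))
```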

  17. Experimental Investigation of Jet-Induced Mixing of a Large Liquid Hydrogen Storage Tank

    Science.gov (United States)

    Lin, C. S.; Hasan, M. M.; Vandresar, N. T.

    1994-01-01

    Experiments have been conducted to investigate the effect of fluid mixing on the depressurization of a large liquid hydrogen storage tank. The test tank is approximately ellipsoidal, having a volume of 4.89 m³ and an average wall heat flux of 4.2 W/m² due to external heat input. A mixer unit was installed near the bottom of the tank to generate an upward directed axial jet flow normal to the liquid-vapor interface. Mixing tests were initiated after achieving thermally stratified conditions in the tank either by the introduction of hydrogen gas into the tank or by self-pressurization due to ambient heat leak through the tank wall. The subcooled liquid jet directed towards the liquid-vapor interface by the mixer induced vapor condensation and caused a reduction in tank pressure. Tests were conducted at two jet submergence depths for jet Reynolds numbers from 80,000 to 495,000 and Richardson numbers from 0.014 to 0.52. Results show that the rate of tank pressure change is controlled by the competing effects of subcooled jet flow and the free convection boundary layer flow due to external tank wall heating. It is shown that existing correlations for mixing time and vapor condensation rate based on small scale tanks may not be applicable to large scale liquid hydrogen systems.
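
    The jet Reynolds and Richardson number ranges quoted above can be checked with a quick order-of-magnitude calculation. The sketch below uses common textbook definitions and approximate liquid-hydrogen properties; the nozzle diameter, jet velocity, superheat and liquid depth are assumed values, not the actual test conditions.

```python
# Back-of-the-envelope check of the jet parameters quoted above. Definitions and
# property values are common textbook forms/values, not from the paper, so treat
# the results as order-of-magnitude illustrations only.
g = 9.81            # m/s^2
rho = 70.8          # liquid hydrogen density, kg/m^3 (approximate, ~20 K)
mu = 1.3e-5         # dynamic viscosity, Pa*s (approximate)
beta = 0.016        # thermal expansion coefficient, 1/K (approximate)

d_jet = 0.02        # jet nozzle diameter, m (assumed)
v_jet = 1.0         # jet exit velocity, m/s (assumed)
dT = 1.0            # liquid stratification, K (assumed)
L = 1.0             # characteristic liquid depth, m (assumed)

Re = rho * v_jet * d_jet / mu                 # inertia vs. viscosity
Ri = g * beta * dT * L / v_jet**2             # buoyancy vs. jet inertia
print(f"Re = {Re:.3g}, Ri = {Ri:.3g}")
```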

  18. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    Science.gov (United States)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the
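
    A useful way to see what both modelling approaches ultimately estimate is a simple piston-flow calculation of vadose-zone travel time and stored nitrate. The sketch below uses illustrative values for recharge, water content, depth to the water table and pore-water nitrate concentration; none of these numbers are results for the Loess Plateau.

```python
# Simple piston-flow estimate of unsaturated-zone travel time and nitrate storage,
# the kind of first-order number both approaches above ultimately produce.
# All input values are illustrative assumptions.
recharge = 0.05          # groundwater recharge, m/yr
theta = 0.25             # average volumetric water content of the vadose zone (-)
depth_to_wt = 40.0       # depth to the water table, m
c_no3 = 50.0             # mean nitrate concentration in pore water, mg/L = g/m^3

travel_time = depth_to_wt * theta / recharge          # years for water to reach the water table
nitrate_storage = depth_to_wt * theta * c_no3 / 1000  # kg of nitrate per m^2 of land surface

print(f"travel time ~ {travel_time:.0f} yr, stored nitrate ~ {nitrate_storage:.2f} kg/m^2")
```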

  19. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    Science.gov (United States)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model that employs a parameterized turbulence model and does not resolve the fine scale dynamics generated by the flow around buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities including Aliso Canyon, Honor Rancho and MacDonald Island at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft and tower based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and
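
    The one-way coupling described above nudges the LES flow field toward the mesoscale (WRF) field. A minimal sketch of such a nudging update is shown below; the synthetic wind fields, grid size and relaxation time are assumptions, and the real FDS/WRF coupling operates on full 3-D fields alongside the flow solver's own physics.

```python
# Sketch of the one-way "nudging" idea: relax the fine-scale (LES) velocity toward
# the coarse mesoscale value with a chosen relaxation time. Fields, grid and
# relaxation time are invented for illustration; this is not the FDS/WRF code.
import numpy as np

nx = 50
u_les = np.random.default_rng(0).normal(5.0, 2.0, nx)   # fine-scale wind component, m/s
u_meso = np.full(nx, 6.0)                                # coarse-model wind interpolated to the LES grid
tau = 60.0                                               # nudging (relaxation) time scale, s (assumed)
dt = 1.0                                                 # LES time step, s (assumed)

for _ in range(300):                                     # advance 5 simulated minutes
    # in a real solver the physics tendencies would be added here; only nudging is shown
    u_les += dt * (u_meso - u_les) / tau

print(f"mean LES wind after nudging: {u_les.mean():.2f} m/s")
```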

  20. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  1. Building Participation in Large-scale Conservation: Lessons from Belize and Panama

    Directory of Open Access Journals (Sweden)

    Jesse Guite Hastings

    2015-01-01

    Motivated by biogeography and a desire for alignment with the funding priorities of donors, the twenty-first century has seen big international NGOs shifting towards a large-scale conservation approach. This shift has meant that even before stakeholders at the national and local scale are involved, conservation programmes often have their objectives defined and funding allocated. This paper uses the experiences of Conservation International's Marine Management Area Science (MMAS) programme in Belize and Panama to explore how to build participation at the national and local scale while working within the bounds of the current conservation paradigm. Qualitative data about MMAS was gathered through a multi-sited ethnographic research process, utilising document review, direct observation, and semi-structured interviews with 82 informants in Belize, Panama, and the United States of America. Results indicate that while a large-scale approach to conservation disadvantages early national and local stakeholder participation, this effect can be mediated through focusing engagement efforts, paying attention to context, building horizontal and vertical partnerships, and using deliberative processes that promote learning. While explicit consideration of geopolitics and local complexity alongside biogeography in the planning phase of a large-scale conservation programme is ideal, actions taken by programme managers during implementation can still have a substantial impact on conservation outcomes.

  2. Mapping spatial patterns of denitrifiers at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31,500 km² region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of the denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, therefore facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.

  3. Pulsed rf systems for large storage rings

    International Nuclear Information System (INIS)

    Wilson, P.B.

    1979-03-01

    The possibility is considered that by using a pulsed rf system a substantial reduction can be made in the rf power requirement for the next generation of large storage rings. For a ring with a sufficiently large circumference, the time between bunch passages, T_b, can exceed the cavity filling time, T_f. As the ratio T_b/T_f increases, it is clear that at some point the average power requirement can be reduced by pulsing the rf to the cavities. In this mode of operation, the rf power is turned on a filling time or so before the arrival of a bunch and is switched off again at the time of bunch passage. There is no rf energy in the accelerating structure, and hence no power dissipation, for most of the period between bunches.
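
    The scaling argument above can be made concrete with a rough duty-factor estimate: if the rf is on for only a couple of filling times per bunch passage, the average wall dissipation drops roughly in that proportion. The numbers below (filling time, bunch spacing, on-time multiplier) are assumed for illustration, and effects such as beam loading are ignored.

```python
# Rough estimate of the average-power saving from pulsing the rf.
# All parameter values are assumptions, not figures from the report.
T_f = 50e-6        # cavity filling time, s (assumed)
T_b = 500e-6       # time between bunch passages, s (assumed, T_b >> T_f)
on_factor = 2.0    # rf kept on for about two filling times per passage (assumed)

duty = min(1.0, on_factor * T_f / T_b)       # fraction of time the cavities are energised
print(f"T_b/T_f = {T_b/T_f:.0f}, duty factor ~ {duty:.2f}, "
      f"average-power reduction ~ {(1 - duty) * 100:.0f}%")
```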

  4. Analysis of Utilization of Fecal Resources in Large-scale Livestock and Poultry Breeding in China

    Directory of Open Access Journals (Sweden)

    XUAN Meng

    2018-02-01

    The purpose of this paper is to systematically investigate the serious problems of livestock and poultry breeding in China and the technical demands for promoting the utilization of manure. Based on the status quo of large-scale livestock and poultry farming in typical areas of China, statistics and analysis of the modes and proportions of manure resource utilization were carried out. The statistical method was applied to the nationally identified large-scale farms whose total pollutant reduction was in accordance with the "12th Five-Year Plan" standards. The results showed that there were differences in the modes of manure resource utilization across farming scales and livestock types: (1) hogs, dairy cattle and beef cattle together accounted for more than 75% of the agricultural manure storage; (2) laying hens and broiler chickens accounted for about 65% of the total organic manure production. The major modes of resource utilization of dung and urine were related to the natural characteristics, agricultural production methods, farming scale and economic development level of the area. It was concluded that unreasonable planning, a lack of cleaning during breeding, and poor selection of manure utilization modes were the major problems in China's large-scale livestock and poultry fecal resource utilization.

  5. Economic analysis of using above ground gas storage devices for compressed air energy storage system

    Science.gov (United States)

    Liu, Jinchao; Zhang, Xinjing; Xu, Yujie; Chen, Zongyan; Chen, Haisheng; Tan, Chunqing

    2014-12-01

    Above ground gas storage devices for compressed air energy storage (CAES) come in three types: air storage tanks, gas cylinders, and gas storage pipelines. A cost model of these gas storage devices is established on the basis of whole life cycle cost (LCC) analysis. The optimum parameters of the three types are determined by calculating the theoretical metallic raw material consumption of the three devices and considering the difficulties of manufacture and the influence of the number of gas storage devices. The LCCs of the three types are comprehensively analyzed and compared. The results reveal that the cost of the gas storage pipeline type is lower than that of the other two types. This study may serve as a reference for designing large-scale CAES systems.
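
    The structure of such a whole life cycle cost comparison can be sketched in a few lines: capital cost plus discounted maintenance over the device lifetime. The figures below are placeholders chosen only to illustrate the calculation, not the cost data or conclusions of the study.

```python
# Minimal whole-life-cycle cost comparison of candidate air-storage devices.
# All capital costs, maintenance fractions, lifetimes and the discount rate are placeholders.
def life_cycle_cost(capital, annual_maintenance, lifetime_yr, discount_rate):
    """Present value of capital plus discounted maintenance over the lifetime."""
    pv_maint = sum(annual_maintenance / (1 + discount_rate) ** t
                   for t in range(1, lifetime_yr + 1))
    return capital + pv_maint

devices = {
    "air storage tank":     dict(capital=1.00e6, annual_maintenance=2.0e4, lifetime_yr=30),
    "gas cylinder bank":    dict(capital=1.20e6, annual_maintenance=1.5e4, lifetime_yr=30),
    "gas storage pipeline": dict(capital=0.85e6, annual_maintenance=1.0e4, lifetime_yr=30),
}

for name, p in devices.items():
    lcc = life_cycle_cost(p["capital"], p["annual_maintenance"], p["lifetime_yr"], 0.05)
    print(f"{name:20s}  LCC ~ {lcc / 1e6:.2f} M$")
```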

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  7. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  8. Restoring large-scale brain networks in PTSD and related disorders: a proposal for neuroscientifically-informed treatment interventions

    Directory of Open Access Journals (Sweden)

    Ruth A. Lanius

    2015-03-01

    Background: Three intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD). Objective: (1) To describe three main large-scale networks of the human brain; (2) to discuss the functioning of these neural networks in PTSD and related symptoms; and (3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders. Method: Literature relevant to this commentary was reviewed. Results: Increasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed. Conclusions: Neuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.

  9. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  10. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems...... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...... limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its......

  11. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October. Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation with science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  12. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test with a view to ensuring the safety of light water reactors was started in fiscal 1976 based on the special account act for power source development promotion measures by the entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980. Thereupon, the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of the mock-ups of pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  13. Evaluation of economics of spent fuel storage techniques

    International Nuclear Information System (INIS)

    Yamaji, Kenji; Nagano, Koji

    1988-01-01

    Various spent fuel storage techniques are evaluated in terms of required costs. The unit storage cost for each spent fuel storage scenario is calculated based on the total cost required for the scenario including capital expenditure, operation cost, maintenance cost and transport cost. Intermediate storage may be performed in relatively small facilities in the plant or in independent large-scale facilities installed away from the plant. Dry casks or water pools are assumed to be used in in-plant storage facilities while vaults may also be employed in independent facilities. Evaluation is made for these different cases. In in-plant facilities, dry cask storage is found to be more economical in all cases than water pool storage, especially when large-sized casks are employed. In independent facilities, on the other hand, the use of vaults is the most desirable because the required capital expenditure is the lowest due to economies of scale. Dry cask storage is less expensive than water pool storage also in independent facilities. The annual discount rate has a relatively small influence on the unit cost for storage. An estimated unit cost for storage in independent storage facilities is shown separately for facilities with a capacity of 1,000 tons, 3,000 tons or 5,000 tons. The report also outlines the economics of spent fuel storage in overseas facilities (Finland, Sweden and U.S.A.). (Nogami, K.)
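
    The unit storage cost described above is essentially the present value of capital, operation/maintenance and transport costs divided by the facility capacity. The sketch below shows that calculation with placeholder figures; the capacity, costs, storage period and discount rate are assumptions, not values from the evaluation.

```python
# Sketch of how a unit storage cost (cost per tonne of spent fuel) can be formed
# from the cost components listed above. All input figures are placeholders.
capacity_t = 3000.0          # facility capacity, tonnes of spent fuel (assumed)
capital = 300e6              # capital expenditure, $ (assumed)
annual_om = 10e6             # operation + maintenance, $/yr (assumed)
transport = 20e6             # total transport cost over the project, $ (assumed)
years = 40                   # storage period, yr (assumed)
r = 0.03                     # annual discount rate (assumed)

pv_om = sum(annual_om / (1 + r) ** t for t in range(1, years + 1))
total_pv_cost = capital + pv_om + transport
unit_cost = total_pv_cost / capacity_t
print(f"unit storage cost ~ {unit_cost:,.0f} $/t over {years} yr")
```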

  14. Mass storage technology in networks

    Science.gov (United States)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transactional rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as archival use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  15. Quantum information generation, storage and transmission based on nuclear spins

    Science.gov (United States)

    Zaharov, V. V.; Makarov, V. I.

    2018-05-01

    A new approach to quantum information generation, storage and transmission is proposed. It is shown that quantum information generation and storage using an ensemble of N electron spins encounter unresolvable implementation problems (at least at the present time). As an alternative implementation we discuss two promising radical systems, one with N equivalent nuclear spins and another with N nonequivalent nuclear spins. Detailed analysis shows that only the radical system containing N nonequivalent nuclei is perfectly matched for quantum information generation, storage and transmission. We develop a procedure based on pulsed electron paramagnetic resonance (EPR) and we apply it to the radical system with the set of nonequivalent nuclei. The resulting EPR spectrum contains 2^N transition lines, where N is the number of atoms with nuclear spin 1/2, and each of these lines may be encoded with a determined qudit sequence. For encoding the EPR lines we propose to submit the radical system to two magnetic pulses in the direction perpendicular to the z axis of the reference frame. As a result, the radical system impulse response may be measured, stored and transmitted through the communications channel. Confirming our development, the ab initio analysis of the system with three anion radicals was performed, showing agreement between the simulations and the theoretical predictions. The developed method may be easily adapted for quantum information generation, storage, processing and transmission in quantum computing and quantum communications applications.

  16. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

    A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”) and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy however there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints and their potential to explain the large scale cosmic anomalies.

  17. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10⁵ and 10⁸ and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied.

  18. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    Science.gov (United States)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  19. Lie construction affects information storage under high memory load condition.

    Directory of Open Access Journals (Sweden)

    Yuqiu Liu

    Previous studies indicate that lying consumes cognitive resources, especially working memory (WM) resources. Considering the dual functions that WM might play in lying: holding the truth-related information and turning the truth into lies, the present study examined the relationship between the information storage and processing in the lie construction. To achieve that goal, a deception task based on the old/new recognition paradigm was designed, which could manipulate two levels of WM load (low-load task using 4 items and high-load task using 6 items) during the deception process. The analyses based on the amplitude of the contralateral delay activity (CDA), a proved index of the number of representations being held in WM, showed that the CDA amplitude was lower in the deception process than that in the truth telling process under the high-load condition. In contrast, under the low-load condition, no CDA difference was found between the deception and truth telling processes. Therefore, we deduced that the lie construction and information storage compete for WM resources; when the available WM resources cannot meet this cognitive demand, the WM resources occupied by the information storage would be consumed by the lie construction.

  20. Lie construction affects information storage under high memory load condition.

    Science.gov (United States)

    Liu, Yuqiu; Wang, Chunjie; Jiang, Haibo; He, Hongjian; Chen, Feiyan

    2017-01-01

    Previous studies indicate that lying consumes cognitive resources, especially working memory (WM) resources. Considering the dual functions that WM might play in lying: holding the truth-related information and turning the truth into lies, the present study examined the relationship between the information storage and processing in the lie construction. To achieve that goal, a deception task based on the old/new recognition paradigm was designed, which could manipulate two levels of WM load (low-load task using 4 items and high-load task using 6 items) during the deception process. The analyses based on the amplitude of the contralateral delay activity (CDA), a proved index of the number of representations being held in WM, showed that the CDA amplitude was lower in the deception process than that in the truth telling process under the high-load condition. In contrast, under the low-load condition, no CDA difference was found between the deception and truth telling processes. Therefore, we deduced that the lie construction and information storage compete for WM resources; when the available WM resources cannot meet this cognitive demand, the WM resources occupied by the information storage would be consumed by the lie construction.

  1. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  2. Hydrogen transport and storage in engineered glass microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Rambach, G.D.

    1994-04-20

    New, high-strength, hollow, glass microspheres filled with pressurized hydrogen exhibit storage densities which make them attractive for bulk hydrogen storage and transport. The hoop stress at failure of our engineered glass microspheres is about 150,000 psi, permitting a three-fold increase in pressure limit and storage capacity above commercial microspheres, which fail at wall stresses of 50,000 psi. For this project, microsphere material and structure will be optimized for storage capacity and charge/discharge kinetics to improve their commercial practicality. Microsphere production scale-up will be performed, directed towards large-scale commercial use. Our analysis relating glass microspheres for hydrogen transport with infrastructure and economics indicates that pressurized microspheres can be economically competitive with other forms of bulk rail and truck transport such as hydride beds, cryocarbons and pressurized tube transports. For microspheres made from advanced materials and processes, analysis will also be performed to identify the appropriate applications of the microspheres considering property variables, and different hydrogen infrastructure, end use, production and market scenarios. This report presents some of the recent modelling results for large beds of glass microspheres in hydrogen storage applications. It includes plans for experiments to identify the properties relevant to large-bed hydrogen transport and storage applications, of the best, currently producible, glass microspheres. This work began in March, 1994. Project successes will be manifest in the matching of current glass microspheres with a useful application in hydrogen bulk transport and storage, and in developing microsphere materials and processes that increase the storage density and reduce the storage energy requirement.
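
    The quoted failure stresses translate directly into maximum fill pressures through the thin-walled-sphere relation σ = P·r/(2t). The sketch below applies it with an assumed sphere radius and wall thickness; only the 50,000 and 150,000 psi failure stresses come from the abstract, so the resulting pressures are illustrative.

```python
# Thin-walled-sphere estimate of the hydrogen pressure a microsphere can hold:
# membrane (hoop) stress sigma = P*r/(2*t), so P_max = 2*sigma_fail*t/r.
# Radius and wall thickness are assumed; the failure stresses are from the text.
PSI_TO_PA = 6894.76

sigma_commercial = 50_000 * PSI_TO_PA     # Pa, commercial microspheres
sigma_engineered = 150_000 * PSI_TO_PA    # Pa, engineered microspheres

r = 25e-6      # microsphere radius, m (assumed)
t = 1e-6       # wall thickness, m (assumed)

for label, sigma in [("commercial", sigma_commercial), ("engineered", sigma_engineered)]:
    p_max = 2 * sigma * t / r
    print(f"{label:10s}: max fill pressure ~ {p_max / 1e6:.0f} MPa ({p_max / PSI_TO_PA:.0f} psi)")
```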

  3. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  4. Large entropy derived from low-frequency vibrations and its implications for hydrogen storage

    Science.gov (United States)

    Wang, Xiaoxia; Chen, Hongshan

    2018-02-01

    Adsorption and desorption are driven by the competition between energy and entropy, but the entropy effect is often ignored in hydrogen storage, and the optimal adsorption strength for ambient storage is controversial in the literature. This letter investigated the adsorption states of the H2 molecule on M-B12C6N6 (M = Li, Na, Mg, Ca, and Sc) and analyzed the correlation among the zero point energy (ZPE), the entropy change, and the adsorption energy and their effects on the delivery capacities. The ZPE gives a large correction to the adsorption energy due to the light mass of hydrogen. The computations show that the potential energies along the spherical surface centered at the alkali metals are very flat, and this leads to a large entropy (~70 J/(mol K)) of the adsorbed H2 molecules. The entropy change can compensate the enthalpy change effectively, and ambient storage can be realized with relatively weak adsorption of ΔH = -12 kJ/mol. The results are encouraging and instructive for the design of hydrogen storage materials.
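
    The enthalpy/entropy compensation argument can be illustrated with ΔG(T) = ΔH − TΔS. In the sketch below, ΔH = −12 kJ/mol and the ~70 J/(mol K) adsorbed-phase entropy come from the abstract, while the gas-phase entropy of H2 is a textbook value, so the resulting ΔS and crossover temperature are rough estimates rather than results from the paper.

```python
# Worked illustration of enthalpy/entropy compensation for H2 adsorption:
# Delta_G(T) = Delta_H - T*Delta_S. Delta_H and the adsorbed-phase entropy are
# taken from the abstract; the gas-phase entropy of H2 (~130.7 J/(mol K) at
# standard conditions) is a textbook value, so Delta_S is only an estimate.
dH = -12e3                    # adsorption enthalpy, J/mol (from the abstract)
S_gas = 130.7                 # standard entropy of H2(g), J/(mol K) (textbook value)
S_ads = 70.0                  # entropy retained by adsorbed H2, J/(mol K) (from the abstract)
dS = S_ads - S_gas            # adsorption entropy change, J/(mol K)

for T in (77.0, 200.0, 298.0):
    dG = dH - T * dS
    print(f"T = {T:5.1f} K  ->  Delta_G = {dG / 1e3:+.1f} kJ/mol")

T_balance = dH / dS           # temperature where Delta_G crosses zero
print(f"Delta_G = 0 near T ~ {T_balance:.0f} K")
```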

  5. Characteristic mega-basin water storage behavior using GRACE.

    Science.gov (United States)

    Reager, J T; Famiglietti, James S

    2013-06-01

    A long-standing challenge for hydrologists has been a lack of observational data on global-scale basin hydrological behavior. With observations from NASA's Gravity Recovery and Climate Experiment (GRACE) mission, hydrologists are now able to study terrestrial water storage for large river basins (>200,000 km²), with monthly time resolution. Here we provide results of a time series model of basin-averaged GRACE terrestrial water storage anomaly and Global Precipitation Climatology Project precipitation for the world's largest basins. We address the short (10 year) length of the GRACE record by adopting a parametric spectral method to calculate frequency-domain transfer functions of storage response to precipitation forcing and then generalize these transfer functions based on large-scale basin characteristics, such as percent forest cover and basin temperature. Among the parameters tested, results show that temperature, soil water-holding capacity, and percent forest cover are important controls on relative storage variability, while basin area and mean terrain slope are less important. The derived empirical relationships were accurate (0.54 ≤ E_f ≤ 0.84) in modeling global-scale water storage anomaly time series for the study basins using only precipitation, average basin temperature, and two land-surface variables, offering the potential for synthesis of basin storage time series beyond the GRACE observational period. Such an approach could be applied toward gap filling between current and future GRACE missions and for predicting basin storage given predictions of future precipitation.
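
    The spectral idea above amounts to estimating a transfer function H(f) from precipitation forcing to storage anomaly. The study uses a parametric method; the sketch below demonstrates a simple nonparametric variant (ratio of cross- to auto-spectral density) on synthetic monthly series with scipy, so the series, segment length and implied basin memory are invented for illustration.

```python
# Sketch of a frequency-domain transfer function from precipitation to storage,
# estimated as H(f) = P_xy(f) / P_xx(f). Synthetic monthly series are used here
# (the real study uses GPCP precipitation and GRACE storage anomalies).
import numpy as np
from scipy import signal

rng = np.random.default_rng(42)
n_months = 240
precip = rng.gamma(shape=2.0, scale=40.0, size=n_months)              # mm/month, synthetic
anomaly = np.convolve(precip - precip.mean(),                          # storage responds with
                      np.exp(-np.arange(12) / 3.0), mode="full")[:n_months]  # a decaying memory

fs = 12.0                                                              # samples per year
f, Pxx = signal.welch(precip, fs=fs, nperseg=60)
_, Pxy = signal.csd(precip, anomaly, fs=fs, nperseg=60)
H = Pxy / Pxx                                                          # empirical transfer function

for freq, gain in zip(f[1:5], np.abs(H[1:5])):
    print(f"f = {freq:.2f} cycles/yr   |H| = {gain:.2f}")
```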

  6. Modelling financial markets with agents competing on different time scales and with different amount of information

    Science.gov (United States)

    Wohlmuth, Johannes; Andersen, Jørgen Vitting

    2006-05-01

    We use agent-based models to study the competition among investors who use trading strategies with different amount of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amount of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in the decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in time scale used in the decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.

  7. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    The mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw materials powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube was made with a large-scale hollow capsule, long-length claddings were manufactured from it, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of the large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and the microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) The long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, the manufacturing process for mother tubes using large-scale hollow capsules is promising. (author)

  8. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by Amazon Web Services. The time includes the import and export of the data using the Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log running metrics during data processing and to monitor multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
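
    One of Rainbow's listed improvements is splitting large sequence files for better downstream load balance. A minimal illustration of that idea (hypothetical file names; not Rainbow's actual implementation) is to chunk a FASTQ file on 4-line record boundaries:

        import itertools

        def split_fastq(path, reads_per_chunk=1_000_000, prefix="chunk"):
            """Split a FASTQ file into chunks on 4-line record boundaries."""
            with open(path) as fh:
                chunk_idx = 0
                while True:
                    # Take the next reads_per_chunk records (4 lines each).
                    lines = list(itertools.islice(fh, 4 * reads_per_chunk))
                    if not lines:
                        break
                    out_name = f"{prefix}_{chunk_idx:04d}.fastq"
                    with open(out_name, "w") as out:
                        out.writelines(lines)
                    chunk_idx += 1
            return chunk_idx

        # Example (hypothetical input file):
        # n = split_fastq("sample_R1.fastq", reads_per_chunk=500_000)
        # print(f"wrote {n} chunks")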

  9. Comparing centralised and decentralised anaerobic digestion of stillage from a large-scale bioethanol plant to animal feed production.

    Science.gov (United States)

    Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R

    2008-01-01

    A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m³ of storage volume would be needed. Secondly, membrane purification of the digestate, consisting of a decanter, microfiltration, and reverse osmosis, was investigated. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.

  10. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of the large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 of the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  11. Hierarchical storage of large volume of multidetector CT data using distributed servers

    Science.gov (United States)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in the PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software called OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.
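
    The peer-to-peer access described above relies on Bonjour (zeroconf/mDNS) service discovery so that workstations can find image servers without a central database. A minimal sketch of that discovery step, using the python-zeroconf package and a placeholder service type (the actual service type advertised by OsiriX/DICOM servers is an assumption here), might look like:

        import time
        from zeroconf import ServiceBrowser, Zeroconf

        SERVICE_TYPE = "_dicom-server._tcp.local."   # placeholder; depends on the server

        class ImageServerListener:
            """Collects image servers announced on the local network via Bonjour/mDNS."""

            def add_service(self, zc, type_, name):
                info = zc.get_service_info(type_, name)
                if info:
                    print(f"found {name} at {info.parsed_addresses()}:{info.port}")

            def remove_service(self, zc, type_, name):
                print(f"server {name} disappeared")

            def update_service(self, zc, type_, name):
                pass  # required by newer zeroconf versions

        zeroconf = Zeroconf()
        browser = ServiceBrowser(zeroconf, SERVICE_TYPE, ImageServerListener())
        try:
            time.sleep(5)          # browse for a few seconds
        finally:
            zeroconf.close()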

  12. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues and challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model performs robustly on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
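
    The abundance-dependent missingness described above (low-abundance peptides are more likely to be unquantified, so a missing value itself carries information) can be illustrated with a small simulation. The sketch below is a toy illustration of that mechanism only, not the SALPS model:

        import numpy as np

        rng = np.random.default_rng(1)

        n_peptides, n_runs = 2000, 20
        # True log2 abundances, varying across peptides and slightly across runs.
        true = (rng.normal(20.0, 2.0, size=(n_peptides, 1))
                + rng.normal(0.0, 0.3, size=(n_peptides, n_runs)))

        # Detection probability increases with abundance (logistic in log2 intensity),
        # so low-abundance peptides are preferentially missing.
        detect_prob = 1.0 / (1.0 + np.exp(-(true - 18.0)))
        observed = np.where(rng.random(true.shape) < detect_prob, true, np.nan)

        missing_rate = np.isnan(observed).mean(axis=1)
        print("overall missing fraction:", np.isnan(observed).mean().round(3))
        print("mean abundance, fully observed peptides:",
              true[missing_rate == 0].mean().round(2))
        print("mean abundance, peptides missing in >50% of runs:",
              true[missing_rate > 0.5].mean().round(2))
        # Naively averaging only the observed intensities would bias low-abundance
        # peptides upward, which is why the missingness mechanism has to be modeled.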

  13. Large-scale information flow in conscious and unconscious states: an ECoG study in monkeys.

    Directory of Open Access Journals (Sweden)

    Toru Yanagawa

    Full Text Available Consciousness is an emergent property of the complex brain network. In order to understand how consciousness is constructed, neural interactions within this network must be elucidated. Previous studies have shown that specific neural interactions between the thalamus and frontoparietal cortices; frontal and parietal cortices; and parietal and temporal cortices are correlated with levels of consciousness. However, due to technical limitations, the network underlying consciousness has not been investigated in terms of large-scale interactions with high temporal and spectral resolution. In this study, we recorded neural activity with dense electrocorticogram (ECoG) arrays and used spectral Granger causality to generate a more comprehensive network that relates to consciousness in monkeys. We found that neural interactions were significantly different between conscious and unconscious states in all combinations of cortical region pairs. Furthermore, the difference in neural interactions between conscious and unconscious states could be represented in 4 frequency-specific large-scale networks with unique interaction patterns: 2 networks were related to consciousness and showed peaks in alpha and beta bands, while the other 2 networks were related to unconsciousness and showed peaks in theta and gamma bands. Moreover, networks in the unconscious state were shared amongst 3 different unconscious conditions, which were induced either by ketamine and medetomidine, by propofol, or by sleep. Our results provide a novel picture in which the difference between conscious and unconscious states is characterized by a switch in frequency-specific modes of large-scale communication across the entire cortex, rather than the cessation of interactions between specific cortical regions.
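
    As a simplified stand-in for the spectral Granger causality used in the study (which requires spectral factorization of a fitted VAR model), the sketch below runs a time-domain Granger causality test between two synthetic channels with statsmodels; it only illustrates the directional-interaction idea, not the authors' frequency-resolved analysis:

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(2)
        n = 2000
        x = np.zeros(n)
        y = np.zeros(n)
        # Synthetic "ECoG-like" channels: x drives y with a 2-sample lag.
        for t in range(2, n):
            x[t] = 0.6 * x[t - 1] + rng.normal()
            y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + rng.normal()

        # Test whether the second column (x) Granger-causes the first column (y).
        data = np.column_stack([y, x])
        results = grangercausalitytests(data, maxlag=4, verbose=False)
        for lag, res in results.items():
            fstat, pval = res[0]["ssr_ftest"][:2]
            print(f"lag {lag}: F = {fstat:.1f}, p = {pval:.2e}")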

  14. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
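
    The wavelet multiresolution step described above splits a monthly series into components at different timescales so that each can be related to its own large-scale predictor. A minimal sketch with PyWavelets, using synthetic data standing in for the Seine precipitation or streamflow series, might be:

        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        n_months = 512
        t = np.arange(n_months)
        # Synthetic monthly series: annual cycle + slow interannual oscillation + noise.
        series = (np.sin(2 * np.pi * t / 12)
                  + 0.5 * np.sin(2 * np.pi * t / 96)
                  + 0.3 * rng.normal(size=n_months))

        level = 5
        coeffs = pywt.wavedec(series, "db4", level=level)   # [cA5, cD5, cD4, ..., cD1]

        # Reconstruct each detail component separately by zeroing all other coefficients.
        components = {}
        for i in range(1, len(coeffs)):
            masked = [np.zeros_like(c) for c in coeffs]
            masked[i] = coeffs[i]
            components[f"D{level - i + 1}"] = pywt.waverec(masked, "db4")[:n_months]

        for name, comp in components.items():
            print(f"{name}: variance fraction {comp.var() / series.var():.2f}")
        # Each component (roughly annual vs. multi-annual timescales here) can then be
        # regressed on its own large-scale SLP predictor, as in the ESD approach above.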

  15. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  16. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
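
    The amplitude-modulation diagnostic mentioned above is commonly quantified by correlating the large-scale velocity signal with the filtered envelope of the small-scale fluctuations. A minimal sketch of that decomposition (synthetic signal; the cutoff values are illustrative assumptions, not those of the paper) is:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        rng = np.random.default_rng(4)
        fs = 10_000.0                       # Hz, hot-wire-like sampling rate
        t = np.arange(0, 5, 1 / fs)

        # Synthetic velocity: a large-scale motion that modulates small-scale noise.
        large = np.sin(2 * np.pi * 2.0 * t)                 # slow, "large-scale" content
        small = (1 + 0.5 * large) * rng.normal(size=t.size) # amplitude-modulated small scales
        u = large + 0.2 * small

        # Scale decomposition with a low-pass / high-pass split at an assumed cutoff.
        cutoff = 20.0                                        # Hz (illustrative)
        b, a = butter(4, cutoff / (fs / 2), btype="low")
        u_large = filtfilt(b, a, u)
        u_small = u - u_large

        # Envelope of the small scales, low-pass filtered to the large-scale band.
        envelope = np.abs(hilbert(u_small))
        envelope_L = filtfilt(b, a, envelope)

        R = np.corrcoef(u_large, envelope_L)[0, 1]
        print(f"amplitude-modulation correlation coefficient R = {R:.2f}")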

  17. SIMULATION FRAMEWORK FOR REGIONAL GEOLOGIC CO2 STORAGE ALONG ARCHES PROVINCE OF MIDWESTERN UNITED STATES

    Energy Technology Data Exchange (ETDEWEB)

    Sminchak, Joel

    2012-09-30

    This report presents final technical results for the project Simulation Framework for Regional Geologic CO2 Storage Infrastructure along Arches Province of the Midwest United States. The Arches Simulation project was a three-year effort designed to develop a simulation framework for regional geologic carbon dioxide (CO2) storage infrastructure along the Arches Province through development of a geologic model and advanced reservoir simulations of large-scale CO2 storage. The project included five major technical tasks: (1) compilation of geologic, hydraulic and injection data on Mount Simon, (2) development of model framework and parameters, (3) preliminary variable density flow simulations, (4) multi-phase model runs of regional storage scenarios, and (5) implications for regional storage feasibility. The Arches Province is an informal region in northeastern Indiana, northern Kentucky, western Ohio, and southern Michigan where sedimentary rock formations form broad arch and platform structures. In the province, the Mount Simon sandstone is an appealing deep saline formation for CO2 storage because of the combination of reservoir thickness and permeability. Many CO2 sources are located in proximity to the Arches Province, and the area is adjacent to coal-fired power plants along the Ohio River Valley corridor. Geophysical well logs, rock samples, drilling logs, and geotechnical tests were evaluated for a 500,000 km² study area centered on the Arches Province. Hydraulic parameters and historical operational information were also compiled from Mount Simon wastewater injection wells in the region. This information was integrated into a geocellular model that depicts the parameters and conditions in a numerical array. The geologic and hydraulic data were integrated into a three-dimensional grid of porosity and permeability, which are key parameters regarding fluid flow and pressure buildup due to CO2 injection. Permeability data

  18. Resource Provisioning in Large-Scale Self-Organizing Distributed Systems

    Science.gov (United States)

    2012-06-01

    Due to scale, competition, and advertising revenues, services such as email, social networking, office document processing, file storage and...

  19. Aquifer thermal energy storage - A feasibility study for large scale demonstration

    Science.gov (United States)

    Skinner, W. V.; Supkow, D. J.

    Engineering procedures necessary for aquifer thermal energy storage (ATES), based on studies of the Magothy Aquifer on Long Island, NY, are presented, with chilled winter water pumped into the aquifer and reclaimed in summer months for air conditioning. The choice of aquifer involves necessary volume, flow rate, efficiency of thermal recovery, and avoidance of conflict with other users; utilization depends on choice of appropriate piping, heat exchangers, and well construction to prevent degradation of the aquifer. The methods employed to probe the Magothy for suitability are described, including drilling an asymmetric well cluster for observation, and 48 hr pumping and 8 hr recovery. Transmissivity was found to vary from 8,000 to 29,000 sq ft/day. A doublet well was then drilled and water withdrawn, chilled, and returned. Later withdrawal indicated a 46% thermal recovery, with computer models projecting 80% with additional cycling. The study verified the feasibility of ATES, which can be expanded with additional demand.

  20. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with real data, and
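
    The core loop described above alternates correlation imaging with a forward gravity calculation for each cell. The sketch below illustrates only the simplest ingredients, treating each cell as a point mass and computing the vertical gravity anomaly it contributes at the surface stations (a highly simplified stand-in for the rectangular-prism forward formula used in practice):

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def forward_gz(stations, cells, masses):
            """Vertical gravity anomaly (m/s^2) at surface stations from point-mass cells."""
            gz = np.zeros(len(stations))
            for (xc, yc, zc), m in zip(cells, masses):
                dx = stations[:, 0] - xc
                dy = stations[:, 1] - yc
                dz = zc - stations[:, 2]                 # cells lie below the stations
                r3 = (dx**2 + dy**2 + dz**2) ** 1.5
                gz += G * m * dz / r3
            return gz

        # Illustrative geometry: a 10 x 10 grid of surface stations and two buried cells.
        xs, ys = np.meshgrid(np.linspace(0, 1000, 10), np.linspace(0, 1000, 10))
        stations = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
        cells = [(300.0, 500.0, 200.0), (700.0, 500.0, 400.0)]
        masses = [5e9, 8e9]                               # kg, excess masses

        observed = forward_gz(stations, cells, masses)    # would be measured data in practice
        predicted = forward_gz(stations, cells, [4e9, 9e9])  # trial model

        # Simple misfit check between the trial-model response and the "data".
        misfit = np.linalg.norm(observed - predicted) / np.linalg.norm(observed)
        print(f"relative misfit of trial model: {misfit:.2%}")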

  1. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  2. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many public key infrastructures (PKIs) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI facilitates handling of the trust issues that arise in a large-scale healthcare network comprising multi-domain PKIs.

  3. A large-scale perspective on stress-induced alterations in resting-state networks

    Science.gov (United States)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed and how it relates to subjective experience are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change, involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with change in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.
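
    The enrichment step described above asks whether a given pair of anatomical structures is connected by more of the stress-altered parcel pairs than expected by chance. A standard way to score that is a hypergeometric (one-sided Fisher) test; the sketch below uses illustrative counts, not the study's data:

        from scipy.stats import hypergeom

        # Illustrative counts (not the study's values):
        N = 20_000   # total parcel pairs tested
        K = 490      # parcel pairs whose connectivity changed after stress
        n = 300      # parcel pairs linking a given structure pair (e.g. thalamus-cortex)
        k = 25       # changed pairs that fall within that structure pair

        # P(X >= k) under the hypergeometric null: changed pairs spread at random.
        p_value = hypergeom.sf(k - 1, N, K, n)
        fold_enrichment = (k / n) / (K / N)
        print(f"fold enrichment = {fold_enrichment:.1f}, p = {p_value:.2e}")
        # In practice this is repeated for every structure pair and corrected
        # for multiple comparisons.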

  4. Inference of functional properties from large-scale analysis of enzyme superfamilies.

    Science.gov (United States)

    Brown, Shoshana D; Babbitt, Patricia C

    2012-01-02

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.
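
    A protein similarity network of the kind described above is typically built by thresholding pairwise sequence similarity scores (for example, BLAST E-values) and treating proteins as nodes and significant similarities as edges. A minimal sketch with networkx, using made-up scores, is:

        import math
        import networkx as nx

        # Hypothetical pairwise BLAST-like E-values between protein sequences.
        pairwise_evalues = {
            ("protA", "protB"): 1e-80,
            ("protA", "protC"): 1e-35,
            ("protB", "protC"): 1e-50,
            ("protC", "protD"): 1e-8,
            ("protD", "protE"): 1e-3,   # weak similarity, falls below the cutoff
        }

        EVALUE_CUTOFF = 1e-20           # keep edges only for strong similarities

        G = nx.Graph()
        for (a, b), evalue in pairwise_evalues.items():
            if evalue <= EVALUE_CUTOFF:
                # Use -log10(E) as the edge weight: stronger similarity = heavier edge.
                G.add_edge(a, b, weight=-math.log10(evalue))

        # Connected components approximate candidate (sub)families at this threshold.
        for component in nx.connected_components(G):
            print(sorted(component))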

  5. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  6. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  7. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
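
    The "sketching" idea described above replaces a long observation vector with a much shorter random projection of it, so that the inverse analysis scales with the information content rather than the raw data size. A minimal numpy illustration of the reduction step (a generic Gaussian sketch of a linear observation model, not the RGA/PCGA implementation, and with far fewer observations than the 10^7 quoted above) is:

        import numpy as np

        rng = np.random.default_rng(5)

        n_obs, n_params, n_sketch = 10_000, 200, 500   # many observations, few parameters

        # Generic linear observation model d = J m + noise (stand-in for the forward model).
        J = rng.normal(size=(n_obs, n_params))
        m_true = rng.normal(size=n_params)
        d = J @ m_true + 0.01 * rng.normal(size=n_obs)

        # Gaussian sketching matrix S compresses the observations from n_obs to n_sketch.
        S = rng.normal(size=(n_sketch, n_obs)) / np.sqrt(n_sketch)
        J_s, d_s = S @ J, S @ d

        # Least-squares solutions with and without sketching.
        m_full, *_ = np.linalg.lstsq(J, d, rcond=None)
        m_sketch, *_ = np.linalg.lstsq(J_s, d_s, rcond=None)

        err = np.linalg.norm(m_sketch - m_full) / np.linalg.norm(m_full)
        print(f"relative difference between sketched and full solutions: {err:.2%}")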

  8. Preliminary analytical study on the feasibility of using reinforced concrete pile foundations for renewable energy storage by compressed air energy storage technology

    Science.gov (United States)

    Tulebekova, S.; Saliyev, D.; Zhang, D.; Kim, J. R.; Karabay, A.; Turlybek, A.; Kazybayeva, L.

    2017-11-01

    Compressed air energy storage technology is one of the promising methods offering high reliability, economic feasibility and low environmental impact. Current applications of the technology are mainly limited to energy storage for power plants using large-scale underground caverns. This paper explores the possibility of making use of reinforced concrete pile foundations to store renewable energy generated from solar panels or windmills attached to building structures. The energy will be stored as compressed air inside pile foundations with hollow sections. Given the relatively small volume of storage provided by the foundation, the required storage pressure is expected to be higher than that in a large-scale underground cavern. The high air pressure, typically associated with a large temperature increase, combined with structural loads, will place the pile foundation in a complicated loading condition, which might cause structural and geotechnical safety issues. This paper presents a preliminary analytical study on the performance of the pile foundation subjected to high pressure, large temperature increase and structural loads. Finite element analyses on pile foundation models, which are built from selected prototype structures, have been conducted. The analytical study identifies maximum stresses in the concrete of the pile foundation under combined pressure, temperature change and structural loads. Recommendations have been made for the use of reinforced concrete pile foundations for renewable energy storage.
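
    A first-order feel for the storage pressures involved can be obtained from the ideal-gas work recoverable from a fixed hollow-pile volume. The sketch below (illustrative pile dimensions and pressures, assumed isothermal expansion) estimates the stored energy for a few storage pressures; it is a rough scoping calculation, not the paper's finite element analysis:

        import math

        # Illustrative hollow pile geometry (assumptions, not from the paper).
        inner_diameter = 0.4          # m
        length = 20.0                 # m
        volume = math.pi * (inner_diameter / 2) ** 2 * length   # m^3 of air storage

        p0 = 101_325.0                # Pa, ambient pressure

        def isothermal_stored_energy(p_storage, V, p_ambient=p0):
            """Ideal-gas work recoverable by isothermal expansion to ambient pressure (J)."""
            return p_storage * V * math.log(p_storage / p_ambient)

        for p_bar in (50, 100, 200):
            E = isothermal_stored_energy(p_bar * 1e5, volume)
            print(f"{p_bar:4d} bar in {volume:.2f} m^3  ->  {E / 3.6e6:.1f} kWh")
        # The small volume of a single pile means that useful energy content only
        # appears at pressures well above those of cavern-scale CAES, which is the
        # loading concern raised in the abstract.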

  9. Fundamental Challenges for Modeling Electrochemical Energy Storage Systems at the Atomic Scale.

    Science.gov (United States)

    Groß, Axel

    2018-04-23

    There is a strong need to improve the efficiency of electrochemical energy storage, but progress is hampered by significant technological and scientific challenges. This review describes the potential contribution of atomic-scale modeling to the development of more efficient batteries, with a particular focus on first-principles electronic structure calculations. Numerical and theoretical obstacles are discussed, along with ways to overcome them, and some recent examples are presented illustrating the insights into electrochemical energy storage that can be gained from quantum chemical studies.

  10. SUBTASK 2.19 – OPERATIONAL FLEXIBILITY OF CO2 TRANSPORT AND STORAGE

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Melanie; Schlasner, Steven; Sorensen, James; Hamling, John

    2014-12-31

    Carbon dioxide (CO2) is produced in large quantities during electricity generation and by industrial processes. These CO2 streams vary in terms of both composition and mass flow rate, sometimes substantially. The impact of a varying CO2 stream on pipeline and storage operation is not fully understood in terms of either operability or infrastructure robustness. This study was performed to summarize basic background from the literature on the topic of operational flexibility of CO2 transport and storage, but the primary focus was on compiling real-world lessons learned about flexible operation of CO2 pipelines and storage from both large-scale field demonstrations and commercial operating experience. Modeling and pilot-scale results of research in this area were included to illustrate some of the questions that exist relative to operation of carbon capture and storage (CCS) projects with variable CO2 streams. It is hoped that this report’s real-world findings provide readers with useful information on the topic of transport and storage of variable CO2 streams. The real-world results were obtained from two sources. The first source consisted of five full-scale, commercial transport–storage projects: Sleipner, Snøhvit, In Salah, Weyburn, and Illinois Basin–Decatur. These scenarios were reviewed to determine the information that is available about CO2 stream variability/intermittency on these demonstration-scale projects. The five projects all experienced mass flow variability or an interruption in flow. In each case, pipeline and/or injection engineers were able to accommodate any issues that arose. Significant variability in composition has not been an issue at these five sites. The second source of real-world results was telephone interviews conducted with experts in CO2 pipeline transport, injection, and storage during which commercial anecdotal information was acquired to augment that found during the literature search of the five full-scale projects. The

  11. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  12. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  13. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged by having little experience in how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, those undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads in your project.

  14. Recent Advances in Understanding Large Scale Vapour Explosions

    International Nuclear Information System (INIS)

    Board, S.J.; Hall, R.W.

    1976-01-01

    description of efficient large scale explosions it will be necessary to consider three stages: a) the setting up of a quasi-stable initial configuration; b) the triggering of this configuration; c) the propagation of the explosion. In this paper we consider each stage in turn, reviewing the relevant experimental information and theory to see to what extent the requirements for energetic explosions, and the physical processes that can satisfy these requirements, are understood. We pay particular attention to an attractively simple criterion for explosiveness, suggested by Fauske, that the contact temperature should exceed the temperature for spontaneous nucleation of the coolant, because on this criterion, sodium and UO2 in particular are not explosive

  15. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  16. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32, DTIC. The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. Recoverable fragments: "...arises, to reduce the spread in the LSGT 50% gap value." "The worst charges, such as those with the highest or lowest densities, the largest re-pressed..."

  17. Modelling study, efficiency analysis and optimisation of large-scale Adiabatic Compressed Air Energy Storage systems with low-temperature thermal storage

    International Nuclear Information System (INIS)

    Luo, Xing; Wang, Jihong; Krupke, Christopher; Wang, Yue; Sheng, Yong; Li, Jian; Xu, Yujie; Wang, Dan; Miao, Shihong; Chen, Haisheng

    2016-01-01

    Highlights: • The paper presents an A-CAES system thermodynamic model with low-temperature thermal energy storage integration. • The initial parameter value ranges for A-CAES system simulation are identified from the study of a CAES plant in operation. • The strategies of system efficiency improvement are investigated via a parametric study with a sensitivity analysis. • Various system configurations are discussed for analysing the efficiency improvement potentials. - Abstract: The key feature of Adiabatic Compressed Air Energy Storage (A-CAES) is the reuse of the heat generated from the air compression process at the stage of air expansion. This increases the complexity of the whole system since the heat exchange and thermal storage units must have the capacities and performance to match the air compression/expansion units. This creates a strong demand for a whole-system modelling and simulation tool for A-CAES system optimisation. The paper presents a new whole-system mathematical model for A-CAES with a simulation implementation, and the model is developed with consideration of lowering the capital cost of the system. The paper then focuses on the study of system efficiency improvement strategies via parametric analysis and system structure optimisation. The paper investigates how the system efficiency is affected by the system component performance and parameters. From the study, the key parameters are identified which have a dominant influence on improving the system efficiency. The study is extended to optimal system configuration, and recommendations are made for achieving higher efficiency, which provides useful guidance for A-CAES system design.
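
    The central idea of A-CAES recalled in the highlights, reusing compression heat at the expansion stage, can be illustrated with an ideal-gas estimate of the compressor outlet temperature and the heat available for thermal storage. The figures below are illustrative assumptions, not parameters of the modelled plant:

        # Ideal-gas estimate of adiabatic compression temperature rise and storable heat.
        gamma = 1.4            # ratio of specific heats for air
        cp = 1005.0            # J/(kg K)
        T_in = 293.0           # K, compressor inlet temperature (assumed)
        m_dot = 50.0           # kg/s air mass flow (assumed)

        def stage_outlet_temperature(T1, pressure_ratio, eta_isentropic=0.85):
            """Outlet temperature of one adiabatic compression stage (ideal gas)."""
            T2s = T1 * pressure_ratio ** ((gamma - 1) / gamma)   # isentropic outlet temperature
            return T1 + (T2s - T1) / eta_isentropic              # actual, with stage efficiency

        # Three-stage compression with intercooling back to T_in between stages.
        for stage, pr in enumerate((4.0, 4.0, 4.0), start=1):
            T_out = stage_outlet_temperature(T_in, pr)
            q_stored = m_dot * cp * (T_out - T_in)               # heat sent to thermal storage, W
            print(f"stage {stage}: outlet {T_out - 273.15:5.1f} degC, "
                  f"heat to storage {q_stored / 1e6:.1f} MW")
        # Lower per-stage pressure ratios (more intercooled stages) keep the storage
        # temperature down, which is one way to match a low-temperature thermal store.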

  18. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  19. The phrase “information storage and retrieval” (IS&R)

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2015-01-01

    Scholars have uncovered abundant data about the history of the term “information”, as well as some of its many combined phrases (e.g. “information science”, “information retrieval” and “information technology”). Many other compounds involving “information” seem, however, not to have a known origi...... yet. In this article, further information about the phrase “information storage and retrieval” is provided. To know the history of terms and their associated concepts is an important prescription against poor terminological phrasing and theoretical confusion....

  20. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  1. Seismic Response Analysis and Test of 1/8 Scale Model for a Spent Fuel Storage Cask

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Han; Park, C. G.; Koo, G. H.; Seo, G. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yeom, S. H. [Chungnam Univ., Daejeon (Korea, Republic of); Choi, B. I.; Cho, Y. D. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2005-07-15

    Seismic response tests of a 1/8 scale spent fuel dry storage cask model are performed for the typical 1940 El Centro and Kobe earthquakes. This report firstly focuses on data generation by seismic response tests of a free-standing storage cask model to check the overturning possibility of the storage cask and the slipping displacement on a concrete slab bed. Variations in seismic load magnitude and cask/bed interface friction are considered in the tests. The test results show that the model gives an overturning response for an extreme condition only. An FEM model is built for the 1/8 scale spent fuel dry storage cask test model using the available 3D contact conditions in ABAQUS/Explicit. The input load for this analysis is the El Centro earthquake, and the friction coefficients are obtained from the test results. The penalty and kinematic contact methods of ABAQUS are used for the mechanical contact formulation. The analysis method was verified against the rocking angle obtained in the seismic response tests. The kinematic contact method with an adequate normal contact stiffness showed good agreement with the tests. Based on the established analysis method for the 1/8 scale model, seismic response analyses of a full-scale model are performed for design and beyond-design seismic loads.

  2. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
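
    The compositing step described above averages the large-scale meteorological fields over the identified extreme-precipitation days and compares that average with climatology. A minimal sketch of such a composite (synthetic arrays standing in for reanalysis fields and a station precipitation series) is:

        import numpy as np

        rng = np.random.default_rng(6)

        n_days, nlat, nlon = 3650, 40, 60          # ~10 years of daily fields
        slp = rng.normal(1013.0, 8.0, size=(n_days, nlat, nlon))   # sea level pressure, hPa
        precip = rng.gamma(0.5, 4.0, size=n_days)                  # daily precipitation, mm

        # Define extreme days as those above the 99th percentile of daily precipitation.
        threshold = np.percentile(precip, 99)
        extreme_days = precip >= threshold

        # Composite anomaly: mean field on extreme days minus the all-days climatology.
        composite_anomaly = slp[extreme_days].mean(axis=0) - slp.mean(axis=0)

        print(f"{extreme_days.sum()} extreme days above {threshold:.1f} mm")
        print("composite SLP anomaly range:",
              composite_anomaly.min().round(2), "to", composite_anomaly.max().round(2), "hPa")
        # The within-composite variability of these extreme-day fields is what the
        # self-organizing map step would then classify into distinct pattern types.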

  3. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced in processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the voluminous data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and the challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  4. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  5. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters at the scale of 1000s of processors that will be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  6. Storage in alluvial deposits controls the timing of particle delivery from large watersheds, filtering upland erosional signals and delaying benefits from watershed best management practices

    Science.gov (United States)

    Pizzuto, J. E.; Skalak, K.; Karwan, D. L.

    2017-12-01

    Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady-state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws extending up to 10⁴ years, while the probability of storage defined by sediment budgets varies from 0.1 km⁻¹ for small drainage basins to 0.001 km⁻¹ for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10²-10⁴ years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the
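
    A toy Monte Carlo sketch (not from the study; all parameter values are hypothetical) of the routing idea described above, combining a per-kilometre storage probability with power-law waiting times and a constant transport velocity:

        import random

        rng = random.Random(42)

        def travel_time_years(length_km, p_store_per_km=0.01, velocity_km_per_yr=500.0,
                              t_min_yr=1.0, alpha=1.5):
            """Toy routing model: a particle transits a corridor of given length,
            entering storage with a fixed probability per kilometre; each storage
            event draws a power-law (Pareto) waiting time before remobilization."""
            t = length_km / velocity_km_per_yr                      # time in active transport
            storage_events = sum(rng.random() < p_store_per_km for _ in range(int(length_km)))
            for _ in range(storage_events):
                t += t_min_yr * (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto(alpha) draw
            return t

        # Median delivery time for a 1000 km pathway (illustrative parameters only)
        times = sorted(travel_time_years(1000.0) for _ in range(2000))
        print("median delivery time ~", round(times[len(times) // 2]), "years")

    With these illustrative numbers the storage terms, not the in-transport term, dominate the delivery time, mirroring the qualitative conclusion above.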

  7. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  8. Experimental facilities for large-scale and full-scale study of hydrogen accidents

    Energy Technology Data Exchange (ETDEWEB)

    Merilo, E.; Groethe, M.; Colton, J. [SRI International, Poulter Laboratory, Menlo Park, CA (United States); Chiba, S. [SRI Japan, Tokyo (Japan)

    2007-07-01

    This paper summarized some of the work performed at SRI International over the past 5 years addressing safety issues for the hydrogen-based economy. Researchers at SRI International have conducted experiments at the Corral Hollow Experiment Site (CHES) near Livermore, California to obtain fundamental data on hydrogen explosions for risk assessment. In particular, large-scale hydrogen tests were conducted using homogeneous mixtures of hydrogen in volumes from 5.3 m{sup 3} to 300 m{sup 3} to represent scenarios involving fuel cell vehicles as well as transport and storage facilities. Experiments have focused on unconfined deflagrations of hydrogen and air, and detonations of hydrogen in a semi-open space to measure free-field blast effects; the use of blast walls as a mitigation technique; turbulent enhancement of hydrogen combustion due to obstacles within the mixture, and determination of when deflagration-to-detonation transition occurs; the effect of confined hydrogen releases and explosions that could originate from an interconnecting hydrogen pipeline; and large and small accidental releases of hydrogen. The experiments were conducted to improve the prediction of hydrogen explosions and the capabilities for performing risk assessments, and to develop mitigation techniques. Measurements included hydrogen concentration; flame speed; blast overpressure; heat flux; and high-speed, standard, and infrared video. The data collected in these experiments are used to correlate computer models and to facilitate the development of codes and standards. This work contributes to better safety technology by evaluating the effectiveness of different blast mitigation techniques. 13 refs., 13 figs.

  9. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  10. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  11. Large-Scale Total Water Storage and Water Flux Changes over the Arid and Semiarid Parts of the Middle East from GRACE and Reanalysis Products

    Science.gov (United States)

    Forootan, E.; Safari, A.; Mostafaie, A.; Schumacher, M.; Delavar, M.; Awange, J. L.

    2017-05-01

    Previous studies indicate that water storage over a large part of the Middle East has decreased over the last decade. Variability in the total (hydrological) water flux (TWF, i.e., precipitation minus evapotranspiration minus runoff) and water storage changes of the Tigris-Euphrates river basin and Iran's six major basins (Khazar, Persian, Urmia, Markazi, Hamun, and Sarakhs) over 2003-2013 is assessed in this study. Our investigation is based on the TWF estimated as temporal derivatives of terrestrial water storage (TWS) changes from the Gravity Recovery and Climate Experiment (GRACE) products and those from the reanalysis products of ERA-Interim and MERRA-Land. An inversion approach is applied to consistently estimate the spatio-temporal changes of soil moisture and groundwater storage compartments of the seven basins during the study period from GRACE TWS, altimetry, and land surface model products. The influence of TWF trends on separated water storage compartments is then explored. Our results, estimated as basin averages, indicate negative trends in the maxima of TWF peaks that reach up to -5.2 and -2.6 (mm/month/year) over 2003-2013, respectively, for the Urmia and Tigris-Euphrates basins, which are most likely due to the reported meteorological drought. Maximum amplitudes of the soil moisture compartment exhibit negative trends of -11.1, -6.6, -6.1, -4.8, -4.7, -3.8, and -1.2 (mm/year) for the Urmia, Tigris-Euphrates, Khazar, Persian, Markazi, Sarakhs, and Hamun basins, respectively. Strong groundwater storage decreases are found within the Khazar (-8.6 mm/year) and Sarakhs (-7.0 mm/year) basins. The magnitude of water storage decline in the Urmia and Tigris-Euphrates basins is found to be larger than the decrease in the monthly accumulated TWF, indicating a contribution of human water use, as well as surface and groundwater flow, to the storage decline over the study area.
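
    For reference, the water-balance relation behind the assessed quantity (using the definitions given in the abstract) is:

        \mathrm{TWF}(t) \;=\; P(t) - \mathrm{ET}(t) - R(t) \;\approx\; \frac{d\,\mathrm{TWS}(t)}{dt}

    where P is precipitation, ET evapotranspiration, R runoff, and TWS the terrestrial water storage observed by GRACE.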

  12. Deriving Scaling Factors Using a Global Hydrological Model to Restore GRACE Total Water Storage Changes for China's Yangtze River Basin

    Science.gov (United States)

    Long, Di; Yang, Yuting; Yoshihide, Wada; Hong, Yang; Liang, Wei; Chen, Yaning; Yong, Bin; Hou, Aizhong; Wei, Jiangfeng; Chen, Lu

    2015-01-01

    This study used a global hydrological model (GHM), PCR-GLOBWB, which simulates surface water storage changes, natural and human induced groundwater storage changes, and the interactions between surface water and subsurface water, to generate scaling factors by mimicking low-pass filtering of GRACE signals. Signal losses in GRACE data were subsequently restored by the scaling factors from PCR-GLOBWB. Results indicate greater spatial heterogeneity in scaling factor from PCR-GLOBWB and CLM4.0 than that from GLDAS-1 Noah due to comprehensive simulation of surface and subsurface water storage changes for PCR-GLOBWB and CLM4.0. Filtered GRACE total water storage (TWS) changes applied with PCR-GLOBWB scaling factors show closer agreement with water budget estimates of TWS changes than those with scaling factors from other land surface models (LSMs) in China's Yangtze River basin. Results of this study develop a further understanding of the behavior of scaling factors from different LSMs or GHMs over hydrologically complex basins, and could be valuable in providing more accurate TWS changes for hydrological applications (e.g., monitoring drought and groundwater storage depletion) over regions where human-induced interactions between surface water and subsurface water are intensive.
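
    A minimal sketch, assuming the common least-squares definition of a gridpoint scaling factor (variable names are illustrative, and the actual PCR-GLOBWB workflow is considerably more involved):

        import numpy as np

        def scaling_factor(model_tws, model_tws_filtered):
            # least-squares k minimizing || model_tws - k * model_tws_filtered ||^2
            x = np.asarray(model_tws_filtered, dtype=float)
            y = np.asarray(model_tws, dtype=float)
            return float(np.dot(x, y) / np.dot(x, x))

        # Illustrative usage: restore a filtered GRACE series with the model-derived factor
        # (pcr_tws, pcr_tws_filtered and grace_tws_filtered are hypothetical arrays)
        # k = scaling_factor(pcr_tws, pcr_tws_filtered)
        # grace_tws_restored = k * grace_tws_filtered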

  13. Algorithms for Large-Scale Astronomical Problems

    Science.gov (United States)

    2013-08-01


  14. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the probability with which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
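
    A hedged sketch of the reduce-then-cluster step described above using scikit-learn; the array shapes, kernel choice and cluster count are placeholders, and plain (unsupervised) kernel PCA stands in for the supervised variant used in the study:

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.cluster import KMeans

        # Placeholder data: one row per flood event, columns are flattened grid points
        # of the vertically integrated moisture flux in the days preceding the event.
        rng = np.random.default_rng(0)
        fields = rng.standard_normal((200, 5000))

        # Project events into a low-dimensional space.
        embedding = KernelPCA(n_components=3, kernel="rbf").fit_transform(fields)

        # Cluster events to separate, e.g., regional moisture recycling from
        # teleconnected moisture transport patterns.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
        print(np.bincount(labels))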

  15. Safeguarding of large scale reprocessing and MOX plants

    International Nuclear Information System (INIS)

    Howsley, R.; Burrows, B.; Longevialle, H. de; Kuroi, H.; Izumi, A.

    1997-01-01

    In May 1997, the IAEA Board of Governors approved the final measures of the '93+2' safeguards strengthening programme, thus improving the international non-proliferation regime by enhancing the effectiveness and efficiency of safeguards verification. These enhancements are not, however, a revolution in current practices, but rather an important step in the continuous evolution of the safeguards system. The principles embodied in 93+2, for broader access to information and increased physical access, already apply, in a pragmatic way, to large scale reprocessing and MOX fabrication plants. In these plants, qualitative measures and process monitoring play an important role in addition to accountancy and material balance evaluations in attaining the safeguards goals. This paper will reflect on the safeguards approaches adopted for these large bulk handling facilities and draw analogies, conclusions and lessons for the forthcoming implementation of the 93+2 Programme. (author)

  16. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  17. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  18. Energy Storage Systems

    Science.gov (United States)

    Elliott, David

    2017-07-01

    As renewable energy use expands there will be a need to develop ways to balance its variability. Storage is one of the options. Presently the main emphasis is on systems storing electrical power in advanced batteries (many of them derivatives of parallel developments in the electric vehicle field), as well as via liquid air storage, compressed air storage, super-capacitors and flywheels, and, the leader so far, pumped hydro reservoirs. In addition, new systems are emerging for hydrogen generation and storage, feeding fuel cell power production. Heat (and cold) is also a storage medium and some systems exploit thermal effects as part of wider energy management activity. Some of the more exotic ones even try to use gravity on a large scale. This short book looks at all the options, their potentials and their limits. There are no clear winners, with some being suited to short-term balancing and others to longer-term storage. The eventual mix adopted will be shaped by the pattern of development of other balancing measures, including smart-grid demand management and super-grid imports and exports.

  19. Inference of Functional Properties from Large-scale Analysis of Enzyme Superfamilies*

    Science.gov (United States)

    Brown, Shoshana D.; Babbitt, Patricia C.

    2012-01-01

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies. PMID:22069325

  20. Methodology to determine the technical performance and value proposition for grid-scale energy storage systems :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Loose, Verne William; Donnelly, Matthew K.; Trudnowski, Daniel J.

    2012-12-01

    As the amount of renewable generation increases, the inherent variability of wind and photovoltaic systems must be addressed in order to ensure the continued safe and reliable operation of the nation's electricity grid. Grid-scale energy storage systems are uniquely suited to address the variability of renewable generation and to provide other valuable grid services. The goal of this report is to quantify the technical performance required to provide different grid benefits and to specify the proper techniques for estimating the value of grid-scale energy storage systems.

  1. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  2. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  3. Public perceptions and preferences regarding large scale implementation of six CO2 capture and storage technologies. Well-informed and well-considered opinions versus uninformed pseudo-opinions of the Dutch public

    International Nuclear Information System (INIS)

    De Best-Waldhober, M.; Daamen, D.

    2006-03-01

    Three research projects were carried out within the framework of the programme 'Sustainable use of fossil fuels'. Two research projects focussed on technical aspects of advanced fossil fuel options with CO2 capture and storage (CCS). The focus of the third project was on studying informed opinions of the general public regarding advanced fossil fuel options. This study has investigated the choices the general public would make after having received and evaluated expert information on the consequences pertaining to these choices. The method to collect these informed preferences is called the Information-Choice Questionnaire (ICQ). By comparing informed public preferences, obtained through administration of the ICQ, with current public opinions and preferences regarding fossil fuel options, collected in a more conventional survey, the outcomes of this project can indicate what options would be considered acceptable given sufficient knowledge, and how much and in what respect the current situation deviates from this possible future situation. Answering these questions constitutes the main goal of this project. This report describes the development and deployment of the Information-Choice Questionnaire on advanced fossil fuel options. It furthermore describes the parallel deployment of a more traditional questionnaire without expert information and a second measure of this more traditional questionnaire. This report encompasses all parts of the project 'Informed opinions of the general public as a tool for policy measures regarding advanced fossil fuel options'. This report will explain the ICQ methodology and its usefulness for this project. Furthermore, the development of the current ICQ, the method of the ICQ and of the more traditional questionnaires, and the results thereof, are described.

  4. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  5. Hybrid dual gate ferroelectric memory for multilevel information storage

    KAUST Repository

    Khan, Yasser; Caraveo-Frescas, Jesus Alfonso; Alshareef, Husam N.

    2015-01-01

    Here, we report hybrid organic/inorganic ferroelectric memory with multilevel information storage using transparent p-type SnO semiconductor and ferroelectric P(VDF-TrFE) polymer. The dual gate devices include a top ferroelectric field

  6. LOD-based clustering techniques for efficient large-scale terrain storage and visualization

    Science.gov (United States)

    Bao, Xiaohong; Pajarola, Renato

    2003-05-01

    Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works from out-of-core.

  7. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
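
    A heavily reduced sketch of what such a server-side Python driver can look like; the four processing stages are trivial stand-ins for the FARSIGHT C++ modules, and only the logging and thread-pool orchestration pattern is meant to be illustrative:

        import logging
        from concurrent.futures import ThreadPoolExecutor

        logging.basicConfig(level=logging.INFO)

        # Trivial stand-ins for the FARSIGHT modules (mosaicking / artifact
        # correction, segmentation, feature extraction); they only mimic data flow.
        def load_tile(path):
            return {"path": path, "data": [0] * 10}

        def correct_artifacts(image):
            return image

        def segment_cells(image):
            return [{"id": i} for i in range(3)]

        def extract_features(cells):
            return [{"cell": c["id"], "area": 1.0} for c in cells]

        def process_tile(path):
            logging.info("start %s", path)
            cells = segment_cells(correct_artifacts(load_tile(path)))
            features = extract_features(cells)
            logging.info("done %s: %d cells", path, len(cells))
            return features

        def run(tile_paths, n_workers=8):
            # the thread pool mirrors the multi-threaded server-side execution described above
            with ThreadPoolExecutor(max_workers=n_workers) as pool:
                return list(pool.map(process_tile, tile_paths))

        if __name__ == "__main__":
            run(["tile_%03d.tif" % i for i in range(4)])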

  8. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral brain tissue images surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels and 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  9. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
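
    For orientation, the quadratic programming subproblem solved at each SQP iteration has the generic form (a textbook statement, not the exact formulation of the report):

        \min_{d}\; g_k^{\top} d + \tfrac{1}{2}\, d^{\top} H_k\, d
        \quad \text{subject to} \quad c_E(x_k) + J_E(x_k)\, d = 0, \qquad c_I(x_k) + J_I(x_k)\, d \ge 0,

    where g_k is the objective gradient at x_k, J_E and J_I are the constraint Jacobians, and H_k is a quasi-Newton approximation to the Hessian of the Lagrangian; in the reduced-Hessian variant described above, only Z_k^T H_k Z_k is maintained, with Z_k a reduced-gradient basis for the null space of the working-set constraints.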

  10. Simulation of Porous Medium Hydrogen Storage - Estimation of Storage Capacity and Deliverability for a North German anticlinal Structure

    Science.gov (United States)

    Wang, B.; Bauer, S.; Pfeiffer, W. T.

    2015-12-01

    Large scale energy storage will be required to mitigate offsets between electric energy demand and the fluctuating electric energy production from renewable sources like wind farms, if renewables dominate energy supply. Porous formations in the subsurface could provide the large storage capacities required if chemical energy carriers, such as hydrogen gas produced during phases of energy surplus, are stored. This work assesses the behavior of a porous media hydrogen storage operation through numerical scenario simulation of a synthetic, heterogeneous sandstone formation formed by an anticlinal structure. The structural model is parameterized using data available for the North German Basin as well as data given for formations with similar characteristics. Based on the geological setting at the storage site, a total of 15 facies distributions is generated and the hydrological parameters are assigned accordingly. Hydraulic parameters are spatially distributed according to the facies present and include permeability, porosity, relative permeability and capillary pressure. The storage is designed to supply energy in times of deficiency on the order of seven days, which represents the typical time span of weather conditions with no wind. It is found that using five injection/extraction wells, 21.3 million sm³ of hydrogen gas can be stored and retrieved to supply 62,688 MWh of energy within 7 days. This requires a ratio of working to cushion gas of 0.59. The retrievable energy within this time represents the demand of about 450,000 people. Furthermore, it is found that longer storage times require larger gas volumes, while higher delivery rates additionally require more wells. The formation investigated here thus seems to offer sufficient capacity and deliverability to be used for a large scale hydrogen gas storage operation.
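
    A quick consistency check on the quoted figures, assuming the 62,688 MWh is drawn evenly over the seven-day period:

        \frac{62{,}688\ \mathrm{MWh}}{450{,}000\ \text{people} \times 7\ \text{days}} \;\approx\; 19.9\ \mathrm{kWh\;person^{-1}\;day^{-1}},

    which is of the same order as total per-capita electricity consumption (households plus industry) in Germany, so the quoted population figure appears internally consistent.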

  11. Identification and characterisation of factors affecting losses in the large-scale, non-ventilated bulk storage of wood chips and development of best storage practices

    Energy Technology Data Exchange (ETDEWEB)

    Garstang, J.; Weekes, A.; Poulter, R.; Bartlett, D.

    2002-07-01

    The report describes the findings of a study to determine the factors affecting the commercial storage of wood chips for biomass power generation in the UK. The UK's first such plant in North Yorkshire uses a mixture of forestry residues and short rotation coppice (SRC) willow, where problems with the stored fuel highlighted the need to determine best storage practices. Two wood chip piles were built (one with willow chips and the other with wood chips from broadleaf forestry residues) and monitored (moisture, temperature, chemical composition, spore numbers and species, heat and air flows, bulk density, etc). Local weather data was also obtained. Recommendations for future storage practices are made.

  12. An improved method for upscaling borehole thermal energy storage using inverse finite element modelling

    DEFF Research Database (Denmark)

    Tordrup, Karl Woldum; Poulsen, Søren Erbs; Bjørn, Henrik

    2017-01-01

    Dimensioning of large-scale borehole thermal energy storage (BTES) is inherently uncertain due to the natural variability of thermal conductivity and heat capacity in the storage volume. We present an improved method for upscaling a pilot BTES to full scale and apply the method to an operational...

  13. Software for large scale tracking studies

    International Nuclear Information System (INIS)

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references

  14. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while the crossover comes with the emergence of their inconsistency at later times before reaching a stable state, where Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies early in a pandemic disease.
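
    For reference, the two scaling laws discussed above are conventionally written as (standard forms, not specific to this article):

        \text{Zipf's law:}\quad f(r) \;\propto\; r^{-\alpha}, \qquad \text{Heaps' law:}\quad N(t) \;\propto\; t^{\beta}, \quad 0 < \beta \le 1,

    where f(r) is the frequency of the element of rank r and N(t) is the number of distinct elements observed after t observations.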

  15. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
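
    For reference, hyperuniformity is conventionally defined by the vanishing of the structure factor at long wavelengths, equivalently by number fluctuations that grow more slowly than the window volume (a standard definition, not specific to this paper):

        \lim_{k \to 0} S(k) \;=\; 0, \qquad \text{equivalently} \qquad \sigma_N^{2}(R) \;=\; \langle N_R^{2}\rangle - \langle N_R\rangle^{2} \;=\; o\!\left(R^{d}\right) \quad (R \to \infty),

    i.e., the variance of the number of particles in an observation window of radius R grows more slowly than the window volume.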

  16. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    Science.gov (United States)

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; a proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that accomplishes the controls of the ISO/IEC 27002:2013 standard and criteria of security and privacy from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  17. Digging, Damming or Diverting? Small-Scale Irrigation in the Blue Nile Basin, Ethiopia

    Directory of Open Access Journals (Sweden)

    Irit Eguavoen

    2012-10-01

    Full Text Available The diversity of small-scale irrigation in the Ethiopian Blue Nile basin comprises small dams, wells, ponds and river diversion. The diversity of irrigation infrastructure is partly a consequence of the topographic heterogeneity of the Fogera plains. Despite similar social-political conditions and the same administrative framework, irrigation facilities are established, used and managed differently, ranging from informal arrangements of households and 'water fathers' to water user associations, as well as from open access to irrigation schedules. Fogera belongs to Ethiopian landscapes that will soon transform as a consequence of large dams and huge irrigation schemes. Property rights to land and water are negotiated among a variety of old and new actors. This study, based on ethnographic, hydrological and survey data, synthesises four case studies to analyse the current state of small-scale irrigation. It argues that all water storage options have not only certain comparative advantages but also social constraints, and supports a policy of extending water storage 'systems' that combine and build on complementarities of different storage types instead of fully replacing diversity by large dams.

  18. Large-area perovskite nanowire arrays fabricated by large-scale roll-to-roll micro-gravure printing and doctor blading

    Science.gov (United States)

    Hu, Qiao; Wu, Han; Sun, Jia; Yan, Donghang; Gao, Yongli; Yang, Junliang

    2016-02-01

    Organic-inorganic hybrid halide perovskite nanowires (PNWs) show great potential applications in electronic and optoelectronic devices such as solar cells, field-effect transistors and photodetectors. It is very meaningful to fabricate ordered, large-area PNW arrays and greatly accelerate their applications and commercialization in electronic and optoelectronic devices. Herein, highly oriented and ultra-long methylammonium lead iodide (CH3NH3PbI3) PNW array thin films were fabricated by large-scale roll-to-roll (R2R) micro-gravure printing and doctor blading in ambient environments (humidity ~45%, temperature ~28 °C), which produced PNW lengths as long as 15 mm. Furthermore, photodetectors based on these PNWs were successfully fabricated on both silicon oxide (SiO2) and flexible polyethylene terephthalate (PET) substrates and showed moderate performance. This study provides low-cost, large-scale techniques to fabricate large-area PNW arrays with great potential applications in flexible electronic and optoelectronic devices.

  19. Storage the electric power: yes, it is indispensable and it is possible. Why, where, how

    International Nuclear Information System (INIS)

    2003-01-01

    This document describes the main characteristics of various electric power storage methods and their application domains. The large-scale storage options include hydraulic systems, compressed air systems, batteries and thermal storage. The small-scale storage options are electrochemical, such as accumulators and super-capacitors, mechanical, such as flywheels, magnetic, or based on the use of hydrogen. The first part presents the need for electric power storage, the second part the locations of such storage. The third part details the forms of storage. (A.L.B.)

  20. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Science.gov (United States)

    2010-01-01

    10 CFR 95.25, Section 95.25 (Energy, Nuclear Regulatory Commission (continued), Facility Security): Protection of National Security Information and Restricted Data in storage. (a) Secret matter, while...

  1. Temporal scaling in information propagation

    Science.gov (United States)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
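
    The reported temporal scaling can be summarized as (notation is illustrative):

        p_{ij}(\Delta t) \;\propto\; \Delta t^{-\gamma},

    where p_ij is the probability that a message propagates from individual i to individual j and Δt is the latency since their most recent interaction; the exponent γ is fitted from the data.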

  2. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
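
    A minimal sketch of two of the listed steps, band frequency waveform generation (step 2) and crude threshold-based spike detection (step 3), on placeholder data; this is not the authors' algorithm, only an illustration of the kind of operation being automated:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def band_waveform(signal, lo, hi, fs, order=4):
            # step (2): user-defined band frequency waveform via zero-phase band-pass filter
            b, a = butter(order, [lo, hi], btype="band", fs=fs)
            return filtfilt(b, a, signal)

        def detect_spikes(signal, fs, k=3.0):
            # step (3), crudely: threshold at k times a robust (median-based) noise estimate
            threshold = k * np.median(np.abs(signal)) / 0.6745
            idx = np.flatnonzero(signal > threshold)
            first_of_run = np.diff(np.concatenate(([-2], idx))) > 1   # keep run onsets only
            return idx[first_of_run] / fs

        fs = 1000.0
        eeg = np.random.default_rng(1).standard_normal(int(10 * fs))  # placeholder trace
        gamma = band_waveform(eeg, 30.0, 80.0, fs)                     # example gamma band
        print("spike times (s):", detect_spikes(gamma, fs)[:5])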

  3. Energy storage device with large charge separation

    Science.gov (United States)

    Holme, Timothy P.; Prinz, Friedrich B.; Iancu, Andrei T.

    2018-04-03

    High density energy storage in semiconductor devices is provided. There are two main aspects of the present approach. The first aspect is to provide high density energy storage in semiconductor devices based on formation of a plasma in the semiconductor. The second aspect is to provide high density energy storage based on charge separation in a p-n junction.

  4. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  5. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises some questions about the reliability and precision of such calculations. These problems become more pronounced in cases of elastic-plastic conditions of loading and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for good transfer of the results obtained to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also presented, mostly in connection with the application of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  6. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  7. Autonomous management of a recursive area hierarchy for large scale wireless sensor networks using multiple parents

    Energy Technology Data Exchange (ETDEWEB)

    Cree, Johnathan Vee [Washington State Univ., Pullman, WA (United States); Delgado-Frias, Jose [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-01

    Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three to five year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor networks and should configure the network in a way such that said higher level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher level functions with significant inherent benefits such as, but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.

  8. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    Science.gov (United States)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. Thus, a fully run-time configurable, high-performance, dependable storage concept is required that relies on only a minimal set of logic or software. The solution is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.
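
    As a toy illustration of erasure recovery (a single XOR parity page per group of pages, far simpler than the composite code proposed in the paper):

        def xor_pages(pages):
            parity = bytearray(len(pages[0]))
            for page in pages:
                for i, byte in enumerate(page):
                    parity[i] ^= byte
            return bytes(parity)

        def encode_group(data_pages):
            # outer code: append one XOR parity page per group of data pages
            return list(data_pages) + [xor_pages(data_pages)]

        def rebuild(group, lost_index):
            # any single erased page equals the XOR of all surviving pages in the group
            survivors = [p for i, p in enumerate(group) if i != lost_index]
            return xor_pages(survivors)

        pages = [bytes([value] * 8) for value in range(4)]   # four toy flash pages
        group = encode_group(pages)
        assert rebuild(group, 2) == pages[2]                 # page 2 erased and recovered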

  9. Electricity storage by gas pumping. An introduction to thermodynamic storage processes

    International Nuclear Information System (INIS)

    Ruer, Jacques

    2013-01-01

    To date, Pumped Hydro Storage (PHS) is practically the only technology used to store large quantities of electricity. There are, however, other ways to achieve the same goal. These are not yet well known, because the interest in large-scale storage is quite new. A complete family of storage technologies can be defined as 'Thermodynamic Storage Systems'. Their only common factor is that a gas is pumped and expanded in the process. If the gas is air taken from the atmosphere and discharged to it, the system is said to be 'an open system'. This is already developed in the form of Compressed Air Energy Storage (CAES). Different embodiments are possible, depending on the way the heat generated during the compression stage is conserved. The compressed air is generally stored in underground caverns created in deep salt formations. Two installations are presently operating and many projects are envisaged. If the gas circulates in a closed loop within the plant, the system is said to be 'a closed system'. In this case, the energy is stored as heat and/or cold at different temperature levels. A great variety of technologies can be imagined and are under development, using different gases (e.g. argon, CO2) and different temperature ranges. PHS and CAES require specific sites for water reservoirs or underground caverns. The closed systems can be installed basically anywhere. (author)
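
    As a rough orientation on the energies involved, the ideal isothermal compression (or expansion) work of the gas, and the temperature rise in the adiabatic limit whose heat must be managed, are given by the textbook relations (not taken from the document):

        W_{\mathrm{iso}} \;=\; p_1 V_1 \ln\!\frac{p_2}{p_1} \;=\; n R T \ln\!\frac{p_2}{p_1}, \qquad \left.\frac{T_2}{T_1}\right|_{\mathrm{adiabatic}} \;=\; \left(\frac{p_2}{p_1}\right)^{(\gamma - 1)/\gamma}.

    How the heat corresponding to that temperature rise is stored, rejected or re-supplied is precisely what distinguishes the different CAES embodiments mentioned above.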

  10. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW: the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics, and ocean surface temperature) could have significant global climatic effects. (Auth.)

  11. Information Storage and Management Storing, Managing, and Protecting Digital Information in Classic, Virtualized, and Cloud Environments

    CERN Document Server

    Services, EMC Education

    2012-01-01

    The new edition of a bestseller, now revised and updated throughout! This new edition of the unparalleled bestseller serves as a full training course all in one, and as the world's largest data storage company, EMC is the ideal author for such a critical resource. The book covers the components of a storage system and the different storage system models, while also offering essential new material that explores the advances in existing technologies and the emergence of the "Cloud", as well as updates and vital information on new technologies. It features a separate section on the emerging area of cloud computing.

  12. Aggregation of carbon dioxide sequestration storage assessment units

    Science.gov (United States)

    Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.

    2013-01-01

    The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
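
    The Cholesky-based step described above (inducing rank correlation among independently assessed marginal distributions before summing them to basin, regional, or national scale) can be sketched roughly as follows. This is an illustrative Iman-Conover-style implementation with assumed inputs and function names, not the USGS assessment code.

    import numpy as np

    def aggregate_saus(marginal_samples, corr, n_draws=10_000, seed=0):
        """Aggregate storage assessment units (SAUs) with induced rank correlation.

        marginal_samples : list of 1-D arrays, empirical samples for each SAU
        corr             : target (positive-definite) correlation matrix from expert elicitation
        Returns samples of the aggregated (summed) storage resource.
        """
        rng = np.random.default_rng(seed)
        n_sau = len(marginal_samples)
        # Correlated standard-normal scores via a Cholesky decomposition of corr
        z = rng.standard_normal((n_draws, n_sau)) @ np.linalg.cholesky(corr).T
        ranks = np.argsort(np.argsort(z, axis=0), axis=0)  # per-column ranks 0..n_draws-1
        # Re-order independent draws from each marginal so they share the rank structure of z
        out = np.empty((n_draws, n_sau))
        for j, samples in enumerate(marginal_samples):
            draws = np.sort(rng.choice(samples, size=n_draws, replace=True))
            out[:, j] = draws[ranks[:, j]]
        return out.sum(axis=1)  # aggregated resource, e.g. at basin scale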

  13. Eco-friendly Energy Storage System: Seawater and Ionic Liquid Electrolyte.

    Science.gov (United States)

    Kim, Jae-Kwang; Mueller, Franziska; Kim, Hyojin; Jeong, Sangsik; Park, Jeong-Sun; Passerini, Stefano; Kim, Youngsik

    2016-01-08

    As existing battery technologies struggle to meet the requirements for widespread use in the field of large-scale energy storage, novel concepts are urgently needed concerning batteries that have high energy densities, low costs, and high levels of safety. Here, a novel eco-friendly energy storage system (ESS) using seawater and an ionic liquid is proposed for the first time; this represents an intermediate system between a battery and a fuel cell, and is accordingly referred to as a hybrid rechargeable cell. Compared to conventional organic electrolytes, the ionic liquid electrolyte significantly enhances the cycle performance of the seawater hybrid rechargeable system, acting as a very stable interface layer between the Sn-C (Na storage) anode and the NASICON (Na3 Zr2 Si2 PO12) ceramic solid electrolyte, making this system extremely promising for cost-efficient and environmentally friendly large-scale energy storage. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects

  15. Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.

    Science.gov (United States)

    Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen

    2015-09-01

    With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest in the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
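
    The workflow that such hashing methods accelerate (compact binary codes plus Hamming-distance ranking) can be illustrated with a simple random-projection baseline. The sketch below is a generic stand-in for a learned hashing function, not the NDH learning procedure itself, and all names and sizes are assumptions.

    import numpy as np

    def train_random_projection(dim, n_bits, seed=0):
        """Random hyperplanes as a stand-in for a learned hashing function."""
        return np.random.default_rng(seed).standard_normal((dim, n_bits))

    def hash_codes(x, w):
        """Map real-valued features to binary codes: sign of a linear projection."""
        return (x @ w > 0).astype(np.uint8)

    def hamming_search(query_code, db_codes, k=5):
        """Rank database items by Hamming distance to the query code."""
        dists = np.count_nonzero(db_codes != query_code, axis=1)
        return np.argsort(dists)[:k]

    # Toy usage: 10,000 database items with 128-D features, 32-bit codes
    rng = np.random.default_rng(1)
    db = rng.standard_normal((10_000, 128))
    w = train_random_projection(128, 32)
    db_codes = hash_codes(db, w)
    query = rng.standard_normal(128)
    print(hamming_search(hash_codes(query[None, :], w)[0], db_codes))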

  16. Characteristics of large thermal energy storage systems in Poland

    Science.gov (United States)

    Zwierzchowski, Ryszard

    2017-11-01

    In District Heating Systems (DHS) there are significant fluctuations in consumers' demand for heat during both the heating and the summer seasons. These variations are considered primarily over a 24-hour time horizon. The problems are aggravated further if the DHS is supplied by a CHP plant, because fluctuations in heat demand significantly impair the stable production of electricity at high overall efficiency. Therefore, introducing Thermal Energy Storage (TES) would be highly recommended on these grounds alone. The characteristics of large (i.e. over 10 000 m3) TES systems in operation in Poland are presented. Information is given regarding new projects (currently in design or construction) that apply TES technology to DHS in Poland. The paper describes the methodology used in Poland to select the TES system for a particular DHS, i.e. the procedure for calculating the capacity of the TES tank and the system that prevents water stored in the tank from absorbing oxygen from atmospheric air. Implementation of TES in DHS is treated as a recommended technology in the Polish District Heating sector. This technology offers great opportunities to improve the operating conditions of DHS, cutting energy production costs and emissions of pollutants to the atmosphere.
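
    As a rough, hedged illustration of the tank-capacity sizing mentioned above (an elementary sensible-heat estimate with assumed property values, not the paper's actual selection procedure), the thermal capacity of a hot-water store follows from Q = rho * V * c_p * (T_hot - T_cold):

    def tes_capacity_mwh(volume_m3, t_hot_c, t_cold_c, rho=977.0, cp=4190.0):
        """Sensible-heat capacity of a hot-water TES tank.

        Q = rho * V * cp * (T_hot - T_cold); rho and cp are rough values for
        water around 70 deg C. Returns capacity in MWh (thermal).
        """
        q_joule = rho * volume_m3 * cp * (t_hot_c - t_cold_c)
        return q_joule / 3.6e9

    # Example: a 30,000 m3 tank cycled between 95 and 45 deg C (assumed values)
    print(f"{tes_capacity_mwh(30_000, 95, 45):.0f} MWh_th")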

  17. Results of geo-radio-monitoring for radioactive waste storage in large diameter boreholes in clayey ground

    International Nuclear Information System (INIS)

    Dmitriev, S.; Litinsky, Y.; Tkachenko, A.

    2010-01-01

    Document available in extended abstract form only. Full text of publication follows: The main purpose of the work carried out at the site of SUE MosSIA 'Radon' is to develop a system of geo-radio-monitoring for a new type of storage facility (the large diameter borehole), integrated into the existing monitoring system of the whole site, to check its effectiveness and improve the system, and to obtain initial results on safety aspects of using large diameter boreholes for RAW storage. The technology of large diameter borehole (LDB) construction for low- and intermediate-level waste (LILW) isolation in moraine loams has been under development at the SUE MosSIA 'Radon' site since the end of the last century. A project for construction of a demonstration unit for LILW storage in large diameter boreholes at the SUE MosSIA 'Radon' site in the Sergiev Posad region has been developed, taking into account specific site conditions. The main aim of the project is to develop the technology of LDB repository construction and operational procedures such as loading and retrieval, to develop and improve the monitoring system for the new repository type, and to obtain practical data on the safety of radioactive waste storage in new repositories, the hermeticity of the construction, and the behavior of waste, waste packages, construction materials and the near-field. In the case of LDB applications for LILW storage, the waste is removed from the scope of human activity into a stable geological medium. Waste is placed below the frost zone, where damage of engineered barriers due to climatic factors is practically impossible. Two boreholes with 1.5 m internal diameter and 38 m depth were drilled in 1997, equipped with engineered barriers including bentonite-concrete stone, licensed as storage facilities in 2003, and are now in use for solid and solidified RAW storage. A specific automated system of geo-radio-monitoring has been developed especially for the LDB-type repository, covering both the interior and the

  18. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS, which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  19. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394

  20. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  1. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a

  2. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Full Text Available Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES

  3. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures, obtained after reprocessing with the computer, are given. Three hypotheses - vortical, entropic and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation, as a method of directly studying the processes taking place in the Universe, is considered. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the properties of the disturbances at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology.

  4. Large-area printed supercapacitor technology for low-cost domestic green energy storage

    International Nuclear Information System (INIS)

    Tehrani, Z.; Thomas, D.J.; Korochkina, T.; Phillips, C.O.; Lupo, D.; Lehtimäki, S.; O'Mahony, J.; Gethin, D.T.

    2017-01-01

    In this research we demonstrate that a flexible ultra-thin supercapacitor can be fabricated using a high-volume screen printing process. This has enabled the sequential deposition of current collector, electrode, electrolyte materials and adhesive onto a polyethylene terephthalate (PET) substrate in order to form flexible electrodes for reliable energy storage applications. The electrodes were based on an activated carbon ink and a gel electrolyte, each of which was formulated for this application. Supercapacitors with surface areas from 100 to 1600 mm2 and an assembled device thickness of 375 μm were demonstrated. The capacitance ranged from 50 to 400 mF. The capacitance of printed carbon electrodes is rarely reported in the literature and no references were found. The chemistry developed during this study displayed long-term cycling potential and demonstrated the stability of the capacitor for continued usage. The gel electrolyte developed within this work showed comparable performance to that of a liquid counterpart. This improvement resulted in a reduction in gel resistance from 90 Ω to 0.5 Ω. Significant reductions were observed for all resistances. The solid-state supercapacitors with the gel electrolyte showed comparable performance to the supercapacitors that used a liquid electrolyte. This large-area printed device can be used in future houses for reliable green energy storage. - Highlights: • It has been demonstrated that a flexible supercapacitor with large-area storage can be developed. • The simplified architecture has the potential to lead to a new class of printable, thin storage devices. • A specific capacitance of 21 F/g was measured.

  5. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of the Siberian State Industrial University carried out expert evaluations for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  6. The Need for Large-Scale, Longitudinal Empirical Studies in Middle Level Education Research

    Science.gov (United States)

    Mertens, Steven B.; Caskey, Micki M.; Flowers, Nancy

    2016-01-01

    This essay describes and discusses the ongoing need for large-scale, longitudinal, empirical research studies focused on middle grades education. After a statement of the problem and concerns, the essay describes and critiques several prior middle grades efforts and research studies. Recommendations for future research efforts to inform policy…

  7. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  8. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
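
    A per-pixel linked list of the kind described above can be sketched as follows. This is an illustrative CPU-side data structure with assumed field names, not the authors' GPU implementation.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SegmentNode:
        """One pathline segment fragment falling on a pixel."""
        pathline_id: int
        depth: float          # view-space depth, usable for sorting/early termination
        attribute: float      # e.g. velocity magnitude for colour-coding
        next: Optional[int] = None   # index of the next node in the pool, or None

    @dataclass
    class PerPixelLists:
        width: int
        height: int
        heads: List[Optional[int]] = field(init=False)
        pool: List[SegmentNode] = field(default_factory=list)

        def __post_init__(self):
            self.heads = [None] * (self.width * self.height)

        def insert(self, x: int, y: int, node: SegmentNode) -> None:
            """Prepend a segment fragment to the list of pixel (x, y)."""
            idx = y * self.width + x
            node.next = self.heads[idx]
            self.pool.append(node)
            self.heads[idx] = len(self.pool) - 1

        def fragments(self, x: int, y: int):
            """Iterate the fragments of pixel (x, y), e.g. for filtering or colour-coding."""
            cur = self.heads[y * self.width + x]
            while cur is not None:
                yield self.pool[cur]
                cur = self.pool[cur].next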

  9. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
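
    A hedged sketch of the kind of directional curvature test described above (detect non-positive curvature along the candidate step and add a convexifying regularization) might look like the following; the parameter names and update rule are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def ensure_positive_curvature(w, d, delta0=1e-4, growth=10.0, max_tries=20):
        """Return a regularization delta such that d^T (W + delta*I) d > 0.

        w : symmetric Hessian (of the Lagrangian) matrix
        d : candidate search step
        Instead of computing the inertia of the KKT matrix, only the curvature
        along the actual step is tested; convexify when it is non-positive.
        (In a full algorithm the step d would be recomputed after convexification.)
        """
        delta = 0.0
        curv = d @ (w @ d)
        while curv <= 0.0 and max_tries > 0:
            delta = delta0 if delta == 0.0 else delta * growth
            curv = d @ (w @ d) + delta * (d @ d)
            max_tries -= 1
        return delta

    # Toy usage with an indefinite matrix
    w = np.diag([1.0, -2.0, 0.5])
    d = np.array([0.1, 1.0, 0.2])
    print(ensure_positive_curvature(w, d))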

  10. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability.Development of more time efficient and airborne geophysical data acquisition...... platforms (e.g. SkyTEM) have made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored...

  11. Multi-scale interactions affecting transport, storage, and processing of solutes and sediments in stream corridors (Invited)

    Science.gov (United States)

    Harvey, J. W.; Packman, A. I.

    2010-12-01

    where hyporheic fluxes cannot be accurately estimated without considering multi-scale effects. Our modeling captures the dominance of small-scale features such as bedforms that drive the majority of hyporheic flow, but it also captures how hyporheic flow is substantially modified by relatively small changes in streamflow or groundwater flow. The additional field measurements add sensitivity and power to whole stream tracer additions by improving resolution of the relative importance of storage at different scales (e.g. bar-scale versus bedform-scale). This information is critical in identifying hot spots where important biogeochemical reactions occur. In summary, interpreting multi-scale interactions in streams requires models that are physically based and that incorporate non-linear process dynamics. Such models can take advantage of increasingly comprehensive field data to integrate transport processes across spatially variable flow and geomorphic conditions. The most useful field and modeling approaches will be those that are simple enough to be easily implemented by users from various disciplines but comprehensive enough to produce meaningful predictions for a wide range of flow and geomorphic scenarios. This capability is needed to support improved strategies for protecting stream ecological health in the face of accelerating land use and climate change.

  12. Mapping the distribution of the denitrifier community at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.

  13. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

    A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) causes an amplification or attenuation of the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define the latter by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by means of a low-pass filtering step leads to an important loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
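
    The envelope construction being re-examined (modulus of the analytic signal of the small-scale component, followed by low-pass filtering) can be sketched as follows; the filter order, cut-off and test signal are illustrative choices, not those of the cited studies.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def small_scale_envelope(u_ss, fs, lowpass_hz, order=4):
        """Classical envelope of a small-scale velocity signal.

        u_ss       : small-scale (high-pass filtered) fluctuation signal
        fs         : sampling frequency
        lowpass_hz : cut-off used to smooth the modulus of the analytic signal
        """
        analytic = hilbert(u_ss)                 # analytic signal via the Hilbert transform
        modulus = np.abs(analytic)               # instantaneous amplitude
        b, a = butter(order, lowpass_hz / (fs / 2.0), btype="low")
        return filtfilt(b, a, modulus)           # low-pass filtered envelope

    # Toy usage: an amplitude-modulated carrier
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    u_ss = (1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 80 * t)
    env = small_scale_envelope(u_ss, fs, lowpass_hz=10.0)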

  14. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    , we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven...... to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation....

  15. SIMULATION FRAMEWORK FOR REGIONAL GEOLOGIC CO{sub 2} STORAGE ALONG ARCHES PROVINCE OF MIDWESTERN UNITED STATES

    Energy Technology Data Exchange (ETDEWEB)

    Sminchak, Joel

    2012-09-30

    This report presents final technical results for the project Simulation Framework for Regional Geologic CO{sub 2} Storage Infrastructure along Arches Province of the Midwest United States. The Arches Simulation project was a three-year effort designed to develop a simulation framework for regional geologic carbon dioxide (CO{sub 2}) storage infrastructure along the Arches Province through development of a geologic model and advanced reservoir simulations of large-scale CO{sub 2} storage. The project included five major technical tasks: (1) compilation of geologic, hydraulic and injection data on the Mount Simon, (2) development of model framework and parameters, (3) preliminary variable density flow simulations, (4) multi-phase model runs of regional storage scenarios, and (5) implications for regional storage feasibility. The Arches Province is an informal region in northeastern Indiana, northern Kentucky, western Ohio, and southern Michigan where sedimentary rock formations form broad arch and platform structures. In the province, the Mount Simon sandstone is an appealing deep saline formation for CO{sub 2} storage because of the combination of reservoir thickness and permeability. Many CO{sub 2} sources are located in proximity to the Arches Province, and the area is adjacent to coal-fired power plants along the Ohio River Valley corridor. Geophysical well logs, rock samples, drilling logs, and geotechnical tests were evaluated for a 500,000 km{sup 2} study area centered on the Arches Province. Hydraulic parameters and historical operational information were also compiled from Mount Simon wastewater injection wells in the region. This information was integrated into a geocellular model that depicts the parameters and conditions in a numerical array. The geologic and hydraulic data were integrated into a three-dimensional grid of porosity and permeability, which are key parameters regarding fluid flow and pressure buildup due to CO{sub 2} injection. Permeability data

  16. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  17. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  18. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  19. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question in two auditory perceptual decision-making experiments. Using graph-theoretical network discovery, we traced the large-scale network

  20. Properties and uses of storage for enhancing the grid penetration of very large photovoltaic systems

    International Nuclear Information System (INIS)

    Solomon, A.A.; Faiman, D.; Meron, G.

    2010-01-01

    In this third paper, which studies the hourly generation data for the year 2006 from the Israel Electric Corporation with a view to incorporating very large photovoltaic (PV) power plants, we address the question: What properties should storage have in order to enhance the grid penetration of large PV systems in an efficient and substantial manner? We first impose the constraint that no PV energy losses are permitted other than those due to storage inefficiency. This constraint leads to powerful linkages between the energy capacity and power capacity of storage and PV system size, and their combined effect on grid penetration. Various strategies are then examined for enhancing grid penetration, based upon this newfound knowledge. Specific strategies examined include PV energy dumping and baseload rescheduling, both on a seasonal basis and over shorter time periods. We found, inter alia, that at high grid flexibilities (in the range ff = 0.8-1), PV grid penetration levels in the range of 60-90% of annual requirements could be possible. Moreover, with appropriately designed storage and accurate forecasting, a future grid could be operated at ff = 1.
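
    As a hedged illustration of how storage energy capacity and power capacity jointly shape PV grid penetration (a toy hourly dispatch with assumed synthetic profiles in which surplus beyond the storage limits is dumped; not the paper's model or data):

    import numpy as np

    def pv_penetration(load, pv, e_cap, p_cap, eff=0.85):
        """Fraction of annual load met by PV with a simple storage dispatch.

        load, pv : hourly arrays (same length), MW
        e_cap    : storage energy capacity, MWh
        p_cap    : storage charge/discharge power limit, MW
        eff      : round-trip efficiency applied on discharge
        Surplus PV is stored (up to the power and energy limits) and otherwise dumped.
        """
        soc, served = 0.0, 0.0
        for l, p in zip(load, pv):
            direct = min(l, p)
            surplus, deficit = p - direct, l - direct
            charge = min(surplus, p_cap, e_cap - soc)
            soc += charge
            discharge = min(deficit, p_cap, soc * eff)
            soc -= discharge / eff
            served += direct + discharge
        return served / load.sum()

    # Toy usage with synthetic daily profiles repeated over a year (assumed numbers)
    hours = np.arange(8760)
    load = 1000 + 200 * np.sin(2 * np.pi * hours / 24)
    pv = np.clip(1500 * np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
    print(f"PV penetration: {pv_penetration(load, pv, e_cap=6000, p_cap=800):.1%}")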

  1. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's, an interest in testing and operating RF cavities at 1.8 K motivated the development and construction of four large (300 Watt) 1.8 K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8 K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8 K system which incorporates new technology, cold compressors, to obtain the low vapor pressure required for low-temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0 K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8 K are now being developed. The state of the art of large-scale refrigeration in the range under 4 K will be reviewed. 28 refs., 4 figs., 7 tabs

  2. A Novel Constant-Pressure Pumped Hydro Combined with Compressed Air Energy Storage System

    Directory of Open Access Journals (Sweden)

    Erren Yao

    2014-12-01

    Full Text Available As intermittent renewable energy is receiving increasing attention, the combination of intermittent renewable energy with large-scale energy storage technology is considered an important technological approach for the wider application of wind power and solar energy. Pumped hydro combined with compressed air energy storage (PHCA) is an energy storage system that integrates the advantages, and overcomes the disadvantages, of compressed air energy storage (CAES) systems and pumped hydro energy storage systems, in order to solve the problem of energy storage in China's arid regions. Addressing the variable working conditions of PHCA system technology, this study proposes a new constant-pressure PHCA. The most significant characteristic of this system is that the water pump and hydroturbine work under stable conditions, which improves the working efficiency of the equipment without incurring an energy loss. In addition, the constant-pressure PHCA system was subjected to energy and exergy analysis, in the expectation of providing an attractive solution for the large-scale storage of existing intermittent renewable energy.

  3. Financial analysis of utility scale photovoltaic plants with battery energy storage

    International Nuclear Information System (INIS)

    Rudolf, Viktor; Papastergiou, Konstantinos D.

    2013-01-01

    Battery energy storage is a flexible and responsive form of storing electrical energy from renewable generation. The need for energy storage mainly stems from the intermittent nature of solar and wind energy sources. System integrators are investigating ways to design plants that can provide more stable output power without compromising the financial performance that is vital for investors. Network operators, on the other side, set stringent requirements for the commissioning of new generation, including preferential terms for energy providers with a well-defined generation profile. The aim of this work is to highlight the market and technology drivers that impact the feasibility of battery energy storage in a utility-scale solar PV project. A simulation tool combines a battery cycling and lifetime model with a solar generation profile and electricity market prices. The business cases of the present market conditions and a projected future scenario are analyzed. - Highlights: • Generation shifting with batteries allows PV projects to generate additional revenues. • Battery lifetime, lifecycles and price are less relevant than electricity market prices. • Installed battery capacity of up to 50% of the daily PV energy boosts project economy. • A 25% higher premium for energy storage could improve NPV by approximately 65%
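
    A hedged sketch of the kind of financial evaluation described (discounting annual revenues from shifted PV energy against the battery investment, with illustrative numbers rather than the study's data or its actual cycling and lifetime model):

    def storage_npv(capex, annual_energy_shifted_mwh, price_premium_per_mwh,
                    opex_frac=0.02, lifetime_yr=10, discount=0.08):
        """Net present value of adding battery storage to a utility-scale PV plant.

        capex                     : battery investment cost (currency units)
        annual_energy_shifted_mwh : PV energy moved from low- to high-price hours per year
        price_premium_per_mwh     : average price uplift earned on shifted energy
        A deliberately simplified model: constant revenues, no degradation.
        """
        annual_revenue = annual_energy_shifted_mwh * price_premium_per_mwh
        annual_opex = opex_frac * capex
        npv = -capex
        for year in range(1, lifetime_yr + 1):
            npv += (annual_revenue - annual_opex) / (1 + discount) ** year
        return npv

    # Illustrative numbers only (not taken from the study)
    print(f"NPV: {storage_npv(capex=2.0e6, annual_energy_shifted_mwh=4000, price_premium_per_mwh=60):,.0f}")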

  4. Electricity storage using a thermal storage scheme

    Energy Technology Data Exchange (ETDEWEB)

    White, Alexander, E-mail: ajw36@cam.ac.uk [Hopkinson Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge. CB2 1PZ (United Kingdom)

    2015-01-22

    The increasing use of renewable energy technologies for electricity generation, many of which have an unpredictably intermittent nature, will inevitably lead to a greater demand for large-scale electricity storage schemes. For example, the expanding fraction of electricity produced by wind turbines will require either backup or storage capacity to cover extended periods of wind lull. This paper describes a recently proposed storage scheme, referred to here as Pumped Thermal Storage (PTS), which is based on “sensible heat” storage in large thermal reservoirs. During the charging phase, the system effectively operates as a high temperature-ratio heat pump, extracting heat from a cold reservoir and delivering heat to a hot one. In the discharge phase the processes are reversed and it operates as a heat engine. The round-trip efficiency is limited only by process irreversibilities (as opposed to Second Law limitations on the coefficient of performance and the thermal efficiency of the heat pump and heat engine respectively). PTS is currently being developed in both France and England. In both cases, the schemes operate on the Joule-Brayton (gas turbine) cycle, using argon as the working fluid. However, the French scheme proposes the use of turbomachinery for compression and expansion, whereas the scheme being developed in England proposes reciprocating devices. The current paper focuses on the impact of the various process irreversibilities on the thermodynamic round-trip efficiency of the scheme. Consideration is given to compression and expansion losses and pressure losses (in pipe-work, valves and thermal reservoirs); heat-transfer-related irreversibility in the thermal reservoirs is discussed but not included in the analysis. Results are presented demonstrating how the various loss parameters and operating conditions influence the overall performance.
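
    A deliberately simplified, illustrative estimate of how compression and expansion irreversibilities erode the round-trip efficiency of a Joule-Brayton PTS cycle is sketched below; it ignores pressure losses, reservoir imbalance and heat-exchange irreversibility, and the numbers and model structure are assumptions rather than results from the paper.

    def pts_round_trip(pressure_ratio=10.0, gamma=5.0 / 3.0, eta_c=0.9, eta_e=0.9,
                       t_ambient=300.0):
        """Toy round-trip efficiency of a Joule-Brayton pumped thermal storage cycle.

        Argon-like ideal gas, identical isentropic efficiencies for all machines,
        fixed pressure ratio, per-unit-mass energy accounting only.
        """
        beta = pressure_ratio ** ((gamma - 1.0) / gamma)  # isentropic temperature ratio

        def compress(t):
            return t * (1.0 + (beta - 1.0) / eta_c)

        def expand(t):
            return t * (1.0 - eta_e * (1.0 - 1.0 / beta))

        # Charge (heat-pump mode): compress, deposit heat in hot store, expand, chill cold store
        t_hot = compress(t_ambient)          # hot-store top temperature
        t_cold = expand(t_ambient)           # cold-store bottom temperature
        w_in = (t_hot - t_ambient) - (t_ambient - t_cold)   # per unit of cp*mass

        # Discharge (heat-engine mode): compress cold gas, absorb stored heat, expand
        t_after_comp = compress(t_cold)
        t_after_exp = expand(t_hot)
        w_out = (t_hot - t_after_exp) - (t_after_comp - t_cold)

        return w_out / w_in

    print(f"Toy round-trip efficiency: {pts_round_trip():.2f}")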

  5. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  6. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    Science.gov (United States)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels) along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022) in the INDCs submitted under the Paris agreement. However, large-scale integration of renewable energy is a complex process that faces a number of problems, such as capital intensiveness, matching intermittent generation to loads with the least storage capacity, and reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and to analyze its implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios. A base case scenario (no RE addition) is compared with an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass) and a low RE scenario (50 GW solar, 30 GW wind) to analyze the implications of large-scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the associated investment decisions. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
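
    As a hedged illustration of the dispatch considerations mentioned above (a toy merit-order dispatch of one hour of residual load with minimum-generation limits only; the unit names and numbers are assumptions, and this is not the TIMES model used in the study):

    def dispatch_hour(residual_load_mw, units):
        """Toy merit-order dispatch of one hour of residual (net) load.

        units : list of dicts with 'name', 'capacity_mw', 'min_mw' and 'cost',
                assumed to be committed (so each must run at least at min_mw).
        Returns (schedule, curtailment_mw): if total minimum generation exceeds
        the residual load, the surplus shows up as renewable curtailment.
        """
        order = sorted(units, key=lambda u: u["cost"])
        schedule = {u["name"]: u["min_mw"] for u in order}   # must-run minimums
        remaining = residual_load_mw - sum(schedule.values())
        for u in order:                                       # fill cheapest units first
            add = min(max(remaining, 0.0), u["capacity_mw"] - u["min_mw"])
            schedule[u["name"]] += add
            remaining -= add
        return schedule, max(-remaining, 0.0)

    # Illustrative units (names and numbers are assumptions)
    units = [
        {"name": "coal_1", "capacity_mw": 600, "min_mw": 330, "cost": 25},
        {"name": "gas_1", "capacity_mw": 400, "min_mw": 100, "cost": 45},
    ]
    print(dispatch_hour(residual_load_mw=500, units=units))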

  7. Hanford Site Waste Storage Tank Information Notebook

    International Nuclear Information System (INIS)

    Husa, E.I.; Raymond, R.E.; Welty, R.K.; Griffith, S.M.; Hanlon, B.M.; Rios, R.R.; Vermeulen, N.J.

    1993-07-01

    This report provides summary data on the radioactive waste stored in underground tanks in the 200 East and West Areas at the Hanford Site. The summary data covers each of the existing 161 Series 100 underground waste storage tanks (500,000 gallons and larger). It also contains information on the design and construction of these tanks. The information in this report is derived from existing reports that document the status of the tanks and their materials. This report also contains interior surface photographs of each of the 54 Watch List tanks, which are those tanks identified as Priority I Hanford Site Tank Farm Safety Issues in accordance with Public Law 101-510, Section 3137.

  8. Information Management for a Large Multidisciplinary Project

    Science.gov (United States)

    Jones, Kennie H.; Randall, Donald P.; Cronin, Catherine K.

    1992-01-01

    In 1989, NASA's Langley Research Center (LaRC) initiated the High-Speed Airframe Integration Research (HiSAIR) Program to develop and demonstrate an integrated environment for high-speed aircraft design using advanced multidisciplinary analysis and optimization procedures. The major goals of this program were to evolve the interactions among disciplines and promote sharing of information, to provide a timely exchange of information among aeronautical disciplines, and to increase the awareness of the effects each discipline has upon other disciplines. LaRC historically has emphasized the advancement of analysis techniques. HiSAIR was founded to synthesize these advanced methods into a multidisciplinary design process emphasizing information feedback among disciplines and optimization. Crucial to the development of such an environment are the definition of the required data exchanges and the methodology for both recording the information and providing the exchanges in a timely manner. These requirements demand extensive use of data management techniques, graphic visualization, and interactive computing. HiSAIR represents the first attempt at LaRC to promote interdisciplinary information exchange on a large scale using advanced data management methodologies combined with state-of-the-art, scientific visualization techniques on graphics workstations in a distributed computing environment. The subject of this paper is the development of the data management system for HiSAIR.

  9. Large-Scale and Global Hydrology. Chapter 92

    Science.gov (United States)

    Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat

    2016-01-01

    Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental ocean basin scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change, thus it is crucial that we continue to invest in our ability to monitor it.

  10. Cluster galaxy dynamics and the effects of large-scale environment

    Science.gov (United States)

    White, Martin; Cohn, J. D.; Smit, Renske

    2010-11-01

    Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations
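
    As a toy illustration of the viewing-angle dependence described above (synthetic, anisotropic velocities rather than the paper's simulation data), one can project a 3-D velocity distribution onto different sight lines and compare the resulting line-of-sight dispersions:

        import numpy as np

        rng = np.random.default_rng(1)
        # Anisotropic 3-D velocities for 200 mock cluster galaxies (km/s).
        vel = rng.normal(0.0, [900.0, 600.0, 400.0], size=(200, 3))

        def los_dispersion(velocities, sightline):
            """Line-of-sight velocity dispersion along a given viewing direction."""
            n = np.asarray(sightline, dtype=float)
            n /= np.linalg.norm(n)
            return (velocities @ n).std(ddof=1)

        for axis in ([1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]):
            print(axis, round(los_dispersion(vel, axis), 1), "km/s")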

  11. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  12. Using Large-Scale Cooperative Control to Manage Operational Uncertainties for Aquifer Thermal Energy Storage

    Science.gov (United States)

    Jaxa-Rozen, M.; Rostampour, V.; Kwakkel, J. H.; Bloemendal, M.

    2017-12-01

    Seasonal Aquifer Thermal Energy Storage (ATES) technology can help reduce the demand for energy for heating and cooling in buildings, and has become a popular option for larger buildings in northern Europe. However, the larger-scale deployment of this technology has evidenced some issues of concern for policymakers; in particular, recent research shows that operational uncertainties contribute to inefficient outcomes under current planning methods for ATES. For instance, systems in the Netherlands typically use less than half of their permitted pumping volume on an annual basis. This overcapacity gives users more flexibility to operate their systems in response to the uncertainties which drive building energy demand; these include short-term operational factors such as weather and occupancy, and longer-term, deeply uncertain factors such as changes in climate and aquifer conditions over the lifespan of the buildings. However, as allocated subsurface volume remains unused, this situation limits the adoption of the technology in dense areas. Previous work using coupled agent-based/geohydrological simulation has shown that the cooperative operation of neighbouring ATES systems can support more efficient spatial planning, by dynamically managing thermal interactions in response to uncertain operating conditions. An idealized case study with centralized ATES control thus showed significant improvements in the energy savings which could be obtained per unit of allocated subsurface volume, without degrading the recovery performance of systems. This work will extend this cooperative approach for a realistic case study of ATES planning in the city of Utrecht, in the Netherlands. This case was previously simulated under different scenarios for individual ATES operation. The poster will compare these results with a cooperative case under which neighbouring systems can coordinate their operation to manage interactions. Furthermore, a cooperative game-theoretical framework will be

  13. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutronic sensor during three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given, together with the results obtained so far in that domain. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic switching of scales is described and the results of the tests are given; in this large scale channel, the data processing is analogue. - To become independent of the problems generated by analogue processing of the fluctuation signal, a digital method of data processing is tested and its validity is demonstrated. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined.

  14. In-bed accountability of tritium in production scale metal hydride storage beds

    International Nuclear Information System (INIS)

    Klein, J.E.

    1995-01-01

    An ''in-bed accountability'' (IBA) flowing gas calorimetric measurement method has been developed and implemented to eliminate the need to remove tritium from production scale metal hydride storage beds for inventory measurement purposes. Six-point tritium IBA calibration curves have been completed for two, 390 gram tritium metal hydride storage beds. The calibration curves for the two tritium beds are similar to those obtained from the ''cold'' test program. Tritium inventory errors at the 95 percent confidence level ranged from ± 7.3 to 8.6 grams for the cold test results compared to ± 4.2 to 7.5 grams obtained for the two tritium calibrated beds

  15. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    Directory of Open Access Journals (Sweden)

    Ezequiel M Marzinelli

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp cover at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coasts of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  16. Information report on the nuclear safety and radiation protection of the Aube storage Centre - 2012

    International Nuclear Information System (INIS)

    2013-07-01

    This report first presents the site of the Aube Storage Centre (CSA), its storage areas, its buildings and equipment, describes the water treatment process, provides some operating data for 2012 (deliveries, storage, compacting), and indicates the highlights and works performed in 2012. The next part reviews measures related to nuclear safety: recall of safety principles and objectives, technical arrangements to meet safety objectives, inspections by the ASN, quality audits. The third part reviews measures related to safety and radiation protection: principles of radiation protection, staff dosimetry practices and results, personnel safety, works performed in 2012. The fourth part addresses incidents and accidents (none occurred in 2012) and other minor events classified according to the INES scale. The fifth part addresses the monitoring of the environment and of the releases by the centre: measurement locations, measurement results (in the atmosphere, in rivers, in underground waters, radiological control, control of ecosystems, assessment of the radiological impact), physical-chemical monitoring of a local river, actions undertaken for the protection of the environment, highlights for 2012. The next chapter addresses the management of the various wastes produced by the Centre (radioactive wastes, conventional wastes) and the last part reports actions regarding information and transparency. Recommendations of the CHSCT are reported

  17. Black start research of the wind and storage system based on the dual master-slave control

    Science.gov (United States)

    Leng, Xue; Shen, Li; Hu, Tian; Liu, Li

    2018-02-01

    Black start is the key to solving the problem of large-scale power failure, and the introduction of new renewable clean energy as a black start power supply has become a research hotspot. Based on the dual master-slave control strategy, the wind and storage system is taken as a reliable black start power source, with energy storage and wind power combined to ensure the stability of the microgrid system and so realize the black start. The study determines the capacity ratio of the storage in a small system based on the dual master-slave control strategy and the black start constraint conditions of the combined wind and storage system, identifies the key points of black start for such a system, and provides reference and guidance for subsequent large-scale wind and storage black start projects.

  18. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This report (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach profiling lidar. A detailed discussion of the original LSTF features and capabilities is also provided. The purpose of these upgrades was to increase

  19. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on individual scale have been extensively studied due to the application potential on human behavior prediction and recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both websites browse and mobile towers visit. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flight. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
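
    A minimal sketch of the "preferential return plus exploration" mechanism described above (the parameter values, and the optional per-individual Gaussian draw of the exploration tendency, are illustrative rather than the values fitted in the paper):

        import numpy as np

        def epr_trajectory(n_steps, rho=0.6, gamma=0.21, rng=None):
            """Simulate one walker: with probability rho * S**(-gamma) (S = number of
            distinct locations visited so far) the walker explores a new location,
            otherwise it returns to a known location with probability proportional
            to the number of past visits (preferential return)."""
            if rng is None:
                rng = np.random.default_rng(0)
            visits = {0: 1}                # location id -> visit count
            trajectory = [0]
            next_id = 1
            for _ in range(n_steps - 1):
                S = len(visits)
                if rng.random() < rho * S ** (-gamma):
                    loc = next_id          # exploration: a brand-new location
                    next_id += 1
                else:                      # preferential return
                    locs = list(visits)
                    counts = np.array([visits[l] for l in locs], dtype=float)
                    loc = locs[rng.choice(len(locs), p=counts / counts.sum())]
                visits[loc] = visits.get(loc, 0) + 1
                trajectory.append(loc)
            return trajectory

        # A population-level Gaussian spread of the exploration tendency, as assumed in
        # the paper, can be mimicked by drawing rho per walker, e.g.
        # rho_i = np.clip(np.random.default_rng().normal(0.6, 0.1), 0.05, 0.95).
        print(len(set(epr_trajectory(1000))), "distinct locations visited")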

  20. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    Science.gov (United States)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.

  1. Large-scale synthesis of Tellurium nanostructures via galvanic displacement of metals

    Science.gov (United States)

    Kok, Kuan-Ying; Choo, Thye-Foo; Ubaidah Saidin, Nur; Rahman, Che Zuraini Che Ab

    2018-01-01

    Tellurium (Te) is an attractive semiconductor material for a wide range of applications in various functional devices including radiation dosimeters, optical storage materials, and thermoelectric or piezoelectric generators. In this work, large scale synthesis of tellurium (Te) nanostructures has been successfully carried out in different concentrations of aqueous solutions containing TeO2 and NaOH, by galvanic displacement of Zn and Al, which served as the sacrificial materials. The galvanic displacement process is cost-effective and requires no template or surfactant for the synthesis of nanostructures. By varying the concentrations of TeO2 and NaOH, the etching temperatures and the etching times, Te nanostructures of various forms were successfully obtained, ranging from one-dimensional needles and rod-like structures to more complex hierarchical structures. Microscopy examinations of the nanostructures obtained have shown that both the diameters and lengths of the Te nanostructures increased with increasing etching temperature and etching time.

  2. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the learned compact hash code from one deep hashing network may not provide the full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code which has stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks together to unify images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
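
    One simple reading of the fusion step described above is to concatenate the binary codes produced by the two subnetworks and rank database images by Hamming distance on the fused code; the sketch below uses random codes purely as stand-ins for real network outputs:

        import numpy as np

        rng = np.random.default_rng(0)
        n_images, bits = 1000, 32
        codes_a = rng.integers(0, 2, size=(n_images, bits))   # codes from subnetwork A
        codes_b = rng.integers(0, 2, size=(n_images, bits))   # codes from subnetwork B

        db_codes = np.concatenate([codes_a, codes_b], axis=1)  # fused 64-bit index codes

        def hamming_search(query_code, db_codes, k=10):
            """Indices of the k database items closest to the query in Hamming distance."""
            dists = np.count_nonzero(db_codes != query_code, axis=1)
            return np.argsort(dists)[:k]

        print(hamming_search(db_codes[0], db_codes, k=5))      # the query itself ranks first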

  3. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    Science.gov (United States)

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
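
    A minimal sketch of the standardised-file idea described above (the folder layout and column names here are hypothetical, not the actual ACuteTox templates): every laboratory stores its concentration-response data in Excel files sharing one layout, and a single script gathers them into one table for the automated analysis scripts:

        from pathlib import Path
        import pandas as pd

        # Columns every experiment file is expected to contain (hypothetical layout).
        REQUIRED_COLUMNS = {"compound", "concentration", "response"}

        def collect_experiments(data_dir):
            """Read all standardised Excel files in data_dir into one tidy table."""
            frames = []
            for path in sorted(Path(data_dir).glob("*.xlsx")):
                df = pd.read_excel(path)
                missing = REQUIRED_COLUMNS - set(df.columns)
                if missing:
                    raise ValueError(f"{path.name} lacks required columns: {sorted(missing)}")
                df["source_file"] = path.name   # keep provenance for later checks
                frames.append(df)
            return pd.concat(frames, ignore_index=True)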

  4. Design of an indigenous music information storage and retrieval ...

    African Journals Online (AJOL)

    The main aim of the study was to design an appropriate Indigenous Music Information Storage and Retrieval System for Eritrea. A quantitative approach was mainly used to obtain data from a purposefully selected sample. The qualitative approach was also used in some research stages. Methods used included document

  5. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.

  6. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith; Nagarkar, Soonil; Ravi, Santosh; Raghavendra, Cauligi; Prasanna, Viktor

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
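
    To make the sub-graph centric idea concrete, the sketch below (a single-machine simplification, not the GoFFish implementation) computes connected components by first resolving each partition's sub-graph locally in one step and then reconciling only the local component labels across cut edges:

        import networkx as nx

        def subgraph_connected_components(G, partitions):
            """Connected components, sub-graph centric style: solve each partition
            locally, then merge local component roots via the cut edges (a tiny
            union-find stands in for the distributed label exchange)."""
            local_root = {}
            for part in partitions:                  # local step: whole sub-graph at once
                for comp in nx.connected_components(G.subgraph(part)):
                    root = min(comp)
                    for v in comp:
                        local_root[v] = root

            parent = {r: r for r in set(local_root.values())}
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]    # path halving
                    x = parent[x]
                return x

            for u, v in G.edges():                   # only cut edges actually merge roots
                ru, rv = find(local_root[u]), find(local_root[v])
                if ru != rv:
                    parent[max(ru, rv)] = min(ru, rv)

            return {v: find(r) for v, r in local_root.items()}

        G = nx.path_graph(6)                         # toy graph split into two partitions
        print(subgraph_connected_components(G, [{0, 1, 2}, {3, 4, 5}]))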

  7. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  8. Forest Carbon Storage in the Northern Midwest, USA: A Bottom-Up Scaling Approach Combining Local Meteorological and Biometric Data With Regional Forest Inventories

    Science.gov (United States)

    Curtis, P. S.; Gough, C. M.; Vogel, C. S.

    2005-12-01

    Carbon (C) storage increasingly is considered an important part of the economic return of forestlands, making easily parameterized models for assessing current and future C storage important for both ecosystem and money managers. For the deciduous forests of the northern midwest, USA, detailed information relating annual C storage to local site characteristics can be combined with spatially extensive forest inventories to produce simple, robust models of C storage useful at a variety of scales. At the University of Michigan Biological Station (45°35' N, 84°42' W) we measured C storage, or net ecosystem production (NEP), in 65 forest stands varying in age, disturbance history, and productivity (site index) using biometric methods, and independently measured net C exchange at the landscape level using meteorological methods. Our biometric and meteorological estimates of NEP converged to within 1% of each other over five years, providing important confirmation of the robustness of these two approaches applied within northern deciduous forests (Gough et al. 2005). We found a significant relationship between NEP, stand age (A, yrs), and site index (Is, m), where NEP = 0.134 + 0.022 * ln(A * Is) (r2 = 0.50). We combined this relationship with the Forest Inventory and Analysis database (ncrs2.fs.fed.us/4801/fiadb/) to estimate forest C storage at different scales across the upper midwest, Great Lakes region. Model estimates were validated against independent estimates of C storage for other forests in the region. At the local ecosystem-level (~1 km2) C storage averaged 1.52 Mg ha-1 yr-1. Scaling to the two-county area surrounding our meteorological and biometric study sites, average stand age decreased and site index increased, resulting in estimated storage of 1.62 Mg C ha-1 yr-1, or 0.22 Tg C yr-1 in the 1350 km2 of deciduous forest in this area. For the state of Michigan (31,537 km2 of deciduous forest), average uptake was estimated at 1.55 Mg C ha-1 yr-1, or 4.9 Tg C yr-1 total storage. For the three state region encompassing

  9. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  10. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ming; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Wan, Kuanhong; Hu, Feiran; Yu, Kexun [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China)

    2016-11-15

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years has made data access, acquisition and management more difficult, especially in the fusion research field. For future long-pulse experiments like ITER, extremely large volumes of data will be generated continuously for long periods, putting much pressure on both write performance and scalability. Traditional databases also have drawbacks, such as inconvenient management and architectures that are hard to scale. Hence a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming at improving read–write speed and optimizing the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from the users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read–write performance, the storage space and redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.

  11. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    International Nuclear Information System (INIS)

    Zhang, Ming; Liu, Qiang; Zheng, Wei; Wan, Kuanhong; Hu, Feiran; Yu, Kexun

    2016-01-01

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years has made data access, acquisition and management more difficult, especially in the fusion research field. For future long-pulse experiments like ITER, extremely large volumes of data will be generated continuously for long periods, putting much pressure on both write performance and scalability. Traditional databases also have drawbacks, such as inconvenient management and architectures that are hard to scale. Hence a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming at improving read–write speed and optimizing the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from the users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read–write performance, the storage space and redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.

  12. Optimization of FTA technology for large scale plant DNA isolation ...

    African Journals Online (AJOL)

    Conventional methods for DNA acquisition and storage require expensive reagents and equipment. Experimental fields located in remote areas and large sample sizes present a greater challenge to financially constrained institutions in developing countries. FTA™ technology uses a single format utilizing basic tools found in ...

  13. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  14. SECARB Commercial Scale CO2 Injection and Optimization of Storage Capacity in the Southeastern United States

    Energy Technology Data Exchange (ETDEWEB)

    Koperna, George J. [Advanced Resources International, Inc., Arlington, VA (United States); Pashin, Jack [Oklahoma State Univ., Stillwater, OK (United States); Walsh, Peter [Univ. of Alabama, Birmingham, AL (United States)

    2017-10-30

    The Commercial Scale Project is a US DOE/NETL funded initiative aimed at enhancing the knowledge-base and industry’s ability to geologically store vast quantities of anthropogenic carbon. In support of this goal, a large-scale, stacked reservoir geologic model was developed for Gulf Coast sediments centered on the Citronelle Dome in southwest Alabama, the site of the SECARB Phase III Anthropogenic Test. Characterization of regional geology to construct the model consists of an assessment of the entire stratigraphic continuum at Citronelle Dome, from surface to the depth of the Donovan oil-bearing formation. This project utilizes all available geologic data, which includes: modern geophysical well logs from three new wells drilled for SECARB's Anthropogenic Test; vintage logs from the Citronelle oilfield wells; porosity and permeability data from whole core and sidewall cores obtained from the injection and observation wells drilled for the Anthropogenic Test; core data obtained from the SECARB Phase II saline aquifer injection test; regional core data for relevant formations from the Geological Survey of Alabama archives. Cross sections, isopach maps, and structure maps were developed to validate the geometry and architecture of the Citronelle Dome for building the model, and to assure that no major structural defects exist in the area. A synthetic neural network approach was used to predict porosity using the available SP and resistivity log data for the storage reservoir formations. These data are validated and applied to extrapolate porosity data over the study area wells, and to interpolate permeability amongst these data points. Geostatistical assessments were conducted over the study area. In addition to geologic characterization of the region, a suite of core analyses was conducted to construct a depositional model and constrain caprock integrity. Petrographic assessment of core was conducted by OSU and analyzed to build a depositional framework
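
    As an illustration of the log-based porosity prediction step mentioned above (synthetic data and an arbitrary network size; the report does not specify the architecture actually used), a small regression network can be trained on co-located log and core measurements and then applied to uncored intervals:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-ins for SP and deep-resistivity log samples with
        # core-measured porosity at the same depths.
        rng = np.random.default_rng(0)
        sp = rng.uniform(-80, -10, 500)                 # spontaneous potential, mV
        res = rng.uniform(1, 50, 500)                   # deep resistivity, ohm-m
        porosity = 0.25 - 0.001 * sp - 0.04 * np.log10(res) + rng.normal(0, 0.01, 500)

        X = np.column_stack([sp, np.log10(res)])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                           random_state=0))
        model.fit(X, porosity)

        # Predict porosity for new (uncored) log samples.
        X_new = np.column_stack([rng.uniform(-80, -10, 5),
                                 np.log10(rng.uniform(1, 50, 5))])
        print(model.predict(X_new))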

  15. Large-scale water projects in the developing world: Revisiting the past and looking to the future

    Science.gov (United States)

    Sivakumar, Bellie; Chen, Ji

    2014-05-01

    During the past half a century or so, the developing world has been witnessing a significant increase in freshwater demands due to a combination of factors, including population growth, increased food demand, improved living standards, and water quality degradation. Since there exists significant variability in rainfall and river flow in both space and time, large-scale storage and distribution of water has become a key means to meet these increasing demands. In this regard, large dams and water transfer schemes (including river-linking schemes and virtual water trades) have been playing a key role. While the benefits of such large-scale projects in supplying water for domestic, irrigation, industrial, hydropower, recreational, and other uses both in the countries of their development and in other countries are undeniable, concerns on their negative impacts, such as high initial costs and damages to our ecosystems (e.g. river environment and species) and socio-economic fabric (e.g. relocation and socio-economic changes of affected people) have also been increasing in recent years. These have led to serious debates on the role of large-scale water projects in the developing world and on their future, but the often one-sided nature of such debates have inevitably failed to yield fruitful outcomes thus far. The present study aims to offer a far more balanced perspective on this issue. First, it recognizes and emphasizes the need for still additional large-scale water structures in the developing world in the future, due to the continuing increase in water demands, inefficiency in water use (especially in the agricultural sector), and absence of equivalent and reliable alternatives. Next, it reviews a few important success and failure stories of large-scale water projects in the developing world (and in the developed world), in an effort to arrive at a balanced view on the future role of such projects. Then, it discusses some major challenges in future water planning

  16. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent of shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, and the area equipped with them was more than 10,130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares, over the period 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, the large-scale sprinklers underwent significant or total devastation. Land of the State Farms held by the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered barriers and limitations of many kinds: system-level constraints, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A field inspection of the area showed the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  17. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    ... structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  18. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential risk to the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
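
    For orientation (the report itself should be consulted for the exact correlation adopted), the nondimensional heat release rate commonly used to collapse pool-fire flame-height data is

        \dot{Q}^{*} = \frac{\dot{Q}}{\rho_{\infty}\, c_{p}\, T_{\infty}\, \sqrt{g D}\, D^{2}},

    where \dot{Q} is the fire heat release rate, D the pool diameter, and \rho_{\infty}, c_{p}, T_{\infty} the ambient air density, specific heat and temperature; flame height to diameter ratios L/D are then typically correlated against \dot{Q}^{*} raised to a power of roughly 2/5.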

  19. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    International Nuclear Information System (INIS)

    Alvarez, Marcello; Baldauf, T.; Bond, J. Richard; Dalal, N.; Putter, R. D.; Dore, O.; Green, Daniel; Hirata, Chris; Huang, Zhiqi; Huterer, Dragan; Jeong, Donghui; Johnson, Matthew C.; Krause, Elisabeth; Loverde, Marilena; Meyers, Joel; Meeburg, Daniel; Senatore, Leonardo; Shandera, Sarah; Silverstein, Eva; Slosar, Anze; Smith, Kendrick; Zaldarriaga, Matias; Assassi, Valentin; Braden, Jonathan; Hajian, Amir; Kobayashi, Takeshi; Stein, George; Engelen, Alexander van

    2014-01-01

    The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure is, however, from where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc ...

  20. How Can the Evidence from Global Large-scale Clinical Trials for Cardiovascular Diseases be Improved?

    Science.gov (United States)

    Sawata, Hiroshi; Tsutani, Kiichiro

    2011-06-29

    Clinical investigations are important for obtaining evidence to improve medical treatment. Large-scale clinical trials with thousands of participants are particularly important for this purpose in cardiovascular diseases. Conducting large-scale clinical trials entails high research costs. This study sought to investigate global trends in large-scale clinical trials in cardiovascular diseases. We searched for trials using clinicaltrials.gov (URL: http://www.clinicaltrials.gov/) using the key words 'cardio' and 'event' in all fields on 10 April, 2010. We then selected trials with 300 or more participants examining cardiovascular diseases. The search revealed 344 trials that met our criteria. Of 344 trials, 71% were randomized controlled trials, 15% involved more than 10,000 participants, and 59% were funded by industry. In RCTs whose results were disclosed, 55% of industry-funded trials and 25% of non-industry funded trials reported statistically significant superiority over control (p = 0.012, 2-sided Fisher's exact test). Our findings highlighted concerns regarding potential bias related to funding sources, and that researchers should be aware of the importance of trial information disclosures and conflicts of interest. We should keep considering management and training regarding information disclosures and conflicts of interest for researchers. This could lead to better clinical evidence and further improvements in the development of medical treatment worldwide.
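
    The funding-source comparison quoted above is a 2 x 2 contingency-table problem; the sketch below applies a two-sided Fisher's exact test to hypothetical counts (the paper's actual trial counts are not reproduced here), purely to show the form of the calculation:

        from scipy.stats import fisher_exact

        # Hypothetical counts: rows = industry-funded / non-industry-funded RCTs with
        # disclosed results; columns = (reported significant superiority, did not).
        table = [[55, 45],
                 [15, 45]]
        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")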