WorldWideScience

Sample records for large-scale high-efficiency terrestrial

  1. High efficiency, long life terrestrial solar panel

    Science.gov (United States)

    Chao, T.; Khemthong, S.; Ling, R.; Olah, S.

    1977-01-01

    The design of a high efficiency, long life terrestrial module was completed. It utilized 256 rectangular, high efficiency solar cells to achieve high packing density and electrical output. Tooling for the fabrication of the solar cells was in house, and evaluation of cell performance began. Based on the power output analysis, the goal of a 13% efficient module was achievable.

  2. Impacts of large-scale climatic disturbances on the terrestrial carbon cycle

    Directory of Open Access Journals (Sweden)

    Lucht Wolfgang

    2006-07-01

    Background: The amount of carbon dioxide in the atmosphere steadily increases as a consequence of anthropogenic emissions, but with large interannual variability caused by the terrestrial biosphere. These variations in the CO2 growth rate are caused by large-scale climate anomalies, but the relative contributions of vegetation growth and soil decomposition are uncertain. We use a biogeochemical model of the terrestrial biosphere to differentiate the effects of temperature and precipitation on net primary production (NPP) and heterotrophic respiration (Rh) during the two largest anomalies in atmospheric CO2 increase of the last 25 years. One of these, the smallest atmospheric year-to-year increase (largest land carbon uptake) in that period, was caused by global cooling in 1992/93 after the Pinatubo volcanic eruption. The other, the largest atmospheric increase on record (largest land carbon release), was caused by the strong El Niño event of 1997/98. Results: We find that the LPJ model correctly simulates the magnitude of terrestrial modulation of atmospheric carbon anomalies for these two extreme disturbances. The response of soil respiration to changes in temperature and precipitation explains most of the modelled anomalous CO2 flux. Conclusion: Observed and modelled NEE anomalies are in good agreement; we therefore suggest that the temporal variability of heterotrophic respiration produced by our model is reasonably realistic, and we conclude that during the last 25 years the two largest disturbances of the global carbon cycle were strongly controlled by soil processes rather than by the response of vegetation to these large-scale climatic events.
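
    The flux partitioning above follows the standard bookkeeping identity for net ecosystem exchange (a worked note added here; the sign convention, positive NEE as a net carbon release to the atmosphere, is an assumption on our part):

```latex
\mathrm{NEE} = R_h - \mathrm{NPP},
\qquad
\Delta\mathrm{NEE} = \Delta R_h - \Delta\mathrm{NPP}.
```

    In these terms, the paper's conclusion is that the Rh anomaly, not the NPP anomaly, dominates the NEE anomaly for both the Pinatubo cooling and the 1997/98 El Niño.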

  3. Broad-Scale Comparison of Photosynthesis in Terrestrial and Aquatic Plant Communities

    DEFF Research Database (Denmark)

    Sand-Jensen, Kaj; Krause-Jensen, D.

    1997-01-01

    Comparisons of photosynthesis in terrestrial and aquatic habitats have been impaired by differences in methods and time-scales of measurements. We compiled information on gross photosynthesis at high irradiance and photosynthetic efficiency at low irradiance from 109 published terrestrial studies of forests, grasslands and crops and 319 aquatic studies of phytoplankton, macrophyte and attached microalgal communities to test if specific differences existed between the communities. Maximum gross photosynthesis and photosynthetic efficiency were systematically higher in terrestrial than in aquatic communities, probably due to more efficient light utilization and gas exchange in the terrestrial habitats. By contrast, only small differences were found within different aquatic plant communities or within different terrestrial plant communities.

  4. High-Efficiency, Multijunction Solar Cells for Large-Scale Solar Electricity Generation

    Science.gov (United States)

    Kurtz, Sarah

    2006-03-01

    A solar cell with an infinite number of materials (matched to the solar spectrum) has a theoretical efficiency limit of 68%. If sunlight is concentrated, this limit increases to about 87%. These theoretical limits are calculated from basic physics and are independent of the details of the materials. In practice, the challenge of achieving high efficiency lies in identifying materials that can effectively use the solar spectrum. Impressive progress has been made, with the current efficiency record standing at 39%. Today's solar market is also showing impressive progress, but is still hindered by high prices. One strategy for reducing cost is to use lenses or mirrors to focus the light on small solar cells. In this case, the system cost is dominated by the cost of the relatively inexpensive optics rather than by the cells themselves. The value of the optics increases with the efficiency of the solar cell. Thus, a concentrator system made with 35%-40%-efficient solar cells is expected to deliver 50% more power at a similar cost when compared with a system using 25%-efficient cells. Today's markets are showing an opportunity for large concentrator systems that didn't exist 5-10 years ago. Efficiencies may soon pass 40% and ultimately may reach 50%, providing a pathway to improved performance and decreased cost. Many companies are currently investigating this technology for large-scale electricity generation. The presentation will cover the basic physics and the more practical considerations in achieving high efficiency, as well as describing the current status of the concentrator industry. This work has been authored by an employee of the Midwest Research Institute under Contract No. DE-AC36-99GO10337 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for United States Government purposes.
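
    A quick check of the 50% figure above (our own arithmetic, not from the source): if the optics and balance of system dominate the cost, the power delivered per unit cost scales roughly with cell efficiency, so with 37.5% as the midpoint of the quoted range,

```latex
\frac{P_{35\text{--}40\%}}{P_{25\%}} \;\approx\; \frac{0.375}{0.25} \;=\; 1.5,
```

    i.e. about 50% more power from the same optics at a similar system cost.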

  5. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior and identifying optimum model settings, and is an efficient way to detect potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, and all the more important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service, the Community Land Model (CLM) of NCAR, and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  6. High Efficiency, High Density Terrestrial Panel. [for solar cell modules

    Science.gov (United States)

    Wohlgemuth, J.; Wihl, M.; Rosenfield, T.

    1979-01-01

    Terrestrial panels were fabricated using rectangular cells. Packing densities in excess of 90%, with panel conversion efficiencies greater than 13%, were obtained. Higher-density panels can be produced on a cost-competitive basis with the standard salami panels.

  7. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    Science.gov (United States)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPU and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
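
    A dense-algebra sketch of the delayed scheme described above (our own toy construction from the abstract, not the authors' implementation; class and method names are assumptions): determinant ratios are evaluated against the buffered updates via the Woodbury identity, and accepted rows are applied to the inverse en bloc with matrix-matrix (BLAS-3) operations.

```python
import numpy as np

class DelayedSlaterUpdates:
    """Toy rank-k delayed update for Slater-determinant ratios.

    A Monte Carlo move replaces one row of the Slater matrix A. Accepted
    moves are buffered; the inverse is only corrected en bloc every k moves.
    """

    def __init__(self, A, k=16):
        self.A = A.copy()              # current matrix, pending rows included
        self.Ainv = np.linalg.inv(A)   # inverse of the last *flushed* matrix
        self.k = k                     # delay depth
        self.rows, self.deltas = [], []

    def _solve(self, x):
        """Return Aeff^{-1} x, where Aeff includes the pending row updates."""
        y = self.Ainv @ x
        if not self.rows:
            return y
        m = len(self.rows)
        E = np.zeros((len(x), m)); E[self.rows, np.arange(m)] = 1.0
        D = np.array(self.deltas)                   # pending row changes (m, n)
        AinvE = self.Ainv @ E
        S = np.eye(m) + D @ AinvE                   # small m x m capacitance matrix
        return y - AinvE @ np.linalg.solve(S, D @ y)

    def ratio(self, i, new_row):
        """det(A')/det(A) for replacing row i (matrix determinant lemma)."""
        if i in self.rows:
            self.flush()                            # keep the sketch simple on revisits
        e = np.zeros(self.A.shape[0]); e[i] = 1.0
        return 1.0 + (new_row - self.A[i]) @ self._solve(e)

    def accept(self, i, new_row):
        self.rows.append(i)
        self.deltas.append(new_row - self.A[i])
        self.A[i] = new_row
        if len(self.rows) >= self.k:
            self.flush()

    def flush(self):
        """Apply all buffered replacements to Ainv at once (rank-k Woodbury)."""
        if self.rows:
            m = len(self.rows)
            E = np.zeros((self.A.shape[0], m)); E[self.rows, np.arange(m)] = 1.0
            D = np.array(self.deltas)
            AinvE = self.Ainv @ E
            S = np.eye(m) + D @ AinvE
            self.Ainv -= AinvE @ np.linalg.solve(S, D @ self.Ainv)
            self.rows, self.deltas = [], []

# consistency check against direct determinants
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 4 * np.eye(8)
s = DelayedSlaterUpdates(A, k=4)
new = rng.standard_normal(8)
B = A.copy(); B[3] = new
assert np.isclose(s.ratio(3, new), np.linalg.det(B) / np.linalg.det(A))
```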

  8. Carbon dioxide efficiency of terrestrial enhanced weathering.

    Science.gov (United States)

    Moosdorf, Nils; Renforth, Phil; Hartmann, Jens

    2014-05-06

    Terrestrial enhanced weathering, the spreading of ultramafic silicate rock flour to enhance natural weathering rates, has been suggested as part of a strategy to reduce global atmospheric CO2 levels. We budget potential CO2 sequestration against associated CO2 emissions to assess the net CO2 removal of terrestrial enhanced weathering. We combine global spatial data sets of potential source rocks, transport networks, and application areas with associated CO2 emissions in optimistic and pessimistic scenarios. The results show that the choice of source rocks and the material comminution technique dominate the CO2 efficiency of enhanced weathering. CO2 emissions from transport amount on average to 0.5-3% of the potentially sequestered CO2. The emissions of material mining and application are negligible. After accounting for all emissions, 0.5-1.0 t of CO2 can be sequestered on average per tonne of rock, translating into an energy cost of 1.6 to 9.9 GJ per tonne of CO2 sequestered by enhanced weathering. However, controlling or substantially reducing atmospheric CO2 concentrations with enhanced weathering would require very large amounts of rock. Before enhanced weathering could be applied on large scales, more research is needed to assess weathering rates, potential side effects, social acceptability, and mechanisms of governance.
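
    As a worked example of such a budget (our own illustrative numbers, loudly assumed rather than taken from the paper), the bicarbonate stoichiometry of forsterite sets the gross potential, and grinding plus transport emissions are debited against it:

```python
# Illustrative net-CO2 budget for enhanced weathering. All parameter values
# below are assumptions for demonstration, not data from Moosdorf et al.
M_CO2, M_FORSTERITE = 44.01, 140.69        # molar masses, g/mol
# Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
potential = 4 * M_CO2 / M_FORSTERITE        # ~1.25 t CO2 per t pure forsterite

rock_purity = 0.8                           # assumed reactive-mineral fraction of dunite
grinding_kwh_per_t = 150.0                  # assumed comminution energy per t rock
grid_kgCO2_per_kwh = 0.6                    # assumed electricity carbon intensity
transport_fraction = 0.02                   # within the paper's 0.5-3% range

gross = potential * rock_purity                               # t CO2 per t rock
grinding = grinding_kwh_per_t * grid_kgCO2_per_kwh / 1000.0   # t CO2 per t rock
net = gross * (1.0 - transport_fraction) - grinding
print(f"gross {gross:.2f} t, grinding debit {grinding:.2f} t, net {net:.2f} t CO2/t rock")
# -> net ~0.89 t CO2 per t rock, inside the paper's 0.5-1.0 t range
```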

  9. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    International Nuclear Information System (INIS)

    Jin Zhenxing; Wu Yong; Li Baizhan; Gao Yafeng

    2009-01-01

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles (a lack of basic statistical data, a lack of a service market for building energy saving, and a lack of effective management measures) account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the roles of its five basic subsystems, and analyzes the working mechanism of those five subsystems. The energy efficiency supervision system for large-scale public buildings takes energy consumption statistics as its data basis, energy auditing as technical support, an energy consumption quota as the benchmark of energy saving, price increases beyond the quota as a price lever, and public notice of energy efficiency as an amplifier. The supervision system promotes energy-efficient operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China.

  12. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    International Nuclear Information System (INIS)

    O'Brien, James E.

    2010-01-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high-temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands, and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a 'hydrogen economy.' The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
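
    A quick consistency check of the laboratory figures (our own arithmetic, assuming the quoted rate refers to standard temperature and pressure):

```python
# Chemical power carried by 5500 L/hr of hydrogen (STP assumed; illustrative).
MOLAR_VOLUME = 22.414          # L/mol at 0 degC, 1 atm
LHV_H2 = 241.8e3               # J/mol, lower heating value of H2
rate = 5500.0 / MOLAR_VOLUME / 3600.0        # mol/s
print(f"{rate * LHV_H2 / 1000.0:.1f} kW")    # ~16.5 kW (LHV)
# Comparable to the 15 kW electrical scale because part of the splitting
# enthalpy is supplied as high-temperature process heat, not electricity.
```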

  13. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in processing 3D, high-resolution images of real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFUs), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging of large-scale seismic data.
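
    The asynchronous double-buffering idea can be sketched independently of CUDA: while one chunk is being processed, the next is staged into a second buffer, so transfer and compute overlap. A minimal host-side sketch (our own illustration; the function names and thread-pool mechanics are assumptions, not the authors' code):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def load_chunk(i):
    """Stage chunk i (stands in for a host-to-device copy of traces)."""
    return np.random.rand(1024, 64)

def migrate(chunk):
    """Stand-in for the per-chunk stationary-phase QPSTM imaging kernel."""
    return chunk.sum()

def double_buffered(n_chunks):
    image = 0.0
    with ThreadPoolExecutor(max_workers=1) as staging:
        pending = staging.submit(load_chunk, 0)   # fill buffer A
        for i in range(n_chunks):
            chunk = pending.result()              # wait until the buffer is ready
            if i + 1 < n_chunks:
                pending = staging.submit(load_chunk, i + 1)  # fill buffer B
            image += migrate(chunk)               # compute overlaps the next transfer
    return image

print(double_buffered(8))
```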

  14. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofitting period, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. The large-scale BEER is here studied with a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified on a large BEER project, and the results indicate that promising energy and cost savings can be achieved in the general TBT framework. A compact statement of the optimization is sketched after the highlights.
    - Highlights:
    • Energy efficiency retrofit of many buildings is studied.
    • A TBT (time-building-technology) framework is proposed.
    • The control system of the large-scale BEER is modeled.
    • The optimal retrofitting strategy is obtained.
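
    The TBT structure admits a compact statement (an illustrative formulation under assumed notation, not the authors' exact model): with binary variables x_{tbk} = 1 if technology k is installed in building b in year t, annual savings s_{bk}, retrofit costs c_{bk}, annual budgets C_t and discount rate d,

```latex
\max_{x}\;\sum_{t=1}^{T}\frac{1}{(1+d)^{t}}
  \Biggl(\sum_{b=1}^{B}\sum_{k=1}^{K} s_{bk}\sum_{\tau\le t} x_{\tau bk}
         \;-\;\sum_{b=1}^{B}\sum_{k=1}^{K} c_{bk}\,x_{tbk}\Biggr)
\quad\text{s.t.}\quad
\sum_{b,k} c_{bk}\,x_{tbk}\le C_{t},\qquad
\sum_{t} x_{tbk}\le 1,\qquad x_{tbk}\in\{0,1\}.
```

    The retrofitting strategy that "changes dynamically along the dimensions of time, building and technology" corresponds to the solution x over all three indices.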

  15. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application context and thus misses the opportunity to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  16. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology with respect to observed patterns on large scales and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  17. Innovation-driven efficient development of the Longwangmiao Fm large-scale sulfur gas reservoir in Moxi block, Sichuan Basin

    Directory of Open Access Journals (Sweden)

    Xinhua Ma

    2016-03-01

    The Lower Cambrian Longwangmiao Fm gas reservoir in the Moxi block of the Anyue Gas Field, Sichuan Basin, is the largest single-sandbody integrated carbonate gas reservoir proved so far in China. Notwithstanding this reservoir's advantages, such as large-scale reserves and high single-well productivity, multiple complicated factors restrict its efficient development: a medium content of hydrogen sulfide, low porosity and strong heterogeneity of the fracture-cave formations, various modes of gas-water occurrence, and a close relation between overpressure and stress sensitivity. Since only a few Cambrian large-scale carbonate gas reservoirs have ever been developed in the world, there still exist some blind spots, especially regarding its exploration and production rules. Besides, for large-scale sulfur gas reservoirs, exploration and construction are costly, and production testing in the early evaluation stage is severely limited, all of which brings great challenges in productivity construction and high potential risks. In this regard, in line with China's strategic demand of strengthening clean energy supply security, the PetroChina Southwest Oil & Gas Field Company has carried out research and field tests aimed at providing high-production wells, optimizing development design, rapidly constructing high-quality productivity and upgrading HSE security in the Longwangmiao Fm gas reservoir in the Moxi block. Through innovations in technology and management mode within 3 years, this gas reservoir has been built into a modern large-scale gas field with high quality, high efficiency and high benefit, and its annual capacity is now over 100 × 10⁸ m³, with the desirable production capacity and development indexes gained as originally anticipated. It has become a new model of efficient development of large-scale gas reservoirs, providing a reference for other types of gas reservoirs in China.

  18. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. There is abundant evidence not only that impact cratering was an important surface process in planetary history, but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though they no longer occur with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  19. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    Science.gov (United States)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from the complete point cloud and high-resolution images requires the matching of images and point cloud, the acquisition of homonymous feature points, data registration, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud has depended on inefficient manual search. The effective classification and management of a large number of images, and the matching of large-scale images to the corresponding point cloud, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android-based app that takes pictures and records the related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. Based on the one-to-one correspondence between the global image and the point-cloud reflection-intensity image, the automatic matching of each image and its corresponding laser point cloud is realized. Finally, the mapping relationships between the global image, the local images within the global image, and the intensity image are established according to homonymous feature points, so that a data structure linking the global image, the local images, and the corresponding point cloud can be built, enabling visual management and querying of the images.

  20. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-01-01

    Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices; this novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10,000-80,000 computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that a large part of this machine's peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

  1. Corrugation Architecture Enabled Ultraflexible Wafer-Scale High-Efficiency Monocrystalline Silicon Solar Cell

    KAUST Repository

    Bahabry, Rabab R.

    2018-01-02

    Advanced classes of modern applications require a new generation of versatile solar cells showcasing extreme mechanical resilience, large scale, low cost, and excellent power conversion efficiency. Conventional crystalline silicon-based solar cells offer one of the most efficient power sources, but a key challenge remains to attain mechanical resilience while preserving electrical performance. A complementary metal oxide semiconductor-based integration strategy is reported in which a corrugation architecture enables ultraflexible, low-cost solar cell modules from bulk monocrystalline large-scale (127 × 127 mm) silicon solar wafers with a 17% power conversion efficiency. The periodic corrugated array benefits from an interchangeable solar cell segmentation scheme which preserves the active silicon thickness of 240 μm and achieves flexibility via interdigitated back contacts. These cells can reversibly withstand high mechanical stress and can be deformed into zigzag and bifacial modules. The corrugated silicon-based solar cells offer ultraflexibility with high stability over 1000 bending cycles, including convex and concave bending, broadening the application spectrum. Finally, a minimum bending radius of curvature below 140 μm is demonstrated for the back contacts that carry the solar cell segments.

  3. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activates a different set of objects, which makes it infeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
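
    The core idea (build per-pixel lists of non-empty ray segments in an object-order pass, then sample only inside those segments) can be sketched compactly. Below is a simplified one-ray illustration under assumed data structures, not the authors' GPU implementation:

```python
import numpy as np

def build_segment_list(boxes, ray_o, ray_d):
    """Object-order pass: intersect occupied bounding boxes with one ray and
    merge the [t_enter, t_exit] intervals into a sorted segment list.
    Assumes all components of ray_d are nonzero."""
    hits = []
    for lo, hi in boxes:                          # axis-aligned boxes
        t0, t1 = (lo - ray_o) / ray_d, (hi - ray_o) / ray_d
        t_enter = np.minimum(t0, t1).max()
        t_exit = np.maximum(t0, t1).min()
        if t_enter < t_exit:
            hits.append([t_enter, t_exit])
    hits.sort()
    merged = []
    for s, e in hits:                             # merge overlapping intervals
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return merged

def raycast(segments, sample, dt=0.5):
    """Image-order pass: leap over empty space, sampling only inside segments."""
    acc = 0.0
    for s, e in segments:
        for t in np.arange(s, e, dt):
            acc += sample(t)                      # stand-in for sampling/compositing
    return acc

boxes = [(np.array([2., 2., 2.]), np.array([4., 4., 4.])),
         (np.array([7., 7., 7.]), np.array([9., 9., 9.]))]
segs = build_segment_list(boxes, np.zeros(3), np.ones(3))
print(segs, raycast(segs, lambda t: 1.0))         # two segments; 8 samples taken
```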

  5. Implementing effect of energy efficiency supervision system for government office buildings and large-scale public buildings in China

    International Nuclear Information System (INIS)

    Zhao Jing; Wu Yong; Zhu Neng

    2009-01-01

    The Chinese central government released a document initiating the construction of an energy efficiency supervision system for government office buildings and large-scale public buildings in 2007, which marks the overall start of energy efficiency management of existing buildings in China, with government office buildings and large-scale public buildings as a breakthrough. This paper focuses on the implementation effects in the demonstration regions all over China for less than one year. It first introduces the target and path of the energy efficiency supervision system, then describes the achievements and problems during the implementation process in the first demonstration provinces and cities. Data from the energy efficiency public notices in some typical demonstration provinces and cities were analyzed statistically. It can be concluded that buildings with different functions have different energy consumption, and that the average energy consumption of large-scale public buildings in China is too high compared with common public buildings and residential buildings. The obstacles that need to be overcome are summarized, and prospects for future work are put forward at the end.

  8. Large-scale high-resolution non-invasive geophysical archaeological prospection for the investigation of entire archaeological landscapes

    Science.gov (United States)

    Trinks, Immo; Neubauer, Wolfgang; Hinterleitner, Alois; Kucera, Matthias; Löcker, Klaus; Nau, Erich; Wallner, Mario; Gabler, Manuel; Zitz, Thomas

    2014-05-01

    Over the past three years, the Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology (http://archpro.lbg.ac.at), founded in Vienna in 2010, in collaboration with its ten European partner organizations, has made considerable progress in the development and application of near-surface geophysical survey technology and methodology, mapping square kilometres rather than hectares at unprecedented spatial resolution. The use of multiple novel motorized multichannel GPR and magnetometer systems (both Förster/Fluxgate and Caesium type) in combination with advanced, centimetre-precise positioning systems (robotic total stations and real-time kinematic GPS) permitting efficient navigation in open fields has resulted in comprehensive blanket-coverage archaeological prospection surveys of important cultural heritage sites, such as the landscape surrounding Stonehenge in the framework of the Stonehenge Hidden Landscapes Project, the mapping of the World Cultural Heritage site Birka-Hovgården in Sweden, and the detailed investigation of the Roman urban landscape of Carnuntum near Vienna. Efficient state-of-the-art archaeological prospection survey solutions require adequate fieldwork methodologies and appropriate data processing tools for timely quality control of the data in the field and large-scale data visualisations after arrival back in the office. The processed and optimized visualisations of the geophysical measurement data provide the basis for the subsequent archaeological interpretation. Integration of the high-resolution geophysical prospection data with remote sensing data acquired through aerial photography, airborne laser and hyperspectral scanning, terrestrial laser scanning, or detailed digital terrain models derived through photogrammetric methods permits improved understanding and spatial analysis as well as the preparation of comprehensible presentations for the stakeholders (scientific community, cultural heritage managers, public). Of

  9. Ultra-high efficiency photovoltaic cells for large scale solar power generation.

    Science.gov (United States)

    Nakano, Yoshiaki

    2012-01-01

    The primary targets of our project are to drastically improve the photovoltaic conversion efficiency and to develop new energy storage and delivery technologies. Our approach to obtaining an efficiency over 40% starts from the improvement of III-V multi-junction solar cells by introducing a novel material for each cell, realizing an ideal combination of bandgaps and lattice matching. Further improvement incorporates quantum structures such as stacked quantum wells and quantum dots, which allow a higher degree of freedom in the design of the bandgap and the lattice strain. Highly controlled arrangement of either quantum dots or quantum wells permits the coupling of the wavefunctions, and thus forms intermediate bands in the bandgap of a host material, which allows multiple photon absorption, theoretically leading to a conversion efficiency exceeding 50%. In addition to such improvements, microfabrication technology for integrated high-efficiency cells and the development of novel material systems that realize high efficiency and low cost at the same time are investigated.

  10. A potential mechanism for allometric trabecular bone scaling in terrestrial mammals.

    Science.gov (United States)

    Christen, Patrik; Ito, Keita; van Rietbergen, Bert

    2015-03-01

    Trabecular bone microstructural parameters, including trabecular thickness, spacing, and number, have been reported to scale with animal size with negative allometry, whereas bone volume fraction is animal size-invariant in terrestrial mammals. As for the majority of scaling patterns described in animals, the underlying mechanism is unknown. However, it has also been found that osteocyte density is inversely related to animal size, possibly adapted to metabolic rate, which shows a negative relationship as well. In addition, the signalling reach of osteocytes is limited by the extent of the lacuno-canalicular network, which depends on trabecular dimensions and thus also on animal size. Here we propose animal size-dependent variations in osteocyte density and signalling influence distance as a potential mechanism for negative allometric trabecular bone scaling in terrestrial mammals. Using an established and tested computational model of bone modelling and remodelling, we ran simulations with different osteocyte densities and influence distances mimicking six terrestrial mammals covering a large range of body masses. The simulated trabecular structures revealed negative allometric scaling of trabecular thickness, spacing, and number, a constant bone volume fraction, and bone turnover rates inversely related to animal size. These results are in agreement with previous observations, supporting our proposal of osteocyte density and influence distance variation as a potential mechanism for negative allometric trabecular bone scaling in terrestrial mammals. The inverse relationship between bone turnover rates and animal size further indicates that trabecular bone scaling may be linked to metabolic rather than mechanical adaptations. © 2015 Anatomical Society.

  11. Efficient purification and concentration of viruses from a large body of high turbidity seawater.

    Science.gov (United States)

    Sun, Guowei; Xiao, Jinzhou; Wang, Hongming; Gong, Chaowen; Pan, Yingjie; Yan, Shuling; Wang, Yongjie

    2014-01-01

    Marine viruses are the most abundant entities in the ocean and play crucial roles in the marine ecological system. However, understanding of viral diversity on a large scale depends on efficient and reliable viral purification and concentration techniques. Here, we report the development of an efficient method to purify and concentrate viruses from large bodies of high-turbidity seawater. The developed method is characterized by high viral recovery efficiency, a high concentration factor, high viral particle densities and high throughput, and is reliable for viral concentration from high-turbidity seawater. Recovered viral particles were used directly for subsequent analysis by epifluorescence microscopy, transmission electron microscopy and metagenomic sequencing. Three points are essential for this method:
    • The sampled seawater (>150 L) was initially divided into two parts, a water fraction and a settled-matter fraction, after natural sedimentation.
    • Both the viruses in the water fraction, concentrated by tangential flow filtration (TFF), and the viruses isolated from the settled-matter fraction were considered to constitute the whole viral community of the high-turbidity seawater.
    • The viral concentrates were re-concentrated using a centrifugal filter device in order to obtain a high density of viral particles.

  12. Efficient Selection of Multiple Objects on a Large Scale

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    2012-01-01

    The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. E.g. in small-scale MOS, only a few objects need to be simultaneously selected. This may ... consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance

  13. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    Science.gov (United States)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered a climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL), we comprehensively assess the effectiveness, biogeochemical side effects and trade-offs from an earth-system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, under the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and the changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering between 1.1 and 4.2 Gha would produce a climate reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and the achievable reductions of global warming drop substantially, to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales in wisely chosen areas.

  14. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    The normal vector estimation of large-scale scattered point clouds (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge of the sharp increase in point-cloud size, mainly owing to its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation of LSSPC. We divide the point set into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normal vectors of the points in each cube is realized. The proposed approach is accurate, simple, and highly efficient, because the algorithm only needs to search neighbours and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points of the cloud. Experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.
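
    The two ingredients named above (a uniform cube partition for fast neighbour search, and normals interpolated from values computed at a sparse set of nodes) can be sketched as follows. This is our own simplified illustration: it uses PCA plane-fit normals at cube-corner nodes and tri-linear interpolation inside each cube, with names and parameters assumed rather than taken from the paper:

```python
import numpy as np
from collections import defaultdict

def pca_normal(neigh):
    """Unit normal of the best-fit plane (smallest singular vector), flipped
    into the +z hemisphere for a consistent orientation in this demo."""
    _, _, vt = np.linalg.svd(neigh - neigh.mean(axis=0), full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n

def estimate_normals(points, h=0.1):
    grid = defaultdict(list)                        # 1) hash points into cubes
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / h).astype(int))].append(i)

    node_normal = {}                                # 2) PCA normals at cube corners
    def normal_at(node):
        if node not in node_normal:
            idx = []
            for d in np.ndindex(2, 2, 2):           # the 8 cubes touching this corner
                idx += grid.get(tuple(np.array(node) - np.array(d)), [])
            node_normal[node] = pca_normal(points[idx]) if len(idx) >= 3 else np.zeros(3)
        return node_normal[node]

    normals = np.zeros_like(points)                 # 3) tri-linear interpolation
    for i, p in enumerate(points):
        base = np.floor(p / h)
        f = p / h - base                            # fractional position in the cube
        n = np.zeros(3)
        for d in np.ndindex(2, 2, 2):
            w = np.prod(np.where(np.array(d) == 1, f, 1.0 - f))
            n += w * normal_at(tuple((base + np.array(d)).astype(int)))
        length = np.linalg.norm(n)
        normals[i] = n / length if length > 0 else n
    return normals

pts = np.random.rand(2000, 3); pts[:, 2] = 0.5      # points on the plane z = 0.5
print(estimate_normals(pts)[:3])                    # normals close to (0, 0, 1)
```

    The speedup claimed in the abstract comes from step 2 being evaluated only at the nodes (cached here in node_normal), which are far fewer than the points handled in step 3.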

  15. Spatial scale and β-diversity of terrestrial vertebrates in Mexico

    OpenAIRE

    Ochoa-Ochoa, Leticia M.; Munguía, Mariana; Lira-Noriega, Andrés; Sánchez-Cordero, Víctor; Flores-Villela, Oscar; Navarro-Sigüenza, Adolfo; Rodríguez, Pilar

    2014-01-01

    Patterns of diversity are scale dependent, and beta-diversity is no exception. Mexico is megadiverse due to its high beta-diversity, but little is known about whether this is scale-dependent and/or taxon-dependent. We explored these questions based on the self-similarity hypothesis of beta-diversity across spatial scales. Using the geographic distribution ranges of 2,513 species, we compared the beta-diversity patterns of 4 groups of terrestrial vertebrates across 7 spatial scales (from ~10 km² to 16...

  16. Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems

    Directory of Open Access Journals (Sweden)

    Ju-min Zhao

    2017-01-01

    Radio Frequency Identification (RFID) is an emerging technology for the electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking them. But in their current form, RFID systems are susceptible to cloning attacks, which seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning-attack identification in the multi-reader scenario and propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multi-reader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.
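
    The abstract does not spell out CAIP's mechanics, so the sketch below only illustrates a generic principle used by many cloning-detection protocols: each tag answers in a slot derived from a shared hash, and a cloned ID (held by two physical transponders) betrays itself through inconsistent replies in its slot. The names and the nonce mechanism are our assumptions:

```python
import random

def detect_clones(tag_ids, cloned, n_slots=64, rounds=8):
    """Toy framed-slotted-ALOHA cloning detection (not the CAIP protocol)."""
    suspected = set()
    for _ in range(rounds):
        seed = random.random()
        frame = {}
        for t in tag_ids:
            slot = hash((t, seed)) % n_slots
            # every physical transponder replies with its own random nonce;
            # a cloned ID therefore produces two distinct nonces in one slot
            replies = {random.getrandbits(16) for _ in range(2 if t in cloned else 1)}
            frame.setdefault(slot, []).append((t, replies))
        for entries in frame.values():
            for t, replies in entries:
                if len(replies) > 1:          # conflicting answers for the same ID
                    suspected.add(t)
        if suspected == cloned:
            break
    return suspected

tags = list(range(1000))
clones = {17, 256, 888}
print(detect_clones(tags, clones) == clones)  # True with high probability
```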

  17. Global climate change: Mitigation opportunities high efficiency large chiller technology

    Energy Technology Data Exchange (ETDEWEB)

    Stanga, M.V.

    1997-12-31

    This paper, composed of presentation viewgraphs, examines the impact of high-efficiency large chiller technology on world electricity consumption and carbon dioxide emissions. Background data are summarized, and sample calculations are presented. The calculations show that presently available high-efficiency chiller technology can substantially reduce the energy consumption of large chillers. If this technology is widely implemented on a global basis, it could reduce carbon dioxide emissions by 65 million tons by 2010.

  18. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale coded apertures, at wavelengths where diffraction effects are significant, has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each line-of-sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the set of power patterns is summed to produce the full-waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale coded apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
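
    As a hedged sketch of the computation described above: a scalar spherical-wave model, a single wavelength, and one LOS direction; the grid sizes and the tiny three-hole mask are illustrative choices, not from the paper.

        import numpy as np

        def reference_field(n, wavelength, distance, pitch):
            # complex field on an oversized (2n x 2n) plane from one central
            # pinhole: scalar spherical-wave phase sampled on the detector grid
            ax = (np.arange(2 * n) - n) * pitch
            X, Y = np.meshgrid(ax, ax)
            r = np.sqrt(X**2 + Y**2 + distance**2)
            return np.exp(2j * np.pi * r / wavelength) / r

        def coded_aperture_pattern(holes, n, wavelength, distance, pitch, theta=(0.0, 0.0)):
            # shift the reference field per hole, apply the hole/direction phase,
            # sum coherently, and square to get the power pattern
            ref = reference_field(n, wavelength, distance, pitch)
            k = 2 * np.pi / wavelength
            field = np.zeros((n, n), dtype=complex)
            for hx, hy in holes:                              # hole offsets in pixels
                sub = ref[n // 2 - hy: n // 2 - hy + n, n // 2 - hx: n // 2 - hx + n]
                phase = np.exp(1j * k * pitch * (hx * theta[0] + hy * theta[1]))
                field += sub * phase
            return np.abs(field) ** 2

        holes = [(0, 0), (10, -4), (-7, 12)]                  # a tiny 3-hole "mask"
        img = coded_aperture_pattern(holes, n=256, wavelength=5e-6, distance=0.1, pitch=1e-5)
        print(img.shape, float(img.max()))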

  19. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated...
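
    As a toy illustration of predicate-based event filtering of the kind described above — subscribers register filters and only matching events are forwarded, which is what cuts monitoring traffic. The classes and API here are invented for the example, not taken from the paper's architecture.

        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class Event:
            source: str
            kind: str
            payload: Any

        class EventFilterHub:
            # subscribers register predicates; only matching events are
            # forwarded, reducing monitoring traffic in the system
            def __init__(self):
                self.subs = []
            def subscribe(self, predicate: Callable[[Event], bool], callback):
                self.subs.append((predicate, callback))
            def publish(self, event: Event):
                for predicate, callback in self.subs:
                    if predicate(event):
                        callback(event)

        hub = EventFilterHub()
        hub.subscribe(lambda e: e.kind == "error" and e.source.startswith("node-"),
                      lambda e: print("debugger receives:", e))
        hub.publish(Event("node-3", "error", {"code": 500}))   # forwarded
        hub.publish(Event("node-3", "heartbeat", {}))          # filtered out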

  20. Terrestrial nitrogen-carbon cycle interactions at the global scale.

    Science.gov (United States)

    Zaehle, S

    2013-07-05

    Interactions between the terrestrial nitrogen (N) and carbon (C) cycles shape the response of ecosystems to global change. However, the global distribution of nitrogen availability and its importance in global biogeochemistry and biogeochemical interactions with the climate system remain uncertain. Based on projections of a terrestrial biosphere model scaling ecological understanding of nitrogen-carbon cycle interactions to global scales, anthropogenic nitrogen additions since 1860 are estimated to have enriched the terrestrial biosphere by 1.3 Pg N, supporting the sequestration of 11.2 Pg C. Over the same time period, CO2 fertilization has increased terrestrial carbon storage by 134.0 Pg C, increasing the terrestrial nitrogen stock by 1.2 Pg N. In 2001-2010, terrestrial ecosystems sequestered an estimated total of 27 Tg N yr⁻¹ (1.9 Pg C yr⁻¹), of which 10 Tg N yr⁻¹ (0.2 Pg C yr⁻¹) are due to anthropogenic nitrogen deposition. Nitrogen availability already limits terrestrial carbon sequestration in the boreal and temperate zone, and will constrain future carbon sequestration in response to CO2 fertilization (regionally by up to 70% compared with an estimate without considering nitrogen-carbon interactions). This reduced terrestrial carbon uptake will probably dominate the role of the terrestrial nitrogen cycle in the climate system, as it accelerates the accumulation of anthropogenic CO2 in the atmosphere. However, increases in N2O emissions owing to anthropogenic nitrogen and climate change (at a rate of approx. 0.5 Tg N yr⁻¹ per 1°C of climate warming) will add an important long-term climate forcing.

  1. Efficient graph-based dynamic load-balancing for parallel large-scale agent-based traffic simulation

    NARCIS (Netherlands)

    Xu, Y.; Cai, W.; Aydt, H.; Lees, M.; Tolk, A.; Diallo, S.Y.; Ryzhov, I.O.; Yilmaz, L.; Buckley, S.; Miller, J.A.

    2014-01-01

    One of the issues of parallelizing large-scale agent-based traffic simulations is partitioning and load-balancing. Traffic simulations are dynamic applications where the distribution of workload in the spatial domain constantly changes. Dynamic load-balancing at run-time has shown better efficiency

  2. Assessment of clean development mechanism potential of large-scale energy efficiency measures in heavy industries

    International Nuclear Information System (INIS)

    Hayashi, Daisuke; Krey, Matthias

    2007-01-01

    This paper assesses clean development mechanism (CDM) potential of large-scale energy efficiency measures in selected heavy industries (iron and steel, cement, aluminium, pulp and paper, and ammonia), taking India and Brazil as examples of CDM project host countries. We have chosen two criteria for identification of the CDM potential of each energy efficiency measure: (i) emission reductions volume (in CO2e) that can be expected from the measure and (ii) likelihood of the measure passing the additionality test of the CDM Executive Board (EB) when submitted as a proposed CDM project activity. The paper shows that the CDM potential of large-scale energy efficiency measures strongly depends on the project-specific and country-specific context. In particular, technologies for the iron and steel industry (coke dry quenching (CDQ), top pressure recovery turbine (TRT), and basic oxygen furnace (BOF) gas recovery), the aluminium industry (point feeder prebake (PFPB) smelter), and the pulp and paper industry (continuous digester technology) offer promising CDM potential.

  3. A new framework to increase the efficiency of large-scale solar power plants.

    Science.gov (United States)

    Alimohammadi, Shahrouz; Kleissl, Jan P.

    2015-11-01

    A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting (WRF) model. The framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements in different scenarios are observed.
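
    The statistical core named above is Gaussian process regression (kriging). A minimal spatial-only sketch with a squared-exponential kernel follows; all data and hyperparameters below are synthetic stand-ins, and the actual framework is spatio-temporal and fuses satellite data with WRF output.

        import numpy as np

        def rbf(x1, x2, ell=50.0, sigma=1.0):
            # squared-exponential covariance between two sets of 2-D locations
            d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
            return sigma ** 2 * np.exp(-0.5 * d2 / ell ** 2)

        def krige(X_obs, y_obs, X_new, noise=0.1):
            # GP-regression posterior mean and variance (simple kriging)
            K = rbf(X_obs, X_obs) + noise ** 2 * np.eye(len(X_obs))
            Ks = rbf(X_new, X_obs)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
            mean = Ks @ alpha
            v = np.linalg.solve(L, Ks.T)
            var = rbf(X_new, X_new).diagonal() - (v ** 2).sum(0)
            return mean, var

        # toy: irradiance anomalies at scattered pyranometers, predicted on a grid
        rng = np.random.default_rng(0)
        obs = rng.random((40, 2)) * 200                      # station coordinates (km)
        y = np.sin(obs[:, 0] / 30.0) + 0.1 * rng.standard_normal(40)
        gx, gy = np.meshgrid(np.linspace(0, 200, 5), np.linspace(0, 200, 5))
        grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
        mu, var = krige(obs, y, grid)
        print(mu.round(2))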

  4. Increased light-use efficiency in northern terrestrial ecosystems indicated by CO 2 and greening observations: INCREASE IN NH LIGHT USE EFFICIENCY

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Rebecca T. [Science and Solutions for a Changing Planet DTP, Imperial College London, London UK; AXA Chair Programme in Biosphere and Climate Impacts, Department of Life Sciences, Imperial College London, London UK; Department of Physics, Imperial College London, London UK; Prentice, Iain Colin [AXA Chair Programme in Biosphere and Climate Impacts, Department of Life Sciences, Imperial College London, London UK; Grantham Institute: Climate Change and the Environment, Imperial College London, London UK; Graven, Heather [Department of Physics, Imperial College London, London UK; Grantham Institute: Climate Change and the Environment, Imperial College London, London UK; Ciais, Philippe [Laboratoire des Sciences du Climat et de l'Environnement, Saint-Aubin France; Fisher, Joshua B. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena California USA; Hayes, Daniel J. [School of Forest Resources, University of Maine, Orono Maine USA; Huang, Maoyi [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland Washington USA; Huntzinger, Deborah N. [School of Earth Sciences and Environmental Sustainability, Northern Arizona University, Flagstaff Arizona USA; Ito, Akihiko [Center for Global Environmental Research, National Institute for Environmental Studies, Tsukuba Japan; Jain, Atul [Department of Atmospheric Sciences, University of Illinois at Urbana-Champaign, Urbana Illinois USA; Mao, Jiafu [Climate Change Science Institute and Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Michalak, Anna M. [Department of Global Ecology, Carnegie Institution for Science, Stanford California USA; Peng, Shushi [Sino-French Institute for Earth System Science, College of Urban and Environmental Sciences, Peking University, Beijing China; Poulter, Benjamin [Department of Ecology, Montana State University, Bozeman Montana USA; Ricciuto, Daniel M. [Climate Change Science Institute and Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Shi, Xiaoying [Climate Change Science Institute and Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Schwalm, Christopher [Woods Hole Research Center, Falmouth Massachusetts USA; Tian, Hanqin [International Center for Climate and Global Change Research, School of Forestry and Wildlife Sciences, Auburn University, Auburn Alabama USA; Zeng, Ning [Department of Atmospheric and Oceanic Science and Earth System Science Interdisciplinary Center, University of Maryland, College Park Maryland USA

    2016-11-04

    Observations show an increase of 56 ± 9.8% in the amplitude of the seasonal cycle of CO2 (ASC) north of 45°N over the last 50 years, and an increase in vegetation greenness of 7.5-15% in high northern latitudes since the 1980s. However, the causes of these changes remain uncertain. Historical simulations from terrestrial biosphere models in the Multiscale Synthesis and Terrestrial Model Intercomparison Project are compared to the ASC and greenness observations, using the TM3 atmospheric transport model to translate surface fluxes into CO2 concentrations. We find that the modeled change in ASC is too small but the mean greening trend is generally captured. Modeled increases in greenness are primarily driven by warming, whereas ASC changes are primarily driven by increasing CO2. We suggest that increases in ecosystem-scale light use efficiency (LUE) have contributed to the observed ASC increase but are underestimated by current models. We highlight potential mechanisms that could increase modeled LUE.

  5. Large-area high-efficiency flexible PHOLED lighting panels

    Science.gov (United States)

    Pang, Huiqing; Mandlik, Prashant; Levermore, Peter A.; Silvernail, Jeff; Ma, Ruiqing; Brown, Julie J.

    2012-09-01

    Organic Light Emitting Diodes (OLEDs) provide various attractive features for next-generation illumination systems, including high efficiency, low power, and a thin and flexible form factor. In this work, we incorporated phosphorescent emitters and demonstrated highly efficient white phosphorescent OLED (PHOLED) devices on flexible plastic substrates. The 0.94 cm² small-area device has a total thickness of approximately 0.25 mm and achieved 63 lm/W at 1,000 cd/m² with CRI = 85 and CCT = 2920 K. We further designed and fabricated a 15 cm x 15 cm large-area flexible white OLED lighting panel, finished with a hybrid ultra-low-permeability single-layer barrier (SLB) encapsulation film. The flexible panel has an active area of 116.4 cm² and achieved a power efficacy of 47 lm/W at 1,000 cd/m² with CRI = 83 and CCT = 3470 K. The efficacy of the panel at 3,000 cd/m² is 43 lm/W. Large-area flexible PHOLED lighting panels open up enormous possibilities for future general lighting applications.

  6. Large-Scale Nanophotonic Solar Selective Absorbers for High-Efficiency Solar Thermal Energy Conversion.

    Science.gov (United States)

    Li, Pengfei; Liu, Baoan; Ni, Yizhou; Liew, Kaiyang Kevin; Sze, Jeff; Chen, Shuo; Shen, Sheng

    2015-08-19

    An omnidirectional nanophotonic solar selective absorber is fabricated on a large scale using a template-stripping method. The nanopyramid nickel structure achieves an average absorptance of 95% at wavelengths below 1.3 μm and an emittance of less than 10% at wavelengths >2.5 μm.

  7. Large historical growth in global terrestrial gross primary production

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J. E.; Berry, J. A.; Seibt, U.; Smith, S. J.; Montzka, S. A.; Launois, T.; Belviso, S.; Bopp, L.; Laine, M.

    2017-04-05

    Growth in terrestrial gross primary production (GPP) may provide a feedback for climate change, but there is still strong disagreement on the extent to which biogeochemical processes may suppress this GPP growth at the ecosystem to continental scales. The consequent uncertainty in modeling of future carbon storage by the terrestrial biosphere constitutes one of the largest unknowns in global climate projections for the next century. Here we provide a global, measurement-based estimate of historical GPP growth using long-term atmospheric carbonyl sulfide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that relates changes in the COS concentration to changes in its sources and sinks, the largest of which is proportional to GPP. The COS history was most consistent with simulations that assume a large historical GPP growth. Carbon-climate models that assume little to no GPP growth predicted trajectories of COS concentration over the anthropogenic era that differ from those observed. Continued COS monitoring may be useful for detecting ongoing changes in GPP while extending the ice core record to glacial cycles could provide further opportunities to evaluate earth system models.

  8. Sodium-immersed self-cooled electromagnetic pump design and development of a large-scale coil for high temperature

    International Nuclear Information System (INIS)

    Oto, Akihiro; Naohara, Nobuyuki; Ishida, Masayoshi; Katsuki, Kenji; Kumazawa, Ryouji

    1995-01-01

    A sodium-immersed, self-cooled electromagnetic (EM) pump was recently studied as a prospective innovative technology to simplify a fast breeder reactor plant system. The EM pump, intended as a primary pump, was designed, and its structural concept and system performance were clarified. For the flow control method, a constant voltage/frequency method was preferable from the point of view of pump performance and efficiency. The insulation life was tested on a large-scale coil at high temperature as part of the development of a large-capacity EM pump. Mechanical and electrical damage were not observed, and the insulation performance was quite good. The insulation system could also be applied to large-scale coils.

  9. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    Science.gov (United States)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
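
    The quadrature building block named above is the Clenshaw-Curtis rule. A minimal sketch of the rule itself (closed-form weights for even n) follows, applied to a Fermi-Dirac-like factor as a stand-in for the spectral quantities SQ integrates; this is only an illustration of the quadrature, not the SQDFT code.

        import numpy as np

        def clenshaw_curtis(n):
            # nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]
            # (classic closed-form weights; n assumed even in this simple version)
            k = np.arange(n + 1)
            x = np.cos(np.pi * k / n)                 # Chebyshev extreme points
            w = np.zeros(n + 1)
            for i in k:
                s = 0.0
                for j in range(1, n // 2 + 1):
                    b = 1.0 if j == n // 2 else 2.0
                    s += b * np.cos(2.0 * j * np.pi * i / n) / (4.0 * j ** 2 - 1.0)
                c = 1.0 if i in (0, n) else 2.0
                w[i] = (c / n) * (1.0 - s)
            return x, w

        # quadrature of a smooth Fermi-Dirac-like factor, the kind of localized
        # bilinear form the SQ method approximates cell by cell
        x, w = clenshaw_curtis(32)
        beta, mu = 5.0, 0.2                           # inverse temperature, chemical potential
        fermi = 1.0 / (1.0 + np.exp(beta * (x - mu)))
        print(w @ fermi)                              # approximates the integral on [-1, 1]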

  10. Large-scale monitoring of effects of clothianidin-dressed OSR seeds on pollinating insects in Northern Germany: effects on large earth bumble bees (Bombus terrestris).

    Science.gov (United States)

    Sterk, Guido; Peters, Britta; Gao, Zhenglei; Zumkier, Ulrich

    2016-11-01

    The aim of this study was to investigate the effects of Elado®-dressed winter oilseed rape (OSR, 10 g clothianidin & 2 g beta-cyfluthrin/kg seed) on the development, reproduction and behaviour of large earth bumble bees (Bombus terrestris) as part of a large-scale monitoring field study in Northern Germany, where OSR is usually cultivated on 25-33% of the arable land. Both the reference and test sites comprised 65 km², in which no other crops attractive to pollinating insects were present. Six study locations were selected per site and 10 bumble bee hives were placed at each location. At each site, three locations were directly adjacent to OSR fields and three locations were situated 400 m from the nearest OSR field. The development of colonies was monitored from the beginning of OSR flowering in April until June 2014. Pollen from returning foragers was analysed for its composition. An average of 44% OSR pollen was found in the pollen loads of bumble bees, indicating that OSR was a major resource for the colonies. At the end of OSR flowering, hives were transferred to a nature reserve until the end of the study. Colony development in terms of hive weight and the number of workers showed a typical course, with no statistically significant differences between the sites. Reproductive output was comparatively high and not negatively affected by the exposure to treated OSR. In summary, Elado®-dressed OSR did not cause any detrimental effects on the development or reproduction of bumble bee colonies.

  11. Riparian vegetation in the alpine connectome: Terrestrial-aquatic and terrestrial-terrestrial interactions.

    Science.gov (United States)

    Zaharescu, Dragos G; Palanca-Soler, Antonio; Hooda, Peter S; Tanase, Catalin; Burghelea, Carmen I; Lester, Richard N

    2017-12-01

    Alpine regions are under increased attention worldwide for their critical role in early biogeochemical cycles, their high sensitivity to environmental change, and as repositories of natural resources of high quality. Their riparian ecosystems, at the interface between aquatic and terrestrial environments, play important geochemical functions in the watershed and are biodiversity hotspots, despite a harsh climate and topographic setting. With climate change rapidly affecting the alpine biome, we still lack a comprehensive understanding of the extent of interactions between riparian surface, lake and catchment environments. A total of 189 glacial-origin lakes were surveyed in the Central Pyrenees to test how key elements of the lake and terrestrial environments interact at different scales to shape riparian plant composition. Secondly, we evaluated how underlying ecotope features drive the formation of natural communities potentially sensitive to environmental change, and assessed their habitat distribution. At the macroscale, vegetation composition responded to the pan-climatic gradients altitude and latitude, which capture, in a narrow geographic area, the transition between large European climatic zones. Hydrodynamics was the main catchment-scale factor connecting riparian vegetation with major water fluxes, followed by topography and geomorphology. Lake-sediment Mg and Pb and water Mn and Fe contents reflected local influences from mafic bedrock and soil water saturation. Community analysis identified four keystone ecosystems: (i) damp ecotone, (ii) snow bed-silicate bedrock, (iii) wet heath, and (iv) calcareous substrate. These communities and their connections with ecotope elements could be at risk from a number of environmental change factors, including warmer seasons, snow-line and lowland-species advancement, increased nutrient/metal input and water-level fluctuations. The results imply important natural terrestrial-aquatic linkages in the riparian environment.

  12. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
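
    The published algorithms are more involved, but the scalability idea — compare only "closeby" requests by hashing pickups into grid cells before grouping — can be sketched as below. The cell size, group capacity, and greedy fill are illustrative assumptions.

        from collections import defaultdict

        def group_trips(requests, cell=0.01, max_group=4):
            # hash pickup coordinates into grid cells so only "closeby" requests
            # are ever compared, then greedily fill shared cabs cell by cell
            buckets = defaultdict(list)
            for rid, (lat, lon) in requests.items():
                buckets[(int(lat / cell), int(lon / cell))].append(rid)
            groups = []
            for members in buckets.values():
                for i in range(0, len(members), max_group):
                    groups.append(members[i:i + max_group])
            return groups

        reqs = {1: (55.672, 12.565), 2: (55.673, 12.566), 3: (55.601, 12.400)}
        print(group_trips(reqs))   # [[1, 2], [3]]: nearby pickups share a cab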

  13. Large-scale cauliflower-shaped hierarchical copper nanostructures for efficient photothermal conversion

    Science.gov (United States)

    Fan, Peixun; Wu, Hui; Zhong, Minlin; Zhang, Hongjun; Bai, Benfeng; Jin, Guofan

    2016-07-01

    Efficient solar energy harvesting and photothermal conversion have essential importance for many practical applications. Here, we present a laser-induced cauliflower-shaped hierarchical surface nanostructure on a copper surface, which exhibits extremely high omnidirectional absorption efficiency over a broad electromagnetic spectral range from the UV to the near-infrared region. The measured average hemispherical absorptance is as high as 98% within the wavelength range of 200-800 nm, and the angle dependent specular reflectance stays below 0.1% within the 0-60° incident angle. Such a structured copper surface can exhibit an apparent heating up effect under the sunlight illumination. In the experiment of evaporating water, the structured surface yields an overall photothermal conversion efficiency over 60% under an illuminating solar power density of ~1 kW m-2. The presented technology provides a cost-effective, reliable, and simple way for realizing broadband omnidirectional light absorptive metal surfaces for efficient solar energy harvesting and utilization, which is highly demanded in various light harvesting, anti-reflection, and photothermal conversion applications. Since the structure is directly formed by femtosecond laser writing, it is quite suitable for mass production and can be easily extended to a large surface area.

  14. Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.

    Science.gov (United States)

    Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong

    2017-10-11

    The development of silicon (Si) materials during past decades has boosted the prosperity of the modern semiconductor industry. In comparison with bulk-Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention in solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of energy conversion devices with high efficiency are prerequisites. This review focuses on the recent progress in large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.

  15. Large scale modulation of high frequency acoustic waves in periodic porous media.

    Science.gov (United States)

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

    This paper deals with the description of the modulation at large scale of high-frequency acoustic waves in gas-saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore an absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large-scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In the second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high-frequency waves. The significant differences between the modulation of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.

  16. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
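
    The final step described above — turning a correlated Gaussian field into a binary raining/not-raining map with a prescribed occupation rate — can be sketched as follows. The isotropic spectrum and parameter values are simplifying assumptions; the paper uses an anisotropic covariance and radar-derived occupation statistics.

        import numpy as np

        def binary_rain_mask(n=256, corr_len=20.0, occupation=0.15, seed=0):
            # correlated Gaussian field via FFT spectral filtering, thresholded
            # so that exactly `occupation` of the domain is raining
            rng = np.random.default_rng(seed)
            white = rng.standard_normal((n, n))
            fx = np.fft.fftfreq(n)
            kx, ky = np.meshgrid(fx, fx)
            spectrum = np.exp(-0.5 * corr_len ** 2 * (2 * np.pi) ** 2 * (kx ** 2 + ky ** 2))
            field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(spectrum)).real
            threshold = np.quantile(field, 1.0 - occupation)
            return field > threshold

        mask = binary_rain_mask()
        print(mask.mean())   # ~0.15: fraction of the domain where it is raining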

  17. Top-down constraints on disturbance dynamics in the terrestrial carbon cycle: effects at global and regional scales

    NARCIS (Netherlands)

    Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.

    2014-01-01

    Large uncertainties preside over terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from regional to global

  18. Terrestrial water fluxes dominated by transpiration.

    Science.gov (United States)

    Jasechko, Scott; Sharp, Zachary D; Gibson, John J; Birks, S Jean; Yi, Yi; Fawcett, Peter J

    2013-04-18

    Renewable fresh water over continents has input from precipitation and losses to the atmosphere through evaporation and transpiration. Global-scale estimates of transpiration from climate models are poorly constrained owing to large uncertainties in stomatal conductance and the lack of catchment-scale measurements required for model calibration, resulting in a range of predictions spanning 20 to 65 per cent of total terrestrial evapotranspiration (14,000 to 41,000 km³ per year) (refs 1, 2, 3, 4, 5). Here we use the distinct isotope effects of transpiration and evaporation to show that transpiration is by far the largest water flux from Earth's continents, representing 80 to 90 per cent of terrestrial evapotranspiration. On the basis of our analysis of a global data set of large lakes and rivers, we conclude that transpiration recycles 62,000 ± 8,000 km³ of water per year to the atmosphere, using half of all solar energy absorbed by land surfaces in the process. We also calculate CO2 uptake by terrestrial vegetation by connecting transpiration losses to carbon assimilation using water-use efficiency ratios of plants, and show the global gross primary productivity to be 129 ± 32 gigatonnes of carbon per year, which agrees, within the uncertainty, with previous estimates. The dominance of transpiration water fluxes in continental evapotranspiration suggests that, from the point of view of water resource forecasting, climate model development should prioritize improvements in simulations of biological fluxes rather than physical (evaporation) fluxes.
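
    The GPP estimate above follows from multiplying the transpiration flux by plant water-use efficiency. A back-of-envelope version (the WUE value of 2 g C per kg H2O is an illustrative assumption in a plausible range, not the paper's fitted value):

        # carbon uptake = transpiration flux x water-use efficiency (illustrative WUE)
        transpiration_km3 = 62_000                    # km^3 of water transpired per year
        water_flux_kg = transpiration_km3 * 1e12      # 1 km^3 of water ~ 1e12 kg
        wue_gC_per_kgH2O = 2.0                        # assumed mean WUE of plants
        gpp_PgC = water_flux_kg * wue_gC_per_kgH2O / 1e15
        print(f"GPP ~ {gpp_PgC:.0f} Pg C per year")   # ~124, same order as 129 +/- 32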

  19. High Quantum Efficiency OLED Lighting Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shiang, Joseph [General Electric (GE) Global Research, Fairfield, CT (United States)

    2011-09-30

    The overall goal of the program was to apply improvements in light-outcoupling technology to a practical large-area plastic luminaire, and thus enable the product vision of an extremely thin form factor, high-efficiency, large-area light source. The target substrate was plastic, and the baseline device was operating at 35 LPW at the start of the program. The program target was a >2x improvement in LPW efficacy, with a relatively high overall light output of 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution process on plastic substrates, the program made substantial progress. A small-molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from laboratory-scale glass to large-area plastic substrates. By transitioning to a small-molecule-based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement in some cases is even larger, ~30%, so there is considerable room for optimism in improving the net light-outcoupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire-based hard coat structure are fully developed. The wet coating processes were successfully scaled to large-area plastic substrates and resulted in the construction of a 900-lumen luminaire device.

  20. Scaling of olfactory antennae of the terrestrial hermit crabs Coenobita rugosus and Coenobita perlatus during ontogeny

    Directory of Open Access Journals (Sweden)

    Lindsay D. Waldrop

    2014-08-01

    Full Text Available Although many lineages of terrestrial crustaceans have poor olfactory capabilities, crabs in the family Coenobitidae, including the terrestrial hermit crabs in the genus Coenobita, are able to locate food and water using olfactory antennae (antennules) to capture odors from the surrounding air. Terrestrial hermit crabs begin their lives as small marine larvae and must find a suitable place to undergo metamorphosis into a juvenile form, which initiates their transition to land. Juveniles increase in size by more than an order of magnitude to reach adult size. Since odor capture is a process heavily dependent on the size and speed of the antennules and the physical properties of the fluid, both the transition from water to air and the large increase in size during ontogeny could impact odor capture. In this study, we examine two species of terrestrial hermit crabs, Coenobita perlatus H. Milne-Edwards and Coenobita rugosus H. Milne-Edwards, to determine how antennule morphometrics and flicking kinematics change relative to body size during ontogeny, and how this scaling relationship could impact odor capture, using a simple model of mass transport in flow. Many features of the antennules, including the chemosensory sensilla, scaled allometrically with carapace width, increasing more slowly than expected under isometry and resulting in relatively larger antennules on juvenile animals. Flicking speed scaled as expected under isometry. Our mass-transport model showed that allometric scaling of antennule morphometrics and kinematics leads to thinner boundary layers of attached fluid around the antennule during flicking and higher odorant capture rates compared to antennules scaling isometrically. There were no significant differences in morphometric or kinematic measurements between the two species.

  1. Intercomparison of terrestrial carbon fluxes and carbon use efficiency simulated by CMIP5 Earth System Models

    Science.gov (United States)

    Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun

    2017-12-01

    This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
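
    Carbon use efficiency in comparisons like this is simply the ratio NPP/GPP, evaluated per grid cell and summarized by plant functional type. A minimal sketch with synthetic arrays standing in for the model and MODIS fields:

        import numpy as np

        rng = np.random.default_rng(0)
        gpp = rng.uniform(0.5, 3.0, size=(180, 360))        # kg C m-2 yr-1 (stand-in)
        npp = gpp * rng.uniform(0.3, 0.6, size=gpp.shape)   # stand-in model output
        pft = rng.integers(0, 4, size=gpp.shape)            # 4 toy PFT classes

        cue = np.where(gpp > 0, npp / gpp, np.nan)          # CUE = NPP / GPP
        for p in range(4):
            print(f"PFT {p}: mean CUE = {np.nanmean(cue[pft == p]):.2f}")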

  2. Energy-Efficient Optimal Power Allocation in Integrated Wireless Sensor and Cognitive Satellite Terrestrial Networks.

    Science.gov (United States)

    Shi, Shengchao; Li, Guangxia; An, Kang; Gao, Bin; Zheng, Gan

    2017-09-04

    This paper proposes novel satellite-based wireless sensor networks (WSNs), which integrate the WSN with the cognitive satellite terrestrial network. Having the ability to provide seamless network access and alleviate the spectrum scarcity, cognitive satellite terrestrial networks are considered as a promising candidate for future wireless networks with emerging requirements of ubiquitous broadband applications and increasing demand for spectral resources. With the emerging environmental and energy cost concerns in communication systems, explicit concerns on energy efficient resource allocation in satellite networks have also recently received considerable attention. In this regard, this paper proposes energy-efficient optimal power allocation schemes in the cognitive satellite terrestrial networks for non-real-time and real-time applications, respectively, which maximize the energy efficiency (EE) of the cognitive satellite user while guaranteeing the interference at the primary terrestrial user below an acceptable level. Specifically, average interference power (AIP) constraint is employed to protect the communication quality of the primary terrestrial user while average transmit power (ATP) or peak transmit power (PTP) constraint is adopted to regulate the transmit power of the satellite user. Since the energy-efficient power allocation optimization problem belongs to the nonlinear concave fractional programming problem, we solve it by combining Dinkelbach's method with Lagrange duality method. Simulation results demonstrate that the fading severity of the terrestrial interference link is favorable to the satellite user who can achieve EE gain under the ATP constraint comparing to the PTP constraint.
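
    Dinkelbach's method, named above, turns the fractional energy-efficiency objective into a sequence of concave subproblems. A single-link toy version under a peak transmit power (PTP) constraint only — channel gain, noise, and circuit power are made-up numbers, and the paper's AIP/ATP constraints and Lagrangian machinery are omitted:

        import numpy as np

        def dinkelbach_ee(g=2.0, sigma2=1.0, Pc=0.5, Pmax=4.0, tol=1e-9):
            # maximize EE(p) = log2(1 + g*p/sigma2) / (p + Pc), 0 <= p <= Pmax:
            # each Dinkelbach step solves the concave subproblem
            # max_p R(p) - q*(p + Pc) in closed form, then updates q = EE(p*)
            rate = lambda p: np.log2(1.0 + g * p / sigma2)
            q, p = 0.0, Pmax
            for _ in range(100):
                if q > 0:   # stationary point of R(p) - q*p, clipped to the box
                    p = float(np.clip(1.0 / (q * np.log(2)) - sigma2 / g, 0.0, Pmax))
                F = rate(p) - q * (p + Pc)        # Dinkelbach residual -> 0 at optimum
                q = rate(p) / (p + Pc)
                if abs(F) < tol:
                    break
            return p, q

        p_opt, ee_opt = dinkelbach_ee()
        print(f"p* = {p_opt:.3f}, EE = {ee_opt:.3f} (toy units)")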

  3. The fragmentation of Pangaea and Mesozoic terrestrial vertebrate biodiversity.

    Science.gov (United States)

    Vavrek, Matthew J

    2016-09-01

    During the Mesozoic (242-66 million years ago), terrestrial regions underwent a massive shift in their size, position and connectivity. At the beginning of the era, the land masses were joined into a single supercontinent called Pangaea. However, by the end of the Mesozoic, terrestrial regions had become highly fragmented, both owing to the drifting apart of the continental plates and the extremely high sea levels that flooded and divided many regions. How terrestrial biodiversity was affected by this fragmentation and large-scale flooding of the Earth's landmasses is uncertain. Based on a model using the species-area relationship (SAR), terrestrial vertebrate biodiversity would be expected to nearly double through the Mesozoic owing to continental fragmentation, despite a decrease of 24% in total terrestrial area. Previous studies of Mesozoic vertebrates have generally found increases in terrestrial diversity towards the end of the era, although these increases are often attributed to intrinsic or climatic factors. Instead, continental fragmentation over this time may largely explain any observed increase in terrestrial biodiversity. This study demonstrates the importance that non-intrinsic effects can have on the taxonomic success of a group, and the importance of geography to understanding past biodiversity.
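
    The species-area argument can be made concrete with the SAR power law S = cA^z. A toy calculation follows; the exponent z, the fragment count, and the assumption of fully distinct faunas per fragment are illustrative choices, and the paper's model treats faunal overlap more carefully.

        # Species-area relationship S = c * A**z: fragmenting one landmass into
        # isolated pieces raises summed richness because z < 1
        z = 0.25                               # typical SAR exponent (illustrative)
        A = 1.0                                # normalized Pangaea land area
        S_pangaea = A ** z
        k, shrink = 3, 0.76                    # fragments; 24% total area loss
        S_fragments = k * ((shrink * A) / k) ** z
        print(f"diversity ratio: {S_fragments / S_pangaea:.2f}")  # ~2: a near doubling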

  4. The decadal state of the terrestrial carbon cycle : Global retrievals of terrestrial carbon allocation, pools, and residence times

    NARCIS (Netherlands)

    Bloom, A Anthony; Exbrayat, Jean-François; van der Velde, Ivar R; Feng, Liang; Williams, Mathew

    2016-01-01

    The terrestrial carbon cycle is currently the least constrained component of the global carbon budget. Large uncertainties stem from a poor understanding of plant carbon allocation, stocks, residence times, and carbon use efficiency. Imposing observational constraints on the terrestrial carbon cycle

  5. Inferring Large-Scale Terrestrial Water Storage Through GRACE and GPS Data Fusion in Cloud Computing Environments

    Science.gov (United States)

    Rude, C. M.; Li, J. D.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Surface subsidence due to depletion of groundwater can lead to permanent compaction of aquifers and damaged infrastructure. However, studies of such effects on a large scale are challenging and compute intensive because they involve fusing a variety of data sets beyond direct measurements from groundwater wells, such as gravity change measurements from the Gravity Recovery and Climate Experiment (GRACE) or surface displacements measured by GPS receivers. Our work therefore leverages Amazon cloud computing to enable these types of analyses spanning the entire continental US. Changes in groundwater storage are inferred from surface displacements measured by GPS receivers stationed throughout the country. Receivers located on bedrock are anti-correlated with changes in water levels from elastic deformation due to loading, while stations on aquifers correlate with groundwater changes due to poroelastic expansion and compaction. Correlating linearly detrended equivalent water thickness measurements from GRACE with linearly detrended and Kalman filtered vertical displacements of GPS stations located throughout the United States helps compensate for the spatial and temporal limitations of GRACE. Our results show that the majority of GPS stations are negatively correlated with GRACE in a statistically relevant way, as most GPS stations are located on bedrock in order to provide stable reference locations and measure geophysical processes such as tectonic deformations. Additionally, stations located on the Central Valley California aquifer show statistically significant positive correlations. Through the identification of positive and negative correlations, deformation phenomena can be classified as loading or poroelastic expansion due to changes in groundwater. This method facilitates further studies of terrestrial water storage on a global scale. This work is supported by NASA AIST-NNX15AG84G (PI: V. Pankratius) and Amazon.
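
    The statistical core described above is a correlation between linearly detrended GRACE water-thickness and GPS vertical-displacement series. A synthetic sketch (the series are fabricated; the study additionally Kalman-filters the GPS data):

        import numpy as np

        def detrend(y):
            # remove a least-squares linear trend from a series
            t = np.arange(len(y))
            slope, intercept = np.polyfit(t, y, 1)
            return y - (slope * t + intercept)

        months = np.arange(120)
        water = 5 * np.sin(2 * np.pi * months / 12) + 0.10 * months      # GRACE EWT (cm)
        gps_up = -2 * np.sin(2 * np.pi * months / 12) + 0.05 * months \
                 + 0.3 * np.random.default_rng(1).standard_normal(120)   # GPS up (mm)

        r = np.corrcoef(detrend(water), detrend(gps_up))[0, 1]
        print(f"correlation = {r:.2f}")   # strongly negative: elastic loading at a bedrock site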

  6. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented to heat exchange network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors to describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, and is named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.

  7. Large scale mass redistribution and surface displacement from GRACE and SLR

    Science.gov (United States)

    Cheng, M.; Ries, J. C.; Tapley, B. D.

    2012-12-01

    Mass transport between the atmosphere, ocean and solid Earth results in temporal variations in the Earth's gravity field and loading-induced deformation of the Earth. Recent space-borne observations, such as the GRACE mission, are providing extremely high-precision temporal variations of the gravity field. Results from 10 years of GRACE data have shown significant annual variations of large-scale vertical and horizontal displacements over the Amazon, the Himalayan region and South Asia, Africa, and Russia, with amplitudes of a few mm. Improved understanding from monitoring and modeling of large-scale mass redistribution and the Earth's response is critical for all studies in the geosciences, in particular for the determination of the Terrestrial Reference System (TRS), including geocenter motion. This paper reports results for the observed seasonal variations in the 3-dimensional surface displacements of SLR and GPS tracking stations and compares them with predictions from time series of GRACE monthly gravity solutions.

  8. Top-down constraints on disturbance dynamics in the terrestrial carbon cycle: effects at global and regional scales

    Science.gov (United States)

    Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.

    2014-12-01

    Large uncertainties preside over terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from regional to global scale, we apply a Markov Chain Monte Carlo based model-data-fusion approach into the CArbon DAta-MOdel fraMework (CARDAMOM). We assimilate MODIS LAI and burned area, plant-trait data, and use the Harmonized World Soil Database (HWSD) and maps of above ground biomass as prior knowledge for initial conditions. We optimize model parameters based on (a) globally spanning observations and (b) ecological and dynamic constraints that force single parameter values and parameter inter-dependencies to be representative of real world processes. We determine the spatial and temporal dynamics of major terrestrial C fluxes and model parameter values on a global scale (GPP = 123 ± 8 Pg C yr⁻¹ and NEE = −1.8 ± 2.7 Pg C yr⁻¹). We further show that the incorporation of disturbance fluxes, and accounting for their instantaneous or delayed effect, is of critical importance in constraining global C cycle dynamics, particularly in the tropics. In a higher resolution case study centred on the Amazon Basin we show how fires not only trigger large instantaneous emissions of burned matter, but also how they are responsible for a sustained reduction of up to 50% in plant uptake following the depletion of biomass stocks. The combination of these two fire-induced effects leads to a 1 g C m⁻² d⁻¹ reduction in the strength of the net terrestrial carbon sink. Through our simulations at regional and global scale, we advocate the need to assimilate disturbance metrics in global terrestrial carbon cycle models to bridge the gap between globally spanning terrestrial carbon cycle data and the full dynamics of the ecosystem C cycle. Disturbances are especially important because their quick occurrence may have
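
    The engine behind this kind of model-data fusion is an MCMC sampler over model parameters. A minimal random-walk Metropolis sketch, fitted to a one-pool toy carbon model — the pool model, priors, and observation error are invented for illustration and are far simpler than CARDAMOM's:

        import numpy as np

        def metropolis(log_post, theta0, steps=20000, scale=0.05, seed=0):
            # minimal random-walk Metropolis sampler (the MCMC core of
            # model-data fusion): returns the chain of parameter vectors
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, float)
            lp = log_post(theta)
            chain = np.empty((steps, theta.size))
            for i in range(steps):
                prop = theta + scale * rng.standard_normal(theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        # toy inversion: recover (turnover k, carbon use efficiency cue) of a
        # one-pool model dC/dt = cue*GPP - k*C from noisy pseudo-observations
        gpp, t = 3.0, np.arange(10.0)
        true_k, true_cue = 0.25, 0.5
        C_obs = true_cue * gpp / true_k * (1 - np.exp(-true_k * t)) \
                + 0.2 * np.random.default_rng(2).standard_normal(t.size)

        def log_post(theta):
            k, cue = theta
            if not (0 < k < 2 and 0 < cue < 1):           # bounded flat priors
                return -np.inf
            model = cue * gpp / k * (1 - np.exp(-k * t))
            return -0.5 * np.sum((model - C_obs) ** 2 / 0.2 ** 2)

        chain = metropolis(log_post, [0.5, 0.3])
        print(chain[5000:].mean(axis=0))                  # near (0.25, 0.5)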

  9. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    Science.gov (United States)

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial to produce high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be prevented by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, where the thermal convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous, and convection-induced-defects-free perovskite films are obtained on an area of 12 cm², which enables a power conversion efficiency (PCE) of 15.5% on a solar cell with an area of 5 cm². This is the highest efficiency at this large cell area. A PCE of 15.3% is also obtained on a flexible perovskite solar cell deposited on the polyethylene terephthalate substrate owing to the advantage of presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices.

  10. Government regulation and associated innovations in building energy-efficiency supervisory systems for large-scale public buildings in a market economy

    International Nuclear Information System (INIS)

    Dai Xuezhi; Wu Yong; Di Yanqiang; Li Qiaoyan

    2009-01-01

    The supervision of energy efficiency in government office buildings and large-scale public buildings is the main embodiment of government implementation of public administration in the fields of resource saving and environmental protection. Aiming to improve the current lack of government administration in building energy efficiency, this paper proposes the concept of 'change and redesign of governmental supervision in building energy efficiency', repositioning the role of government supervision. Based on this concept and related theories in regulation economics and modern management, the paper analyzes the actions and functions of governments at all levels in executing the supervisory system for building energy efficiency in government office buildings and large-scale public buildings. It also establishes the importance of government supervision in the energy-efficiency system. Finally, it analyzes the interaction mechanisms, game-theoretic in character, between government and the owners of different building types, and between government and energy-efficiency service institutions. Some measures for achieving a community of common benefit in the implementation of the building energy-efficiency supervisory system are also presented.

  11. Government regulation and associated innovations in building energy-efficiency supervisory systems for large-scale public buildings in a market economy

    Energy Technology Data Exchange (ETDEWEB)

    Dai Xuezhi [China Academy of Building Research, Beijing 100013 (China)], E-mail: daixz9999@126.com; Wu Yong [Ministry of Housing and Urban-Rural Development of the People's Republic of China, Beijing 100835 (China); Di Yanqiang [China Academy of Building Research, Beijing 100013 (China); Li Qiaoyan [Department of Building, School of Design and Environment, National University of Singapore (Singapore)

    2009-06-15

    The supervision of energy efficiency in government office buildings and large-scale public buildings is the main embodiment of government implementation of public administration in the fields of resource saving and environmental protection. Aiming to improve the current lack of government administration in building energy efficiency, this paper proposes the concept of 'change and redesign of governmental supervision in building energy efficiency', repositioning the role of government supervision. Based on this concept and related theories in regulation economics and modern management, the paper analyzes the actions and functions of governments at all levels in executing the supervisory system for building energy efficiency in government office buildings and large-scale public buildings. It also establishes the importance of government supervision in the energy-efficiency system. Finally, it analyzes the interaction mechanisms, game-theoretic in character, between government and the owners of different building types, and between government and energy-efficiency service institutions. Some measures for achieving a community of common benefit in the implementation of the building energy-efficiency supervisory system are also presented.

  12. Government regulation and associated innovations in building energy-efficiency supervisory systems for large-scale public buildings in a market economy

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Xuezhi; Di, Yanqiang [China Academy of Building Research, Beijing 100013 (China); Wu, Yong [Ministry of Housing and Urban-Rural Development of the People's Republic of China, Beijing 100835 (China); Li, Qiaoyan [Department of Building, School of Design and Environment, National University of Singapore (Singapore)

    2009-06-15

    The supervision of energy efficiency in government office buildings and large-scale public buildings is the main embodiment of government implementation of public administration in the fields of resource saving and environmental protection. Aiming to improve the current lack of government administration in building energy efficiency, this paper proposes the concept of 'change and redesign of governmental supervision in building energy efficiency', repositioning the role of government supervision. Based on this concept and related theories in regulation economics and modern management, the paper analyzes the actions and functions of governments at all levels in executing the supervisory system for building energy efficiency in government office buildings and large-scale public buildings. It also establishes the importance of government supervision in the energy-efficiency system. Finally, it analyzes the interaction mechanisms, game-theoretic in character, between government and the owners of different building types, and between government and energy-efficiency service institutions. Some measures for achieving a community of common benefit in the implementation of the building energy-efficiency supervisory system are also presented. (author)

  13. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    Science.gov (United States)

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-17

    An efficient and selective catalyst is urgently needed for carbon dioxide electroreduction, and silver is one of the promising candidates at affordable cost. Here we fabricated large-scale, vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts, using a simple nanomolding method generally considered infeasible for crystalline metals. A large enhancement of current density and selectivity for CO at moderate potentials was achieved. The CO partial current density (j_CO) of an Ag nanowire array 200 nm in diameter was more than 2500 times that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and a higher intrinsic activity compared with polycrystalline Ag foil. The high intrinsic activity stems from the larger number of low-coordinated sites on the nanowires, which better stabilize the CO2 intermediate. In addition, the impact of surface morphology, which limits mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters is also discussed.
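
    As a back-of-the-envelope illustration of the figures of merit quoted above (partial current density and Faradaic efficiency), the short sketch below computes them from hypothetical chronoamperometry readings; all numbers are invented for illustration and are not data from the study.

```python
# Toy calculation of CO partial current density and Faradaic efficiency
# for a CO2 electroreduction experiment. All numbers are hypothetical.

F = 96485.0          # Faraday constant, C/mol
N_ELECTRONS_CO = 2   # electrons transferred per CO molecule

def faradaic_efficiency_co(n_co_mol, total_charge_c):
    """Fraction of the passed charge that went into making CO."""
    return N_ELECTRONS_CO * F * n_co_mol / total_charge_c

# Hypothetical measurement over a 100 s run on a 1 cm^2 electrode:
total_current = 0.010          # A (10 mA), assumed constant
duration = 100.0               # s
n_co = 4.7e-6                  # mol CO detected by gas chromatography

fe_co = faradaic_efficiency_co(n_co, total_current * duration)
j_total = total_current / 1.0  # A/cm^2 for a 1 cm^2 electrode
j_co = fe_co * j_total         # CO partial current density

print(f"FE(CO) = {fe_co:.1%}, j_CO = {j_co * 1000:.2f} mA/cm^2")
```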

  14. Reproducible, large-scale production of thallium-based high-temperature superconductors

    International Nuclear Information System (INIS)

    Gay, R.L.; Stelman, D.; Newcomb, J.C.; Grantham, L.F.; Schnittgrund, G.D.

    1990-01-01

    This paper reports on the development of a large-scale spray-calcination technique generic to the preparation of ceramic high-temperature superconductor (HTSC) powders. A key advantage of the technique is that it produces uniformly mixed metal oxides on a fine scale. Production of both yttrium- and thallium-based HTSCs has been demonstrated using this technique. In the spray calciner, solutions of the desired composition are atomized as a fine mist into a hot gas. Evaporation and calcination are instantaneous, yielding an extremely fine, uniform oxide powder. The calciner is 76 cm in diameter and can produce metal oxide powder at relatively high rates (approximately 100 g/h) without contamination.

  15. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability. Development of more time-efficient and airborne geophysical data acquisition...... platforms (e.g. SkyTEM) have made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored

  16. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale or scales used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points as leaf or wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
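
    The paper's exact feature definitions are not reproduced here, but geometry-based leaf/wood separation typically builds on covariance (eigenvalue) features of each point's neighborhood at a chosen scale. The sketch below is a minimal illustration assuming that kind of pipeline: it computes linearity/planarity/scattering features at one search radius; running it at several radii and concatenating the results would give a multi-scale feature set.

```python
import numpy as np
from scipy.spatial import cKDTree

def salient_features(points, radius):
    """Eigenvalue-based geometric features (linearity, planarity,
    scattering) for each point, at a single neighborhood scale."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:            # too few neighbors: leave zeros
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Eigenvalues of the local covariance, ascending: l3 <= l2 <= l1
        l3, l2, l1 = np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx))
        s = l1 + 1e-12
        feats[i] = [(l1 - l2) / s,   # linearity  (high for branches)
                    (l2 - l3) / s,   # planarity
                    l3 / s]          # scattering (high for foliage)
    return feats

# Multi-scale variant: stack features computed at several radii.
cloud = np.random.rand(1000, 3)  # stand-in for a TLS point cloud
multi = np.hstack([salient_features(cloud, r) for r in (0.05, 0.1, 0.2)])
```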

  17. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructures, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run in highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by a high degree of volatility of their components and by the need to provide efficient service management and to handle large amounts of data efficiently. This paper describes some of the areas for which adaptation emerges as a key feature, namely the management of computational Grids, the self-management of desktop Grid platforms, and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  18. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    Science.gov (United States)

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well suited for large-scale calculations. The approach is particularly adapted to setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
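
    The CheSS library itself is not reproduced here; as a language-agnostic illustration of the underlying idea, the sketch below approximates a matrix function f(H) applied to a vector via the standard three-term Chebyshev recursion, assuming the spectrum of H has been rescaled into [-1, 1]. Only sparse matrix-vector products are needed, which is what makes this family of methods scale with the number of nonzero entries. The rescaling, test function, and expansion degree are assumptions of the example.

```python
import numpy as np
import scipy.sparse as sp

def chebyshev_apply(f, H, v, degree):
    """Approximate f(H) @ v with a Chebyshev expansion, assuming the
    spectrum of H lies in [-1, 1]. Only sparse mat-vecs are needed."""
    coeffs = np.polynomial.chebyshev.Chebyshev.interpolate(f, degree).coef
    t_prev, t_curr = v, H @ v                 # T0(H) v and T1(H) v
    result = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:                      # T_{k+1} = 2 H T_k - T_{k-1}
        t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
        result += c * t_curr
    return result

# Example: a smooth Fermi-like step applied to a random sparse matrix.
H = sp.random(200, 200, density=0.05, format="csr", random_state=0)
H = 0.5 * (H + H.T)                         # symmetrize
# Crude Gershgorin rescale so the spectrum fits in [-1, 1].
H = H / float(np.abs(H).sum(axis=1).max())
v = np.ones(200)
fermi = lambda e: 1.0 / (1.0 + np.exp(20.0 * (e - 0.1)))
y = chebyshev_apply(fermi, H, v, degree=60)
```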

  19. Electrical efficiency and renewable energy - Economical alternatives to large-scale power generation

    International Nuclear Information System (INIS)

    Oettli, B.; Hammer, S.; Moret, F.; Iten, R.; Nordmann, T.

    2010-05-01

    This final report for WWF Switzerland, Greenpeace Switzerland, the Swiss Energy Foundation SES, Pro Natura and the Swiss cantons of Basel City and Geneva examines the energy-relevant effects of the proposals made by Swiss electricity utilities for large-scale power generation. These proposals are compared with a strategy based on investments in energy efficiency and the use of renewable sources of energy. The effects of both scenarios on the environment and the risks involved are discussed, as are the investments required and the associated effects on the Swiss national economy. For the efficiency-and-renewables scenario, two implementation variants are discussed: domestic investment and production on the one hand, and foreign production options and/or imports from foreign countries on the other. The methods used in the study are introduced and discussed. Investment and cost considerations, earnings and effects on employment are also reviewed. The report is completed by an extensive appendix which, amongst other things, includes reviews of potentials, cost estimates and a discussion of 'smart grids'.

  20. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, driven particularly by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase to a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership of these plants by increasing energy efficiency with innovative and simple process designs that are optimized in terms of capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, the dimensioning of key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  1. The decadal state of the terrestrial carbon cycle

    NARCIS (Netherlands)

    Velde, van der I.R.; Bloom, J.; Exbrayat, J.; Feng, L.; Williams, M.

    2016-01-01

    The terrestrial carbon cycle is currently the least constrained component of the global carbon budget. Large uncertainties stem from a poor understanding of plant carbon allocation, stocks, residence times, and carbon use efficiency. Imposing observational constraints on the terrestrial carbon cycle

  2. Large-scale, high-definition Ground Penetrating Radar prospection in archaeology

    Science.gov (United States)

    Trinks, I.; Kucera, M.; Hinterleitner, A.; Löcker, K.; Nau, E.; Neubauer, W.; Zitz, T.

    2012-04-01

    The future demands on professional archaeological prospection will include the ability to cover large areas in a time- and cost-efficient manner with very high spatial resolution and accuracy. The objective of the Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology (LBI ArchPro), established in Vienna in 2010, in collaboration with its eight European partner organisations, is the advancement of state-of-the-art archaeological sciences. The application and further development of remote sensing, geophysical prospection and virtual reality applications, as well as of novel integrated interpretation approaches dedicated to non-invasive spatial archaeology combining near-surface prospection methods with advanced computer science, is crucial for modern archaeology. Within the institute's research programme, different areas for distinct case studies in Austria, Germany, Norway, Sweden and the UK have been selected as the basis for developing and testing new concepts for efficient and universally applicable tools for spatial, non-invasive archaeology. In terms of geophysical prospection, the investigation of entire archaeological landscapes for the exploration and protection of Europe's buried cultural heritage requires new measurement devices that are fast, accurate and precise. The further development of motorized, multichannel survey systems and advanced navigation solutions is therefore required. The use of motorized measurement devices for archaeological prospection entails several technological and methodological challenges. The latest multichannel Ground Penetrating Radar (GPR) arrays, mounted in front of or towed behind motorized survey vehicles, permit large-scale GPR prospection surveys with unprecedented spatial resolution. In particular, the motorized 16-channel 400 MHz MALÅ Imaging Radar Array (MIRA) used by the LBI ArchPro, in combination with the latest automatic data positioning and navigation solutions, permits reliable, high-resolution coverage of large areas.

  3. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    Science.gov (United States)

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing the current state of the terrestrial carbon cycle and for forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer-term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2) structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid-scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  4. Toward efficient task assignment and motion planning for large-scale underwater missions

    Directory of Open Access Journals (Sweden)

    Somaiyeh MahmoudZadeh

    2016-10-01

    An autonomous underwater vehicle needs to possess a certain degree of autonomy to fulfil the objectives of any particular underwater mission successfully and to ensure its safety at all stages of the mission in a large-scale operating field. In this article, a novel combinatorial conflict-free task assignment strategy, consisting of an interactive engagement of a local path planner and an adaptive global route planner, is introduced. The method takes advantage of the heuristic search power of the particle swarm optimization algorithm to address the discrete nature of the routing-task assignment problem and the complexity of the NP-hard path planning problem. The proposed hybrid method is highly efficient as a consequence of its reactive guidance framework, which guarantees successful completion of missions, particularly in cluttered environments. To examine the performance of the method in terms of mission productivity, mission time management, and vehicle safety, a series of simulation studies was undertaken. The simulation results show that the proposed method is reliable and robust, particularly in dealing with uncertainties, and that it can significantly enhance a vehicle's level of autonomy by relying on its reactive nature and its capability of providing fast, feasible solutions.
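
    The abstract leans on particle swarm optimization (PSO) as its search engine. The snippet below is a minimal generic PSO loop, not the paper's routing formulation (whose encoding of discrete task assignments is not given here), showing the standard velocity/position update that such planners build on; all hyperparameters are assumed defaults.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer for a continuous objective."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p**2)), dim=4)
```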

  5. A novel highly efficient grating coupler with large filling factor used for optoelectronic integration

    International Nuclear Information System (INIS)

    Zhou Liang; Li Zhi-Yong; Zhu Yu; Li Yun-Tao; Yu Yu-De; Yu Jin-Zhong; Fan Zhong-Cao; Han Wei-Hua

    2010-01-01

    A novel, highly efficient grating coupler with a large filling factor and deep etching is proposed in silicon-on-insulator for near-vertical coupling between a rib waveguide and an optical fibre. The deep slots acting as highly efficient scattering centres are analysed and optimized. A coupling efficiency as high as 60% at the telecom wavelength of 1550 nm and a 3-dB bandwidth of 61 nm are predicted by simulation. A peak coupling efficiency of 42.1% at a wavelength of 1546 nm and a 3-dB bandwidth of 37.6 nm are obtained experimentally. (classical areas of phenomenology)

  6. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    Science.gov (United States)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping of large areas abroad or with large numbers of images. In this paper, considering the geometric characteristics of optical satellite imagery, and building on the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for super-large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-deficiency problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
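
    The block adjustment itself is beyond a short example, but the ADMM machinery the abstract invokes follows a standard three-step iteration. As a hedged, generic illustration (an L1-regularized least-squares problem, not the paper's RFM adjustment), the sketch below shows the x-update / z-update / dual-update structure that makes ADMM easy to parallelize.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """Generic ADMM for min 0.5||Ax-b||^2 + lam*||z||_1 s.t. x = z.
    Illustrates the x-update / z-update / dual-update pattern."""
    m, n = A.shape
    z, u = np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # Factor once; each x-update is then a cheap triangular solve.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        # x-update: quadratic subproblem (least squares + proximity to z-u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft thresholding (proximal operator of the L1 term)
        t = x + u
        z = np.sign(t) * np.maximum(np.abs(t) - lam / rho, 0.0)
        # dual update: accumulate the constraint residual x - z
        u = u + x - z
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = admm_lasso(A, b)
```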

  7. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time flood conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared with those of MIKE21 show the strong performance of the proposed model.
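
    The paper's solver is a 2D unstructured Godunov-type scheme; as a much-reduced illustration of the same family of methods, the sketch below advances the 1D shallow water equations with a first-order finite volume scheme using the Rusanov (local Lax-Friedrichs) flux and a crude wet/dry clamp. It is a didactic toy under those assumptions, not the model described in the abstract.

```python
import numpy as np

g, H_DRY = 9.81, 1e-6   # gravity; depth threshold for "dry" cells

def flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    u = np.where(h > H_DRY, hu / np.maximum(h, H_DRY), 0.0)
    return np.array([hu, hu * u + 0.5 * g * h * h])

def step(h, hu, dx, cfl=0.4):
    """One first-order Godunov-type update with the Rusanov flux."""
    u = np.where(h > H_DRY, hu / np.maximum(h, H_DRY), 0.0)
    c = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))
    dt = cfl * dx / max(c.max(), 1e-12)

    # Interface fluxes between cells i and i+1 (Rusanov / local LF)
    a = np.maximum(c[:-1], c[1:])
    fl, fr = flux(h[:-1], hu[:-1]), flux(h[1:], hu[1:])
    f_iface = 0.5 * (fl + fr) - 0.5 * a * np.array([h[1:] - h[:-1],
                                                    hu[1:] - hu[:-1]])
    # Conservative update for interior cells; the two boundary cells
    # are left unchanged (simple wall-like treatment).
    h[1:-1] -= dt / dx * (f_iface[0][1:] - f_iface[0][:-1])
    hu[1:-1] -= dt / dx * (f_iface[1][1:] - f_iface[1][:-1])

    # Wet/dry treatment: clamp tiny depths and zero their momentum.
    dry = h < H_DRY
    h[dry], hu[dry] = 0.0, 0.0
    return dt

# Dam-break test: deep water on the left, dry bed on the right.
n, dx = 200, 1.0
h = np.where(np.arange(n) < n // 2, 2.0, 0.0)
hu = np.zeros(n)
t = 0.0
while t < 20.0:
    t += step(h, hu, dx)
```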

  8. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts and intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography; TRG only uses the topographic index to distribute average storage to each topographic-index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and the shapes of the storage-distribution curves depend on their topographic characteristics.
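
    As a rough sketch of the quantity at the center of this algorithm, the code below computes the topographic index ln(a / tan β) on a gridded DEM and uses its range to distribute a storage capacity across index classes, following the proportionality described in the abstract. The slope comes from simple finite differences; the upslope contributing area is taken as given, since proper flow-accumulation routing is beyond a short example; the scaling parameter and the within-class allocation are assumed placeholders, not the TRG formulation itself.

```python
import numpy as np

def topographic_index(dem, upslope_area, cell_size):
    """ln(a / tan(beta)): a = specific upslope area, beta = local slope."""
    dzdy, dzdx = np.gradient(dem, cell_size)
    tan_beta = np.maximum(np.hypot(dzdx, dzdy), 1e-6)   # avoid div by 0
    a = np.maximum(upslope_area / cell_size, cell_size)  # per unit contour
    return np.log(a / tan_beta)

def storage_per_class(ti, n_classes=20, scale_param=0.1):
    """Distribute storage capacity over topographic-index classes;
    maximum capacity proportional to the TI range (one free parameter)."""
    edges = np.linspace(ti.min(), ti.max(), n_classes + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    s_max = scale_param * (ti.max() - ti.min())
    # Wetter (high-TI) classes saturate first: assign them less capacity.
    frac = (ti.max() - centers) / (ti.max() - ti.min())
    return centers, s_max * frac

# Synthetic example (a tilted plane with noise; area field is assumed).
dem = np.fromfunction(lambda i, j: 0.01 * i + 0.005 * j, (100, 100))
dem += 0.001 * np.random.default_rng(0).standard_normal(dem.shape)
area = np.full(dem.shape, 1e4)  # placeholder contributing area, m^2
ti = topographic_index(dem, area, cell_size=90.0)
centers, storage = storage_per_class(ti)
```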

  9. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jörg; Heister, Timo; Bangerth, Wolfgang

    2015-01-01

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  12. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks.

    Science.gov (United States)

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk

    2016-07-18

    Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions for collecting precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that the proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and, in comparison with other channel access schemes for WSNs, double the average network throughput.
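
    To make the collision mechanism concrete, the toy simulation below compares plain slotted random access with a crude "directional" variant in which the channel is split into independent sectors, so only nodes transmitting in the same sector can collide. It is an illustrative model of the general idea, not the protocol proposed in the paper; node counts and probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def success_rate(n_nodes, p_tx, n_sectors, n_slots=10000):
    """Average successful transmissions per slot. A sector succeeds
    in a slot iff exactly one node in that sector transmits."""
    successes = 0
    for _ in range(n_slots):
        tx = rng.random(n_nodes) < p_tx               # who transmits
        sector = rng.integers(0, n_sectors, n_nodes)  # spatial separation
        for s in range(n_sectors):
            if np.sum(tx & (sector == s)) == 1:
                successes += 1
    return successes / n_slots

# 500 dense nodes, 5% transmit probability per slot.
omni = success_rate(500, 0.05, n_sectors=1)
directional = success_rate(500, 0.05, n_sectors=8)
print(f"throughput (omni)      : {omni:.2f} packets/slot")
print(f"throughput (8 sectors) : {directional:.2f} packets/slot")
```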

  13. Application of the Monte Carlo method for the efficiency calibration of CsI and NaI detectors for gamma-ray measurements from terrestrial samples

    International Nuclear Information System (INIS)

    Baccouche, S.; Al-Azmi, D.; Karunakara, N.; Trabelsi, A.

    2012-01-01

    Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for constructing the full-energy efficiency calibration curves of both detectors using gamma-ray energies associated with the decay of the naturally occurring radionuclides 137Cs (661 keV), 40K (1460 keV), 238U (214Bi, 1764 keV) and 232Th (208Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence-summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation. The method provides an efficient tool for constructing the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using naturally occurring gamma-emitting terrestrial radionuclides. ► The coincidence-summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation.
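
    The abstract does not include the simulation itself (the study would have used full photon-transport modelling). Purely as a toy illustration of the Monte Carlo idea, the sketch below estimates the efficiency of an on-axis cylindrical detector by sampling isotropic emission directions from a point source and weighting each hit by an interaction probability along its chord. The geometry and attenuation coefficient are placeholder values, and using a single total attenuation coefficient overstates the full-energy peak response.

```python
import numpy as np

def mc_efficiency(n=1_000_000, R=0.038, t=0.076, d=0.10, mu=30.0, seed=1):
    """Toy MC estimate of the detection efficiency of a cylindrical
    detector (radius R, thickness t, metres) for an on-axis point
    source at distance d. mu is an assumed attenuation coefficient
    (1/m); real calibrations track full photon transport instead."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic emission
    hits = cos_t > d / np.sqrt(d * d + R * R)  # within the face's cone
    cos_h = cos_t[hits]
    sin_h = np.sqrt(1.0 - cos_h**2)
    r0 = d * sin_h / cos_h                     # entry radius on the face
    # Chord through the crystal: exit via the back face or the side wall
    path_back = t / cos_h
    path_side = (R - r0) / np.maximum(sin_h, 1e-12)
    chord = np.minimum(path_back, path_side)
    p_interact = 1.0 - np.exp(-mu * chord)
    # Absolute efficiency = geometric fraction x mean interaction prob.
    return hits.sum() / n * p_interact.mean()

print(f"estimated efficiency: {mc_efficiency():.4f}")
```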

  14. Feasibility analysis of large length-scale thermocapillary flow experiment for the International Space Station

    Science.gov (United States)

    Alberts, Samantha J.

    The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces and induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths of current and future large tanks and hardware are chosen on the basis of hardware heritage rather than research, which leaves spaceflight systems designers without the technological tools to create safe and efficient designs. This thesis focuses on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focuses on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results improve the understanding of thermocapillary flows and thus the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.

  15. Rethinking Trade-Driven Extinction Risk in Marine and Terrestrial Megafauna.

    Science.gov (United States)

    McClenachan, Loren; Cooper, Andrew B; Dulvy, Nicholas K

    2016-06-20

    Large animals hunted for the high value of their parts (e.g., elephant ivory and shark fins) are at risk of extinction due to both intensive international trade pressure and intrinsic biological sensitivity. However, the relative roles of trade, particularly in non-perishable products, and of biological factors in driving extinction risk are not well understood [1-4]. Here we identify a taxonomically diverse group of >100 marine and terrestrial megafauna targeted for international luxury markets; estimate their value across three points of sale; test relationships among extinction risk, high value, and body size; and quantify the effects of two mitigating factors: poaching fines and geographic range size. We find that body size is the principal driver of risk for lower-value species, but that this biological pattern is eliminated above a value threshold, meaning that the most valuable species face a high extinction risk regardless of size. For example, once mean product values exceed US$12,557 per kg, body size no longer drives risk. Total value scales with size more strongly for marine animals than for terrestrial animals, incentivizing the hunting of large marine individuals and species. Poaching fines currently have little effect on extinction risk; fines would need to be increased 10- to 100-fold to be effective. Large geographic ranges reduce risk for terrestrial, but not marine, species, whose ranges are ten times greater. Our results underscore both the evolutionary and ecosystem consequences of targeting large marine animals and the need to geographically scale up and prioritize conservation of high-value marine species to avoid extinction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella; Fu, Hui-chun; He, Jr-Hau; Fratalocchi, Andrea

    2017-01-01

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical (PEC) devices for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process exploiting suitably disordered surfaces. Additionally, this device can harvest light on both sides to efficiently gain and store energy and keep the photocatalytic reaction active.

  18. Advances in High-Efficiency III-V Multijunction Solar Cells

    Directory of Open Access Journals (Sweden)

    Richard R. King

    2007-01-01

    The high efficiency of multijunction concentrator cells has the potential to revolutionize the cost structure of photovoltaic electricity generation. Advances in the design of metamorphic subcells to reduce carrier recombination and increase voltage, wide-band-gap tunnel junctions capable of operating at high concentration, metamorphic buffers to transition from the substrate lattice constant to that of the epitaxial subcells, concentrator cell AR coating and grid design, and integration into 3-junction cells with current-matched subcells under the terrestrial spectrum have resulted in new heights in solar cell performance. A metamorphic Ga0.44In0.56P/Ga0.92In0.08As/Ge 3-junction solar cell from this research has reached a record 40.7% efficiency at 240 suns under the standard reporting spectrum for terrestrial concentrator cells (AM1.5 direct, low-AOD, 24.0 W/cm2, 25°C), and experimental lattice-matched 3-junction cells have now also achieved over 40% efficiency, with 40.1% measured at 135 suns. This metamorphic 3-junction device is the first solar cell to reach over 40% efficiency and has the highest solar conversion efficiency of any type of photovoltaic cell developed to date. Solar cells with more junctions offer the potential for still higher efficiencies. Four-junction cells limited by radiative recombination can reach over 58% in principle, and practical 4-junction cell efficiencies over 46% are possible with the right combination of band gaps, taking into account series resistance and gridline shadowing. Many of the optimum band gaps for maximum energy conversion can be accessed with metamorphic semiconductor materials. The lower current in cells with 4 or more junctions, resulting in lower I²R resistive power loss, is a particularly significant advantage in concentrator PV systems. Prototype 4-junction terrestrial concentrator cells have been grown by metal-organic vapor-phase epitaxy.
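
    The series connection behind the abstract's "current-matched subcells" has a simple consequence: the string current is pinned to the weakest subcell, while subcell voltages add. The sketch below illustrates that constraint and the I²R argument for low-current, many-junction designs; the photocurrents and voltages are invented round numbers, not measured device data.

```python
# Toy model of a series-connected multijunction cell: the stack current
# is limited by the smallest subcell photocurrent; voltages add.
# All numbers are illustrative, not measurements from the paper.

def stack_power(j_sc, v_oc, r_series=0.01):
    """Very crude output estimate for a series stack (per cm^2):
    current = min subcell photocurrent, voltage = sum of subcell
    voltages, minus an I^2 R series-resistance loss."""
    j = min(j_sc)                    # A/cm^2, current matching
    v = sum(v_oc)                    # V, subcell voltages add
    return j * v - j * j * r_series  # W/cm^2 with resistive loss

# 3-junction example at high concentration (illustrative values):
triple = stack_power(j_sc=[6.0, 6.1, 6.3], v_oc=[2.0, 1.3, 0.6])

# A 4-junction split of the same spectrum runs at lower current but
# higher voltage, so the I^2 R penalty shrinks:
quad = stack_power(j_sc=[4.5, 4.6, 4.7, 4.8], v_oc=[2.2, 1.6, 1.1, 0.6])

print(f"3J: {triple:.2f} W/cm^2   4J: {quad:.2f} W/cm^2")
```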

  19. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second-level trigger and event filter operations. This large number of PCs will only be purchased just before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a farm whose size was increased stepwise from 100 up to 700 dual PC nodes. The interplay of the control and monitoring software with the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. Also new was running the trigger selection algorithms in the online environment and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently to this large PC cluster via peer-to-peer software. T...

  1. High Efficiency Large-Angle Pancharatnam Phase Deflector Based on Dual Twist Design

    Science.gov (United States)

    2016-12-16

    Construction and characterization of a ±40° beam-steering device with 90% diffraction efficiency at a wavelength of 633 nm, based on the dual-twist design.

  2. 1km Global Terrestrial Carbon Flux: Estimations and Evaluations

    Science.gov (United States)

    Murakami, K.; Sasai, T.; Kato, S.; Saito, M.; Matsunaga, T.; Hiraki, K.; Maksyutov, S. S.

    2017-12-01

    Estimating global terrestrial carbon flux changes with high accuracy and high resolution is important for understanding global environmental change, and estimates of the global spatiotemporal distribution may contribute to political and social activities such as REDD+. To reveal the current state of terrestrial carbon fluxes worldwide on a decadal scale, a satellite-based diagnostic biosphere model is suitable, since it observes the present global land surface uniformly at regular time intervals. In this study, we estimated global terrestrial carbon fluxes on a 1 km grid using the terrestrial biosphere model BEAMS, evaluated the new flux estimates at various spatial scales, and examined the transition of forest carbon stocks in selected regions. Because BEAMS requires high-resolution meteorological and satellite data as input, we produced 1 km interpolated data using a kriging method. The data used in this study were JRA-55, GPCP, and GOSAT L4B atmospheric CO2 data as meteorological inputs, and MODIS land products as land surface satellite data. Interpolation was applied to the meteorological data because of their insufficient resolution, but not to the MODIS data. We evaluated the new carbon flux estimates at point scale against flux tower measurements (FLUXNET2015 datasets), using data from 166 sites. These flux sites were classified by vegetation type (DBF, EBF, ENF, mixed forests, grasslands, croplands, shrublands, savannas, wetlands). At the global scale, BEAMS underestimated both carbon uptake and release compared with the flux measurements. The monthly variations of NEP showed relatively high correlations for DBF and mixed forests, but the correlation coefficients for EBF, ENF, and grasslands were less than 0.5.
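
    The abstract mentions kriging only in passing; as a hedged illustration of that interpolation step, the sketch below implements textbook ordinary kriging with an assumed exponential variogram (the study's actual variogram model and parameters are not given). Each 1 km target cell would be estimated as a weighted sum of nearby station or reanalysis grid values.

```python
import numpy as np

def variogram(h, sill=1.0, rng_param=50.0, nugget=0.05):
    """Assumed exponential variogram model (placeholder parameters)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_param))

def ordinary_kriging(xy, z, target):
    """Ordinary kriging estimate at one target location.
    xy: (n, 2) sample coordinates, z: (n,) sample values."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing the
    # unbiasedness constraint sum(weights) = 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[np.arange(n), np.arange(n)] = 0.0   # gamma(0) = 0 on the diagonal
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=-1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

# Toy example: interpolate a smooth field from 30 scattered samples.
rs = np.random.default_rng(3)
pts = rs.uniform(0.0, 100.0, (30, 2))
vals = np.sin(pts[:, 0] / 20.0) + 0.1 * rs.standard_normal(30)
print(ordinary_kriging(pts, vals, target=np.array([50.0, 50.0])))
```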

  3. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Science.gov (United States)

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analysis, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing output has led to a shortage of efficient approaches for aligning ultra-large sets (e.g., files of more than 1 GB) of biological sequences of different types. Distributed and parallel computing is a crucial technique for accelerating such ultra-large analyses. Based on HAlign and the Spark distributed computing system, we implemented HAlign-II, a highly cost-efficient and time-efficient tool for ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein datasets larger than 1 GB showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.

  4. An Improved GRACE Terrestrial Water Storage Assimilation System For Estimating Large-Scale Soil Moisture and Shallow Groundwater

    Science.gov (United States)

    Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.

    2015-12-01

    The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate, column-integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are their monthly temporal and coarse spatial resolution (around 330 km at the equator) and the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of a gridded GRACE-TWS data product with scaling factors derived specifically for data assimilation purposes; 2) a 3D assimilation scheme that exploits reasonable spatial and temporal error standard deviations and correlations; 3) an analysis step with an optimized calculation and application of the analysis increments; and 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even though they have coarse spatial and temporal resolution, the observed column-integrated GRACE-TWS data have the potential to improve our understanding of soil moisture and shallow groundwater variations.
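
    The abstract does not spell out the assimilation mathematics; land data assimilation systems of this kind commonly use an ensemble Kalman filter, so the sketch below shows a generic stochastic EnKF analysis step as a hedged illustration. The state, observation operator, and error levels are invented placeholders, not the GEOS-5 configuration.

```python
import numpy as np

def enkf_analysis(X, y, H, r_var, rng):
    """Stochastic ensemble Kalman filter update.
    X: (n_state, n_ens) prior ensemble, y: (n_obs,) observation,
    H: (n_obs, n_state) linear observation operator, r_var: obs variance."""
    n_obs, n_ens = len(y), X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    S = H @ Xm                                    # obs-space anomalies
    C = (S @ S.T) / (n_ens - 1) + r_var * np.eye(n_obs)  # H P H^T + R
    K = (Xm @ S.T) / (n_ens - 1) @ np.linalg.inv(C)      # Kalman gain
    # Perturbed observations keep the posterior spread consistent.
    Y = y[:, None] + np.sqrt(r_var) * rng.standard_normal((n_obs, n_ens))
    return X + K @ (Y - H @ X)

# Toy setup: 10 model columns, 20 ensemble members, one TWS-like
# observation that averages the whole state (vertical integration).
rng = np.random.default_rng(7)
X = 1.0 + 0.3 * rng.standard_normal((10, 20))
H = np.full((1, 10), 0.1)                # observation = state mean
X_post = enkf_analysis(X, y=np.array([1.4]), H=H, r_var=0.01, rng=rng)
```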

  5. THE COMPOSITIONAL DIVERSITY OF EXTRASOLAR TERRESTRIAL PLANETS. II. MIGRATION SIMULATIONS

    International Nuclear Information System (INIS)

    Carter-Bond, Jade C.; O'Brien, David P.; Raymond, Sean N.

    2012-01-01

    Prior work has found that a variety of terrestrial planetary compositions are expected to occur within known extrasolar planetary systems. However, such studies ignored the effects of giant planet migration, which is thought to be very common in extrasolar systems. Here we present calculations of the compositions of terrestrial planets that formed in dynamical simulations incorporating varying degrees of giant planet migration. We used chemical equilibrium models of the solid material present in the disks of five known planetary host stars: the Sun, GJ 777, HD4203, HD19994, and HD213240. Giant planet migration has a strong effect on the compositions of simulated terrestrial planets as the migration results in large-scale mixing between terrestrial planet building blocks that condensed at a range of temperatures. This mixing acts to (1) increase the typical abundance of Mg-rich silicates in the terrestrial planets' feeding zones and thus increase the frequency of planets with Earth-like compositions compared with simulations with static giant planet orbits, and (2) drastically increase the efficiency of the delivery of hydrous phases (water and serpentine) to terrestrial planets and thus produce waterworlds and/or wet Earths. Our results demonstrate that although a wide variety of terrestrial planet compositions can still be produced, planets with Earth-like compositions should be common within extrasolar planetary systems.

  6. Application of the Monte Carlo method for the efficiency calibration of CsI and NaI detectors for gamma-ray measurements from terrestrial samples

    Energy Technology Data Exchange (ETDEWEB)

    Baccouche, S., E-mail: souad.baccouche@cnstn.rnrt.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); Al-Azmi, D., E-mail: ds.alazmi@paaet.edu.kw [Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, Shuwaikh, P.O. Box 42325, Code 70654 (Kuwait); Karunakara, N., E-mail: karunakara_n@yahoo.com [University Science Instrumentation Centre, Mangalore University, Mangalagangotri 574199 (India); Trabelsi, A., E-mail: adel.trabelsi@fst.rnu.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); UR-UPNHE, Faculty of Sciences of Tunis, El-Manar University, 2092 Tunis (Tunisia)

    2012-01-15

    Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for constructing the full-energy efficiency calibration curves of both detectors using gamma-ray energies associated with the decay of the naturally occurring radionuclides 137Cs (661 keV), 40K (1460 keV), 238U (214Bi, 1764 keV) and 232Th (208Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence-summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation. The method provides an efficient tool for constructing the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using naturally occurring gamma-emitting terrestrial radionuclides. ► The coincidence-summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation.

  7. High-Temperature High-Efficiency Solar Thermoelectric Generators

    Energy Technology Data Exchange (ETDEWEB)

    Baranowski, LL; Warren, EL; Toberer, ES

    2014-03-01

    Inspired by recent high-efficiency thermoelectric modules, we consider thermoelectrics for terrestrial applications in concentrated solar thermoelectric generators (STEGs). The STEG is modeled as two subsystems: a TEG, and a solar absorber that efficiently captures the concentrated sunlight and limits radiative losses from the system. The TEG subsystem is modeled using thermoelectric compatibility theory; this model does not constrain the material properties to be constant with temperature. Considering a three-stage TEG based on current record modules, this model suggests that 18% efficiency could be experimentally expected with a temperature gradient of 1000 °C to 100 °C. Achieving 15% overall STEG efficiency thus requires an absorber efficiency above 85%, and we consider two methods to achieve this: solar-selective absorbers and thermally insulating cavities. When the TEG and absorber subsystem models are combined, we expect that the STEG modeled here could achieve 15% efficiency with optical concentration between 250 and 300 suns.
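
    The 85% absorber-efficiency requirement follows from a simple energy balance: the overall efficiency is the product of the absorber and TEG efficiencies, and the absorber loses heat by re-radiation from its hot surface. The sketch below evaluates a standard form of that balance; the absorptance, emittance, and temperatures are assumed illustrative values, not the paper's detailed model.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
Q_SUN = 1000.0    # direct solar flux, W/m^2 (assumed)

def absorber_efficiency(concentration, t_hot, alpha=0.95, eps=0.2,
                        t_amb=300.0):
    """Fraction of concentrated sunlight delivered as heat: absorbed
    flux minus re-radiation from the hot absorber surface."""
    absorbed = alpha * concentration * Q_SUN
    radiated = eps * SIGMA * (t_hot**4 - t_amb**4)
    return (absorbed - radiated) / (concentration * Q_SUN)

# Selective absorber (low emittance) at 1273 K and ~275 suns:
eta_abs = absorber_efficiency(concentration=275, t_hot=1273.0)
eta_teg = 0.18                     # TEG efficiency quoted in the abstract
print(f"absorber: {eta_abs:.1%}, overall STEG: {eta_abs * eta_teg:.1%}")
```

    With these placeholder values the balance gives an absorber efficiency of roughly 84% and an overall efficiency near 15%, consistent with the operating window quoted in the abstract.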

  8. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues...

  9. Dynamical Origin and Terrestrial Impact Flux of Large Near-Earth Asteroids

    Science.gov (United States)

    Nesvorný, David; Roig, Fernando

    2018-01-01

    Dynamical models of the asteroid delivery from the main belt suggest that the current impact flux of diameter D > 10 km asteroids on the Earth is ≃0.5-1 Gyr^-1. Studies of the Near-Earth Asteroid (NEA) population find a much higher flux, with ≃7 impacts of D > 10 km asteroids per Gyr. Here we show that this problem is rooted in the application of the impact probability of small NEAs (≃1.5 Gyr^-1 per object), whose population is well characterized, to large NEAs. In reality, large NEAs evolve from the main belt by different escape routes, have a different orbital distribution, and have lower impact probabilities (0.8 ± 0.3 Gyr^-1 per object) than small NEAs. In addition, we find that the current population of two D > 10 km NEAs (Ganymed and Eros) is a slight fluctuation over the long-term average of 1.1 ± 0.5 D > 10 km NEAs in a steady state. These results have important implications for our understanding of the occurrence of K/T-scale impacts on the terrestrial worlds.
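
    The resolution of the discrepancy is plain arithmetic: the expected terrestrial impact flux is the steady-state number of large NEAs times the per-object impact probability. The few lines below redo that multiplication with the values quoted in the abstract (a reading of the abstract, not code from the study).

```python
# Expected impact flux = (steady-state number of D > 10 km NEAs)
#                      x (impact probability per object per Gyr)
n_large_nea = 1.1   # long-term average population of D > 10 km NEAs
p_large = 0.8       # Gyr^-1 per object, appropriate for large NEAs
p_small = 1.5       # Gyr^-1 per object, appropriate for small NEAs

print(f"flux with large-NEA probability: {n_large_nea * p_large:.2f} /Gyr")
print(f"flux if the small-NEA probability is misapplied to the current "
      f"population of 2: {2 * p_small:.2f} /Gyr")
```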

  10. Efficient management vital to large, long-term engineering projects

    International Nuclear Information System (INIS)

    Wolfe, P.L.

    1989-01-01

    This article describes the ways in which firms manage large hazardous waste mitigation projects efficiently. Staffing concerns, control systems and reporting mechanisms critical to the effective and timely management of these large-scale programs are explored.

  11. A universal scaling relationship between body mass and proximal limb bone dimensions in quadrupedal terrestrial tetrapods.

    Science.gov (United States)

    Campione, Nicolás E; Evans, David C

    2012-07-10

    Body size is intimately related to the physiology and ecology of an organism. Therefore, accurate and consistent body mass estimates are essential for inferring numerous aspects of paleobiology in extinct taxa, and investigating large-scale evolutionary and ecological patterns in the history of life. Scaling relationships between skeletal measurements and body mass in birds and mammals are commonly used to predict body mass in extinct members of these crown clades, but the applicability of these models for predicting mass in more distantly related stem taxa, such as non-avian dinosaurs and non-mammalian synapsids, has been criticized on biomechanical grounds. Here we test the major criticisms of scaling methods for estimating body mass using an extensive dataset of mammalian and non-avian reptilian species derived from individual skeletons with live weights. Significant differences in the limb scaling of mammals and reptiles are noted in comparisons of limb proportions and limb length to body mass. Remarkably, however, the relationship between proximal (stylopodial) limb bone circumference and body mass is highly conserved in extant terrestrial mammals and reptiles, in spite of their disparate limb postures, gaits, and phylogenetic histories. As a result, we are able to conclusively reject the main criticisms of scaling methods that question the applicability of a universal scaling equation for estimating body mass in distantly related taxa. The conserved nature of the relationship between stylopodial circumference and body mass suggests that the minimum diaphyseal circumference of the major weight-bearing bones is only weakly influenced by the varied forces exerted on the limbs (that is, compression or torsion) and most strongly related to the mass of the animal. Our results, therefore, provide a much-needed, robust, phylogenetically corrected framework for accurate and consistent estimation of body mass in extinct terrestrial quadrupeds, which is important for a

  12. A universal scaling relationship between body mass and proximal limb bone dimensions in quadrupedal terrestrial tetrapods

    Directory of Open Access Journals (Sweden)

    Campione Nicolás E

    2012-07-01

    Full Text Available Abstract Background Body size is intimately related to the physiology and ecology of an organism. Therefore, accurate and consistent body mass estimates are essential for inferring numerous aspects of paleobiology in extinct taxa, and investigating large-scale evolutionary and ecological patterns in the history of life. Scaling relationships between skeletal measurements and body mass in birds and mammals are commonly used to predict body mass in extinct members of these crown clades, but the applicability of these models for predicting mass in more distantly related stem taxa, such as non-avian dinosaurs and non-mammalian synapsids, has been criticized on biomechanical grounds. Here we test the major criticisms of scaling methods for estimating body mass using an extensive dataset of mammalian and non-avian reptilian species derived from individual skeletons with live weights. Results Significant differences in the limb scaling of mammals and reptiles are noted in comparisons of limb proportions and limb length to body mass. Remarkably, however, the relationship between proximal (stylopodial) limb bone circumference and body mass is highly conserved in extant terrestrial mammals and reptiles, in spite of their disparate limb postures, gaits, and phylogenetic histories. As a result, we are able to conclusively reject the main criticisms of scaling methods that question the applicability of a universal scaling equation for estimating body mass in distantly related taxa. Conclusions The conserved nature of the relationship between stylopodial circumference and body mass suggests that the minimum diaphyseal circumference of the major weight-bearing bones is only weakly influenced by the varied forces exerted on the limbs (that is, compression or torsion) and most strongly related to the mass of the animal. Our results, therefore, provide a much-needed, robust, phylogenetically corrected framework for accurate and consistent estimation of body mass in
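    The kind of bivariate fit described in these two records can be sketched as ordinary least-squares regression in log-log space. The circumference and mass values below are hypothetical placeholders, not the published dataset, so the fitted coefficients are not the paper's equation:

```python
# Sketch: a log-log power-law fit of body mass against stylopodial limb bone
# circumference, the relationship the records describe. Data are hypothetical.
import numpy as np

circumference_mm = np.array([40.0, 95.0, 180.0, 310.0, 520.0])
body_mass_g = np.array([8.0e2, 1.3e4, 1.1e5, 7.5e5, 4.2e6])

slope, intercept = np.polyfit(np.log10(circumference_mm), np.log10(body_mass_g), 1)

def predict_mass_g(c_mm):
    """Predict body mass from limb bone circumference via the fitted power law."""
    return 10 ** (intercept + slope * np.log10(c_mm))

print(f"log10(mass) = {slope:.3f} * log10(C) + {intercept:.3f}")
print(f"predicted mass at C = 400 mm: {predict_mass_g(400.0):.3e} g")
```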

  13. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...

  14. A High Efficiency DC-DC Converter Topology Suitable for Distributed Large Commercial and Utility Scale PV Systems

    Energy Technology Data Exchange (ETDEWEB)

    Agamy, Mohammed S; Harfman-Todorovic, Maja; Elasser, Ahmed; Steigerwald, Robert L; Sabate, Juan A; Chi, Song; McCann, Adam J; Zhang, Li; Mueller, Frank

    2012-09-01

    In this paper, a DC-DC power converter for distributed photovoltaic plant architectures is presented. The proposed converter has the advantages of simplicity, high efficiency, and low cost. High efficiency is achieved by having a portion of the input PV power directly fed forward to the output without being processed by the converter. The operation of this converter also allows for a simplified maximum power point tracker design using fewer measurements.
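    The efficiency benefit of feeding part of the PV power directly to the output can be made concrete with a small calculation. The split fraction and the converter-stage efficiency below are assumed values, since the record quotes none:

```python
# Sketch: partial power processing. Directly forwarded power is (ideally)
# lossless; only the processed fraction incurs conversion loss. All numbers
# here are assumptions for illustration.
def effective_efficiency(direct_fraction, eta_converter):
    return direct_fraction + (1.0 - direct_fraction) * eta_converter

eta_conv = 0.96   # efficiency of the processing stage (assumed)
for k in (0.0, 0.5, 0.8):
    print(f"direct fraction {k:.0%}: "
          f"overall efficiency {effective_efficiency(k, eta_conv):.2%}")
```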

  15. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency, a high possibility of cost-effective fabrication, and certified power conversion efficiencies now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. High-temperature short-time annealing at 400 °C for 4 s caused fast solvent evaporation and yielded perovskite films with an average domain size of 1 μm. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  16. A novel iron-lead redox flow battery for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Ren, Y. X.

    2017-04-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies for the massive utilization of intermittent renewables, especially wind and solar energy. This work presents a novel redox flow battery that utilizes inexpensive and abundant Fe(II)/Fe(III) and Pb/Pb(II) redox couples as redox materials. Experimental results show that both the Fe(II)/Fe(III) and Pb/Pb(II) redox couples have fast electrochemical kinetics in methanesulfonic acid, and that the coulombic efficiency and energy efficiency of the battery are, respectively, as high as 96.2% and 86.2% at 40 mA cm⁻². Furthermore, the battery exhibits stable performance in terms of efficiencies and discharge capacities during the cycle test. The inexpensive redox materials, fast electrochemical kinetics and stable cycle performance make the present battery a promising candidate for large-scale energy storage applications.
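    The two figures of merit quoted above follow from a single charge/discharge cycle. A minimal sketch, with hypothetical charge and energy values chosen to reproduce the quoted efficiencies:

```python
# Sketch: coulombic and energy efficiency from one charge/discharge cycle,
# the figures of merit quoted for the Fe/Pb cell. Input values are hypothetical.
def coulombic_efficiency(q_discharge_Ah, q_charge_Ah):
    return q_discharge_Ah / q_charge_Ah

def energy_efficiency(e_discharge_Wh, e_charge_Wh):
    return e_discharge_Wh / e_charge_Wh

q_charge, q_discharge = 10.0, 9.62     # Ah in / out (hypothetical)
e_charge, e_discharge = 13.0, 11.21    # Wh in / out (hypothetical)
print(f"CE = {coulombic_efficiency(q_discharge, q_charge):.1%}")  # ~96.2%
print(f"EE = {energy_efficiency(e_discharge, e_charge):.1%}")     # ~86.2%
```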

  17. No large scale curvature perturbations during the waterfall phase transition of hybrid inflation

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2011-01-01

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical backreactions to terminate inflation. If one considers only the classical evolution of the system, we show that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum backreactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical backreactions. The cumulative quantum backreactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the curvature perturbation evolution during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe under large scale curvature perturbations during the waterfall phase transition.

  18. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the electrolysis modules currently available was drawn up. The large scale electrolysis plants that have been installed around the world were also reviewed, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are also discussed, including the influence of energy prices on the hydrogen production cost by large scale electrolysis. (authors)

  19. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  20. Integration and segregation of large-scale brain networks during short-term task automatization.

    Science.gov (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-11-03

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  1. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  2. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm
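    One plausible reading of the storage-distribution step is sketched below with toy data: per-pixel topographic index values in a grid cell are binned into classes, the maximum capacity scales with the index range through a single parameter, and the cell-average storage is spread over the classes. The specific decreasing-with-wetness rule here is an assumption, not the published TRG formulation:

```python
# Sketch of a TRG-style storage distribution over topographic index classes.
# The index field, the single scaling parameter beta, and the distribution
# rule are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
topo_index = rng.gamma(shape=2.0, scale=2.0, size=10_000)   # per-pixel index (toy)
n_classes, beta, mean_storage_mm = 20, 0.8, 50.0            # beta, storage: assumed

edges = np.linspace(topo_index.min(), topo_index.max(), n_classes + 1)
area_fraction, _ = np.histogram(topo_index, bins=edges)
area_fraction = area_fraction / area_fraction.sum()
centres = 0.5 * (edges[:-1] + edges[1:])

# Maximum storage proportional to the topographic-index range, scaled by beta.
s_max = beta * (topo_index.max() - topo_index.min())

# Wetter classes (higher index) get less capacity; rescale so the
# area-weighted mean matches the grid cell's average storage.
raw = s_max * (topo_index.max() - centres) / (topo_index.max() - topo_index.min())
capacity = raw * mean_storage_mm / np.sum(area_fraction * raw)

print("area fractions:", np.round(area_fraction[:4], 3))
print("class capacities (mm):", np.round(capacity[:4], 1))
```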

  3. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  4. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antennas are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
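    A common low-complexity alternative to exhaustive search is greedy selection, adding one transmit antenna at a time by a capacity criterion. The sketch below contrasts the two on a small random channel; it is a generic greedy baseline, not the paper's interactive multiple-parameter method:

```python
# Sketch: greedy transmit antenna selection versus exhaustive search.
# Greedy needs O(n_sel * n_tx) capacity evaluations; exhaustive needs C(16,4)=1820.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_rx, n_tx, n_sel, snr = 4, 16, 4, 10.0
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

def capacity(H_sel):
    """MIMO capacity with equal power allocation over the selected antennas."""
    n = H_sel.shape[1]
    G = np.eye(n_rx) + (snr / n) * H_sel @ H_sel.conj().T
    return np.log2(np.linalg.det(G).real)

selected = []
for _ in range(n_sel):          # greedy: add the best antenna at each step
    best = max((j for j in range(n_tx) if j not in selected),
               key=lambda j: capacity(H[:, selected + [j]]))
    selected.append(best)

best_exh = max(combinations(range(n_tx), n_sel),
               key=lambda s: capacity(H[:, list(s)]))
print("greedy:    ", sorted(selected), f"{capacity(H[:, selected]):.2f} bit/s/Hz")
print("exhaustive:", list(best_exh), f"{capacity(H[:, list(best_exh)]):.2f} bit/s/Hz")
```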

  5. Multi-model analysis of terrestrial carbon cycles in Japan: reducing uncertainties in model outputs among different terrestrial biosphere models using flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2009-08-01

    Terrestrial biosphere models show large uncertainties when simulating carbon and water cycles, and reducing these uncertainties is a priority for developing more accurate estimates of both terrestrial ecosystem statuses and future climate changes. To reduce uncertainties and improve the understanding of these carbon budgets, we investigated the ability of flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and an improved model (based on calibration using flux observations). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using flux observations (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs, and model calibration using flux observations significantly improved the model outputs. These results show that to reduce uncertainties among terrestrial biosphere models, we need to conduct careful validation and calibration with available flux observations. Flux observation data significantly improved terrestrial biosphere models, not only on a point scale but also on spatial scales.

  6. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we will introduce the network architecture and discuss challenges in such IP over optical multicasting for video delivery.

  7. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    Science.gov (United States)

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
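    The constraints listed above are straightforward to express in code. The sketch below generates barcodes by naive rejection sampling with an O(n²) pairwise Hamming check: precisely the brute-force baseline the reported framework improves on at million-barcode scale. All thresholds are illustrative:

```python
# Sketch: naive DNA barcode generation under GC-content, homopolymer,
# Hamming-distance and blacklist constraints. Thresholds are illustrative.
import random

BASES = "ACGT"
LENGTH, MIN_HAMMING, MAX_HOMOPOLYMER = 12, 3, 3
GC_RANGE = (0.4, 0.6)
BLACKLIST = ("GAATTC",)  # e.g. a restriction site (illustrative)

def ok(candidate, accepted):
    gc = sum(b in "GC" for b in candidate) / len(candidate)
    if not GC_RANGE[0] <= gc <= GC_RANGE[1]:
        return False
    if any(b * (MAX_HOMOPOLYMER + 1) in candidate for b in BASES):
        return False
    if any(s in candidate for s in BLACKLIST):
        return False
    # O(n) Hamming check against every accepted barcode -> O(n^2) overall.
    return all(sum(a != b for a, b in zip(candidate, prev)) >= MIN_HAMMING
               for prev in accepted)

random.seed(0)
barcodes = []
while len(barcodes) < 100:
    cand = "".join(random.choice(BASES) for _ in range(LENGTH))
    if ok(cand, barcodes):
        barcodes.append(cand)
print(barcodes[:5])
```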

  8. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.

  9. Recent Advances in Understanding Large Scale Vapour Explosions

    International Nuclear Information System (INIS)

    Board, S.J.; Hall, R.W.

    1976-01-01

    In foundries, violent explosions occur occasionally when molten metal comes into contact with water. If similar explosions can occur with other materials, hazardous situations may arise, for example in LNG marine transportation accidents, or in liquid cooled reactor incidents when molten UO₂ contacts water or sodium coolant. Over the last 10 years a large body of experimental data has been obtained on the behaviour of small quantities of hot material in contact with a vaporisable coolant. Such experiments generally give low energy yields, despite producing fine fragmentation of the molten material. These events have been interpreted in terms of a wide range of phenomena such as violent boiling, liquid entrainment, bubble collapse, superheat, surface cracking and many others. Many of these studies have been aimed at understanding the small scale behaviour of the particular materials of interest. However, understanding the nature of the energetic events which were the original cause for concern may also be necessary to give confidence that violent events cannot occur for these materials in large scale situations. More recently, there has been a trend towards larger experiments and some of these have produced explosions of moderately high efficiency. Although the occurrence of such large scale explosions can depend rather critically on initial conditions in a way which is not fully understood, there are signs that the interpretation of these events may be more straightforward than that of the single drop experiments. In the last two years several theoretical models for large scale explosions have appeared which attempt a self contained explanation of at least some stages of such high yield events: these have as their common feature a description of how a propagating breakdown of an initially quasi-stable distribution of materials is induced by the pressure and flow field caused by the energy release in adjacent regions. These models have led to the idea that for a full

  10. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; discarding them via feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
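    The select-then-binarize recipe reads roughly as below. The importance score used here (absolute correlation of each dimension with a binary label) is a generic stand-in for the paper's supervised importance sorting:

```python
# Sketch: supervised importance sorting of feature dimensions followed by
# 1-bit quantization, in the spirit of the record. Data and the scoring rule
# are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims, n_keep = 500, 1024, 128
X = rng.standard_normal((n_samples, n_dims))
y = np.sign(X[:, :8].sum(axis=1))            # labels depend on a few dimensions

# Importance: |correlation| between each dimension and the label.
scores = np.abs((X * y[:, None]).mean(axis=0) / X.std(axis=0))
keep = np.argsort(scores)[::-1][:n_keep]     # select dimensions, don't compress

X_selected = X[:, keep]
X_1bit = (X_selected > 0).astype(np.uint8)   # 1-bit quantization
print("top dims:", keep[:8], "| storage per vector:", X_1bit.shape[1], "bits")
```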

  11. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  12. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  13. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

    Full Text Available This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper concludes with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  14. Research on precision grinding technology of large scale and ultra thin optics

    Science.gov (United States)

    Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua

    2018-03-01

    The flatness and parallelism errors of large scale and ultra thin optics have an important influence on the subsequent polishing efficiency and accuracy. In order to realize high precision grinding of these ductile elements, a low-deformation vacuum chuck was designed first, which clamps the optics with high supporting rigidity over the full aperture. The optics was then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off, and the form error of the optics was measured on-machine with a displacement sensor after elastic restitution. The flatness was then converged to high accuracy by compensation machining, with tool trajectories generated from the measurement result. To obtain high parallelism, the optics was turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on a large scale and ultra thin fused silica optic with dimensions of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optic was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large scale and ultra thin optics.

  15. Electro-spray deposition of a mesoporous TiO2 charge collection layer: toward large scale and continuous production of high efficiency perovskite solar cells.

    Science.gov (United States)

    Kim, Min-cheol; Kim, Byeong Jo; Yoon, Jungjin; Lee, Jin-wook; Suh, Dongchul; Park, Nam-gyu; Choi, Mansoo; Jung, Hyun Suk

    2015-12-28

    The spin-coating method, which is widely used for thin film device fabrication, is incapable of large-area deposition or continuous processing. In perovskite hybrid solar cells using CH₃NH₃PbI₃ (MAPbI₃), large-area deposition is essential for their potential use in mass production. As a step toward replacing all of the spin-coating processes in perovskite solar cell fabrication, herein, a mesoporous TiO₂ electron-collection layer is fabricated using an electro-spray deposition (ESD) system. Moreover, impedance spectroscopy and transient photocurrent and photovoltage measurements reveal that the electro-sprayed mesoscopic TiO₂ film facilitates charge collection from the perovskite. The series resistance of the perovskite solar cell is also reduced owing to the highly porous nature of, and the low density of point defects in, the film. An optimized power conversion efficiency of 15.11% is achieved under an illumination of 1 sun; this efficiency is higher than that (13.67%) of the perovskite solar cell with a conventional spin-coated TiO₂ film. Furthermore, the large-area coating capability of the ESD process is verified through the coating of uniform 10 × 10 cm² TiO₂ films. This study clearly shows that ESD is therefore a viable alternative for the fabrication of high-throughput, large-area perovskite solar cells.

  16. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    Science.gov (United States)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of the optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory-efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre, implemented for the General Purpose Graphics Processing Unit (GPGPU) architecture. We apply our approach to the simulation of the optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider the problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that the SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that the SPP decay dynamics is accounted for. We further consider an OLED with a periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to conventional FDTD and demonstrate that our approach can be used efficiently for large-scale FDTD simulations with only a single GPGPU-powered workstation, which is not practically feasible with conventional FDTD.

  17. Potential high efficiency solar cells: Applications from space photovoltaic research

    Science.gov (United States)

    Flood, D. J.

    1986-01-01

    NASA involvement in photovoltaic energy conversion research, development, and applications spans over two decades of continuous progress. Solar cell research and development programs conducted by the Lewis Research Center's Photovoltaic Branch have produced a sound technology base not only for the space program, but for terrestrial applications as well. The fundamental goals which have guided the NASA photovoltaic program are to improve the efficiency and lifetime, and to reduce the mass and cost, of photovoltaic energy conversion devices and arrays for use in space. The major efforts in the current Lewis program are on high efficiency, single crystal GaAs planar and concentrator cells, radiation hard InP cells, and superlattice solar cells. A brief historical perspective of accomplishments in high efficiency space solar cells will be given, and current work in all of the above categories will be described. The applicability of space cell research and technology to terrestrial photovoltaics will be discussed.

  18. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    Science.gov (United States)

    Jia, F.; Lichti, D. D.

    2018-05-01

    Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity of reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve optimal scanner placement problems. If the target object to be scanned is simplified into discretized wall segments, any candidate viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search of the entire workspace. The experimental environments in this paper were simulated from two buildings located on the University of Calgary campus. Compared with the "brute force" strategy in terms of solution quality and runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70% time saving.
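    Casting the score table as a set-cover problem gives a compact baseline planner: greedily pick the viewpoint that sees the most still-uncovered wall segments until every segment is covered. The visibility sets below are hypothetical, and the paper's hierarchical densification refines where candidate viewpoints come from:

```python
# Sketch: greedy set-cover viewpoint planning over a score table of
# viewpoint -> visible wall segments. Visibility sets are hypothetical.
def plan_viewpoints(visibility, n_segments):
    """visibility: dict mapping viewpoint name -> set of visible segment ids."""
    uncovered, chosen = set(range(n_segments)), []
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gained = visibility[best] & uncovered
        if not gained:
            raise ValueError("some segments are not visible from any viewpoint")
        chosen.append(best)
        uncovered -= gained
    return chosen

visibility = {                     # toy score table (hypothetical)
    "V1": {0, 1, 2, 3}, "V2": {3, 4, 5}, "V3": {5, 6, 7}, "V4": {0, 6, 7},
}
print(plan_viewpoints(visibility, n_segments=8))   # -> ['V1', 'V3', 'V2']
```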

  19. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.

  20. Meniscus-assisted solution printing of large-grained perovskite films for high-efficiency solar cells

    Science.gov (United States)

    He, Ming; Li, Bo; Cui, Xun; Jiang, Beibei; He, Yanjie; Chen, Yihuang; O'Neil, Daniel; Szymanski, Paul; Ei-Sayed, Mostafa A.; Huang, Jinsong; Lin, Zhiqun

    2017-07-01

    Control over morphology and crystallinity of metal halide perovskite films is of key importance to enable high-performance optoelectronics. However, this remains particularly challenging for solution-printed devices due to the complex crystallization kinetics of semiconductor materials within dynamic flow of inks. Here we report a simple yet effective meniscus-assisted solution printing (MASP) strategy to yield large-grained dense perovskite film with good crystallization and preferred orientation. Intriguingly, the outward convective flow triggered by fast solvent evaporation at the edge of the meniscus ink imparts the transport of perovskite solutes, thus facilitating the growth of micrometre-scale perovskite grains. The growth kinetics of perovskite crystals is scrutinized by in situ optical microscopy tracking to understand the crystallization mechanism. The perovskite films produced by MASP exhibit excellent optoelectronic properties with efficiencies approaching 20% in planar perovskite solar cells. This robust MASP strategy may in principle be easily extended to craft other solution-printed perovskite-based optoelectronics.

  1. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for dealing with further complicated tasks. However, in big data research, GEP encounters a low efficiency issue due to its long mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
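    The parallelization pattern is the classic MapReduce split: map tasks score a candidate expression on partitions of the training data, and the reduce step sums the partial errors. The sketch below uses multiprocessing as a local stand-in for a MapReduce cluster, with an illustrative dataset and candidate function:

```python
# Sketch: MapReduce-style parallel fitness evaluation for a GEP candidate.
# The dataset and the candidate expression are illustrative stand-ins.
import math
from multiprocessing import Pool

DATA = [(x / 10.0, math.sin(x / 10.0)) for x in range(1000)]   # toy regression set

def taylor_sin(x):
    """Stand-in for one evolved GEP expression."""
    return x - x ** 3 / 6.0

def partial_fitness(partition):
    """Map task: squared error of the candidate on one data partition."""
    return sum((taylor_sin(x) - y) ** 2 for x, y in partition)

def fitness(n_workers=4):
    chunks = [DATA[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_fitness, chunks)   # map phase
    return sum(partials)                               # reduce phase

if __name__ == "__main__":
    print(f"total squared error: {fitness():.4f}")
```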

  2. Integrative taxonomy for continental-scale terrestrial insect observations.

    Directory of Open Access Journals (Sweden)

    Cara M Gibson

    Full Text Available Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities.

  3. Integrative Taxonomy for Continental-Scale Terrestrial Insect Observations

    Science.gov (United States)

    Gibson, Cara M.; Kao, Rebecca H.; Blevins, Kali K.; Travers, Patrick D.

    2012-01-01

    Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities. PMID:22666362

  4. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...

  5. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in organizing engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  6. High convergence efficiency design of flat Fresnel lens with large aperture

    Science.gov (United States)

    Ke, Jieyao; Zhao, Changming; Guan, Zhe

    2018-01-01

    This paper presents the design of a circle-shaped Fresnel lens with large aperture as part of a solar-pumped laser design project. The Fresnel lens was simulated with a size of 1000 mm × 1000 mm, a focal length of 1200 mm, and polymethyl methacrylate (PMMA) as the material, in order to achieve high convergence efficiency. Given the design requirement of concentric rings with a uniform width of 0.3 mm, this paper proposes an optimized Fresnel lens design based on a previous spherical design and conducts ray-tracing simulations in Matlab. This paper also analyzes the light spot size, light intensity distribution, and optical efficiency under four conditions: monochromatic parallel light, parallel spectrum light, divergent monochromatic light, and sunlight. Designed for a wavelength of 550 nm and accounting for Fresnel reflection, the results indicate that the lens can concentrate sunlight into a diffraction-limited spot of 11.8 mm with 78.7% optical efficiency, better than the 30.4% achieved by the spherical-cut design.
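    To first order, each concentric facet acts as a thin prism that must bend an incoming paraxial ray toward the focus, giving a facet angle of roughly arctan(r/f)/(n − 1). The sketch below evaluates this for the record's geometry; the thin-prism formula is only an approximation, and the actual design refines the angles by ray tracing:

```python
# Sketch: first-order facet (groove) angles for a flat Fresnel lens with the
# record's parameters (f = 1200 mm, PMMA). Uses the thin-prism approximation
# deviation ≈ (n - 1) * alpha; not the paper's optimized profile.
import math

n_pmma, focal_mm = 1.49, 1200.0

def facet_angle_deg(r_mm):
    deviation = math.atan(r_mm / focal_mm)      # bend needed to reach the focus
    return math.degrees(deviation / (n_pmma - 1.0))

for r in (50.0, 250.0, 500.0):                  # rings at several radii
    print(f"r = {r:5.0f} mm: facet angle ≈ {facet_angle_deg(r):5.2f} deg")
```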

  7. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm⁻², respectively. The Fe/Cd RFB exhibits stable efficiencies with a capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of $108 kWh⁻¹ for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention equip the Fe/Cd RFB to be a promising solution for large-scale energy storage systems.

  8. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
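    The core of Broyden's method is a rank-one secant update of an approximate Jacobian. The dense toy version below shows the model that the report's limited-memory variant makes scalable by avoiding storage of the full matrix; the test system and the finite-difference seeding are illustrative:

```python
# Sketch: dense "good Broyden" iteration for a nonlinear system. A limited-
# memory implementation would avoid forming and storing B explicitly.
import numpy as np

def fd_jacobian(F, x, eps=1e-6):
    """One-time finite-difference Jacobian to seed the approximation."""
    f0 = F(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)                        # initial Jacobian approximation
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)               # quasi-Newton step
        x, f_old = x + s, f
        f = F(x)
        y = f - f_old
        B += np.outer(y - B @ s, s) / (s @ s)    # Broyden rank-one update
    return x

F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
print(broyden_solve(F, [1.0, 2.0]))              # -> approx [1.4142, 1.4142]
```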

  9. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
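    The data-reduction idea can be shown in a few lines: multiply the long observation vector by a short random sketching matrix so that the downstream inversion works on k ≪ n reduced observations, while norms (and hence information content) are approximately preserved. Dimensions below are toy values, not those of the paper:

```python
# Sketch: random "sketching" for observation reduction, the idea behind RGA.
# A Gaussian sketching matrix approximately preserves norms and inner
# products (Johnson-Lindenstrauss-style); toy dimensions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_reduced = 20_000, 200

S = rng.standard_normal((n_reduced, n_obs)) / np.sqrt(n_reduced)

y = rng.standard_normal(n_obs)      # stand-in for the full observation vector
y_sketch = S @ y                    # reduced data fed to the inverse analysis

print(f"||y||      = {np.linalg.norm(y):.2f}")
print(f"||S y||    = {np.linalg.norm(y_sketch):.2f}   (similar magnitude)")
```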

  10. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and the complexity of large-scale data analysis, and they are increasingly applying traditional high performance computing (HPC) solutions to their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High performance virtual machines are deployed and scaled according to each scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mapping of queries to data locations, dramatically speeding up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting exascale architectures required for future systems.
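
    The spatiotemporal index can be pictured as a small relational table that maps a query's time and space bounds to the storage locations that must be read; the schema and paths below are hypothetical, not the NCCS implementation.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE chunks (
            path TEXT,              -- HDFS location of one NetCDF block
            t0 REAL, t1 REAL,       -- time span covered by the block
            lat0 REAL, lat1 REAL,   -- latitude bounds
            lon0 REAL, lon1 REAL)""")
        con.execute("INSERT INTO chunks VALUES "
                    "('hdfs:///merra/1980/01.nc', 0, 31, -90, 90, -180, 180)")

        # map a spatiotemporal query directly to the blocks that hold its data
        rows = con.execute("""SELECT path FROM chunks
            WHERE t1 >= ? AND t0 <= ? AND lat1 >= ? AND lat0 <= ?""",
            (10, 20, 30, 60)).fetchall()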

  11. An accurate and efficient method for large-scale SSR genotyping and applications.

    Science.gov (United States)

    Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai

    2017-06-02

    Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as an effective genetic marker with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as a genetic marker, we developed a novel method for SSR genotyping, named AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100% and 94%, with single-base resolution, for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Comparison of the large-scale radon risk map for southern Belgium with results of high resolution surveys

    International Nuclear Information System (INIS)

    Zhu, H.-C.; Charlet, J.M.; Poffijn, A.

    2000-01-01

    A large-scale radon survey consisting of long-term measurements in about 5200 single-family houses in the southern part of Belgium was carried out from 1995 to 1999. A radon risk map for the region was produced using geostatistical and GIS approaches. Some communes or villages situated within high risk areas were chosen for detailed surveys. A high resolution radon survey with about 330 measurements was performed in half of the commune of Burg-Reuland. Comparison of radon maps on quite different scales shows that the general Rn risk map exhibits a pattern similar to the radon map for the detailed study area. Another detailed radon survey, in the village of Hatrival, situated in a high radon area, found a very high proportion of houses with elevated radon concentrations. The results of this detailed survey are comparable to the expectation for high risk areas on the large-scale radon risk map. The good correspondence between the findings of the general risk map and the analysis of the limited detailed surveys suggests that the large-scale radon risk map is likely to be reliable. (author)

  13. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large scale, scalability and computational efficiency are desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy for solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus tending to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
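
    A minimal sketch of one bilateral iteration under stated assumptions: the descent direction is taken as the negative gradient, its leading eigenvector is computed with an iterative solver, and the bilateral coefficients are found by a coarse grid search standing in for the paper's exact coefficient optimization.

        import numpy as np
        from scipy.sparse.linalg import eigsh

        def bilgo_step(X, grad, f):
            """One bilateral update: X <- a*X + b*v@v.T, with v the leading
            eigenvector of the descent direction -grad(X). X is the current
            PSD iterate; grad and f return the gradient matrix and objective."""
            _, V = eigsh(-grad(X), k=1, which='LA')  # leading eigenpair
            R = np.outer(V[:, 0], V[:, 0])           # rank-1 candidate
            # coarse search over the bilateral coefficients (illustrative)
            _, a, b = min(((f(a * X + b * R), a, b)
                           for a in np.linspace(0.0, 1.0, 21)
                           for b in np.linspace(0.0, 1.0, 21)),
                          key=lambda t: t[0])
            return a * X + b * R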

  15. Scale interaction in a mixing layer. The role of the large-scale gradients

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  16. Joint classification and contour extraction of large 3D point clouds

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows one both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10⁹ points.
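
    A direct (unoptimized) sketch of multi-scale neighborhood features: for each point, the eigenvalues of the local covariance at several radii summarize local shape (linear, planar, volumetric). The paper uses downsampled scale pyramids and approximate neighbors for speed; the radii here are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def multiscale_features(points, radii=(0.25, 0.5, 1.0)):
            """Per-point eigenvalue features from spherical neighborhoods
            at several scales (points: (n, 3) array)."""
            tree = cKDTree(points)
            blocks = []
            for r in radii:
                evs = []
                for p in points:
                    idx = tree.query_ball_point(p, r)
                    nbrs = points[idx] - points[idx].mean(axis=0)
                    # eigenvalues of the local covariance describe local shape
                    evs.append(np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx)))
                blocks.append(np.array(evs))
            return np.hstack(blocks)   # (n, 3 * len(radii)) feature matrix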

  17. Vulnerability of the global terrestrial ecosystems to climate change.

    Science.gov (United States)

    Li, Delong; Wu, Shuyao; Liu, Laibao; Zhang, Yatong; Li, Shuangcheng

    2018-05-27

    Climate change has far-reaching impacts on ecosystems. Recent attempts to quantify such impacts focus on measuring exposure to climate change but largely ignore ecosystem resistance and resilience, which may also affect the vulnerability outcomes. In this study, the relative vulnerability of global terrestrial ecosystems to short-term climate variability was assessed by simultaneously integrating exposure, sensitivity, and resilience at a high spatial resolution (0.05°). The results show that vulnerable areas are currently distributed primarily in plains. Responses to climate change vary among ecosystems, and deserts and xeric shrublands are the most vulnerable biomes. Global vulnerability patterns are determined largely by exposure, while ecosystem sensitivity and resilience may exacerbate or alleviate external climate pressures at local scales; there is a highly significant negative correlation between exposure and sensitivity. Globally, 61.31% of the terrestrial vegetated area is capable of mitigating climate change impacts, and those areas are concentrated in polar regions, boreal forests, tropical rainforests, and intact forests. Under current sensitivity and resilience conditions, vulnerable areas are projected to develop in high Northern Hemisphere latitudes in the future. The results suggest that integrating all three aspects of vulnerability (exposure, sensitivity, and resilience) may offer more comprehensive and spatially explicit adaptation strategies to reduce the impacts of climate change on terrestrial ecosystems. This article is protected by copyright. All rights reserved.

  18. Evaluation of biochar powder on oxygen supply efficiency and global warming potential during mainstream large-scale aerobic composting.

    Science.gov (United States)

    He, Xueqin; Chen, Longjian; Han, Lujia; Liu, Ning; Cui, Ruxiu; Yin, Hongjie; Huang, Guangqun

    2017-12-01

    This study investigated the effects of biochar powder on oxygen supply efficiency and global warming potential (GWP) in a large-scale aerobic composting pattern that includes cyclical forced turning with aeration at the bottom of the composting tanks, as practiced in China. A 55-day large-scale aerobic composting experiment was conducted in two groups, without and with 10% biochar powder addition (by weight). The results show that biochar powder improves oxygen retention: the duration for which O₂ > 5% is around 80% of the composting time. The composting process with the above pattern significantly reduces CH₄ and N₂O emissions compared to static or turning-only styles. Given that the average GWP of the biochar (BC) group was 19.82% lower than that of the control (CK) group, rational addition of biochar powder has the potential to reduce the energy consumption of turning, improve the effectiveness of the oxygen supply, and reduce overall greenhouse effects. Copyright © 2017. Published by Elsevier Ltd.

  19. Modelling Soil-Landscapes in Coastal California Hills Using Fine Scale Terrestrial Lidar

    Science.gov (United States)

    Prentice, S.; Bookhagen, B.; Kyriakidis, P. C.; Chadwick, O.

    2013-12-01

    Digital elevation models (DEMs) are the dominant input to spatially explicit digital soil mapping (DSM) efforts due to their increasing availability and the tight coupling between topography and soil variability. Accurate characterization of this coupling is dependent on DEM spatial resolution and soil sampling density, both of which may limit analyses. For example, DEM resolution may be too coarse to accurately reflect scale-dependent soil properties yet downscaling introduces artifactual uncertainty unrelated to deterministic or stochastic soil processes. We tackle these limitations through a DSM effort that couples moderately high density soil sampling with a very fine scale terrestrial lidar dataset (20 cm) implemented in a semiarid rolling hillslope domain where terrain variables change rapidly but smoothly over short distances. Our guiding hypothesis is that in this diffusion-dominated landscape, soil thickness is readily predicted by continuous terrain attributes coupled with catenary hillslope segmentation. We choose soil thickness as our keystone dependent variable for its geomorphic and hydrologic significance, and its tendency to be a primary input to synthetic ecosystem models. In defining catenary hillslope position we adapt a logical rule-set approach that parses common terrain derivatives of curvature and specific catchment area into discrete landform elements (LE). Variograms and curvature-area plots are used to distill domain-scale terrain thresholds from short range order noise characteristic of very fine-scale spatial data. The revealed spatial thresholds are used to condition LE rule-set inputs, rendering a catenary LE map that leverages the robustness of fine-scale terrain data to create a generalized interpretation of soil geomorphic domains. Preliminary regressions show that continuous terrain variables alone (curvature, specific catchment area) only partially explain soil thickness, and only in a subset of soils. For example, at spatial

  20. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  1. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

    Abstract. Background: We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results: The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA) we observed a reduction in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in the Lipocalin and Zinc metallopeptidase families, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions: Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running-time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.

  2. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  3. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    Science.gov (United States)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.

  4. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the coming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Optimization of the plasma parameters for the high current and uniform large-scale pulse arc ion source of the VEST-NBI system

    International Nuclear Information System (INIS)

    Jung, Bongki; Park, Min; Heo, Sung Ryul; Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul

    2016-01-01

    Highlights: • A high power magnetic bucket-type arc plasma source for the VEST NBI system is developed with modifications based on the prototype plasma source for KSTAR. • Plasma parameters during the pulse are measured to characterize the plasma source. • High plasma density and good uniformity are achieved at low operating pressures below 1 Pa. • The required ion beam current density is confirmed by analysis of the plasma parameters and the results of a particle balance model. - Abstract: A large-scale hydrogen arc plasma source was developed at the Korea Atomic Energy Research Institute for the high power pulsed NBI system of VEST, a compact spherical tokamak at Seoul National University. One of the research targets of VEST is to study innovative tokamak operating scenarios. For this purpose, a high-current-density, uniform, large-scale pulse plasma source is required to deliver the target ion beam power efficiently. Therefore, the plasma parameters of the ion source, such as the electron density, temperature, and plasma uniformity, are optimized by changing the operating conditions of the plasma source. Furthermore, the ion species of the hydrogen plasma source are analyzed using a particle balance model to increase the monatomic fraction, which is another essential parameter for increasing the ion beam current density. In conclusion, efficient operating conditions are presented based on the optimized plasma parameters, and the extractable ion beam current is calculated.

  6. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.

    Science.gov (United States)

    Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine

    2015-03-15

    Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of the Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability-the software is capable of normalizing Hi-C data of any size in reasonable times; (ii) memory efficiency-the sequential version can run on any single computer with very limited memory, no matter how little; (iii) fast speed-the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
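
    The core of such a normalization is an iterative matrix-balancing loop; below is a serial, in-memory sketch of the idea (an ICE-style correction), whereas Hi-Corrector's contribution is making this computation work in limited memory and in parallel via MPI.

        import numpy as np

        def balance(M, n_iter=50):
            """Iteratively rescale a symmetric contact matrix toward equal
            row sums ("equal visibility"); returns matrix and bias vector."""
            M = M.astype(float).copy()
            bias = np.ones(M.shape[0])
            for _ in range(n_iter):
                s = M.sum(axis=1)
                s /= s[s > 0].mean()     # normalize around the mean coverage
                s[s == 0] = 1.0          # leave empty rows untouched
                M /= np.outer(s, s)
                bias *= s
            return M, bias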

  7. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL

  8. Probing high scale physics with top quarks at the Large Hadron Collider

    Science.gov (United States)

    Dong, Zhe

    With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments and to understand the origin of these deviations. Being the heaviest elementary particle observed so far, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high scale physics at the LHC. Therefore, we concentrate on studying high scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances for a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators of minimal mass-dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.

  9. TURBULENCE-GENERATED PROTON-SCALE STRUCTURES IN THE TERRESTRIAL MAGNETOSHEATH

    Energy Technology Data Exchange (ETDEWEB)

    Vörös, Zoltán; Narita, Yasuhito [Space Research Institute, Austrian Academy of Sciences, Graz (Austria); Yordanova, Emiliya [Swedish Institute of Space Physics, Uppsala (Sweden); Echim, Marius M. [Belgian Institute for Space Aeronomy, Bruxelles (Belgium); Consolini, Giuseppe, E-mail: zoltan.voeroes@oeaw.ac.at [INAF-Istituto di Astrofisica e Planetologia Spaziali, Roma (Italy)

    2016-03-01

    Recent results of numerical magnetohydrodynamic simulations suggest that in collisionless space plasmas, turbulence can spontaneously generate thin current sheets. These coherent structures can partially explain the intermittency and the non-homogenous distribution of localized plasma heating in turbulence. In this Letter, Cluster multi-point observations are used to investigate the distribution of magnetic field discontinuities and the associated small-scale current sheets in the terrestrial magnetosheath downstream of a quasi-parallel bow shock. It is shown experimentally, for the first time, that the strongest turbulence-generated current sheets occupy the long tails of probability distribution functions associated with extremal values of magnetic field partial derivatives. During the analyzed one-hour time interval, about a hundred strong discontinuities, possibly proton-scale current sheets, were observed.

  10. High efficiency thin-film solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Schock, Hans-Werner [Helmholtz Zentrum Berlin (Germany). Solar Energy

    2012-11-01

    Production of photovoltaics is growing worldwide on a gigawatt scale. Among the thin film technologies, Cu(In,Ga)(S,Se)₂ (CIS or CIGS) based solar cells have attracted more and more attention. This paper aims to analyze the success of CIGS based solar cells and the potential of this technology for future large-scale photovoltaics production. Specific material properties make CIS unique and allow the preparation of the material with a wide range of processing options. The huge potential lies in the possibility of taking advantage of modern thin film processing equipment and combining it with the very high efficiencies beyond 20% already achieved on the laboratory scale. A sustainable development of this technology could be realized by modifying the materials and replacing indium with abundant elements. (orig.)

  11. MicroEcos: Micro-Scale Explorations of Large-Scale Late Pleistocene Ecosystems

    Science.gov (United States)

    Gellis, B. S.

    2017-12-01

    Pollen data can inform the reconstruction of early-floral environments by providing the basis for artistic representations of what early terrestrial ecosystems looked like and how existing terrestrial landscapes have evolved. For example, what did the Bighorn Basin look like when large ice sheets covered modern Canada, the Yellowstone Plateau had an ice cap, and the Bighorn Mountains were mantled with alpine glaciers? MicroEcos is an immersive, multimedia project that aims to strengthen human-nature connections through the understanding and appreciation of biological ecosystems. The collected pollen data elucidate flora that are visible in the fossil record, associated with the Late Pleistocene, and that have been illustrated and described in the botanical literature. The project aims to make scientific data accessible and interesting to all audiences through a series of interactive digital sculptures, large-scale photography and field-based videography. While it is driven by scientific data, it is rooted in deeply artistic and outreach-based practices, including digital design, illustration, photography, video and sound design. Using 3D modeling and printing technology, MicroEcos centers around a series of 3D-printed models of the Last Canyon rock shelter on the Wyoming and Montana border, the Little Windy Hill pond site in Wyoming's Medicine Bow National Forest, and the Natural Trap Cave site in Wyoming's Big Horn Basin. These digital, interactive 3D sculptures provide audiences with glimpses of three-dimensional Late-Pleistocene environments and help create a dialogue about how grass-, sagebrush-, and spruce-based ecosystems form. To help audiences better contextualize how MicroEcos bridges notions of time, space, and place, modern photography and videography of the Last Canyon, Little Windy Hill and Natural Trap Cave sites surround these 3D digital reconstructions.

  12. Large-scale solvothermal synthesis of fluorescent carbon nanoparticles

    International Nuclear Information System (INIS)

    Ku, Kahoe; Park, Jinwoo; Kim, Nayon; Kim, Woong; Lee, Seung-Wook; Chung, Haegeun; Han, Chi-Hwan

    2014-01-01

    The large-scale production of high-quality carbon nanomaterials is highly desirable for a variety of applications. We demonstrate a novel synthetic route to the production of fluorescent carbon nanoparticles (CNPs) in large quantities via a single-step reaction. The simple heating of a mixture of benzaldehyde, ethanol and graphite oxide (GO) with residual sulfuric acid in an autoclave produced 7 g of CNPs with a quantum yield of 20%. The CNPs can be dispersed in various organic solvents; hence, they are easily incorporated into polymer composites in forms such as nanofibers and thin films. Additionally, we observed that the GO present during the CNP synthesis was reduced. The reduced GO (RGO) was sufficiently conductive (σ ≈ 282 S m⁻¹) such that it could be used as an electrode material in a supercapacitor; in addition, it can provide excellent capacitive behavior and high-rate capability. This work will contribute greatly to the development of efficient synthetic routes to diverse carbon nanomaterials, including CNPs and RGO, that are suitable for a wide range of applications. (paper)

  13. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    Science.gov (United States)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
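
    A sketch of the progressive-sampling idea under assumed parameters: train networks of growing capacity on growing chunks, average them into an ensemble, and stop when held-out error no longer improves. The chunk sizes, network widths, and stopping tolerance are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def progressive_ensemble(X, y, X_val, y_val, chunk0=500, tol=1e-3,
                                 max_models=8):
            """Ensemble built by progressive sampling of a data stream."""
            models, last_err, n, width = [], np.inf, chunk0, 4
            for _ in range(max_models):
                net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=500)
                net.fit(X[:n], y[:n])            # train on the current chunk
                models.append(net)
                pred = np.mean([m.predict(X_val) for m in models], axis=0)
                err = np.mean((pred - y_val) ** 2)
                if last_err - err < tol:         # no further improvement
                    break
                last_err, n, width = err, 2 * n, 2 * width
            return models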

  14. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
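
    A one-dimensional numerical sketch of the full-scale approximation: a reduced-rank part built from a small set of knots captures the large-scale dependence, and a tapered residual covariance restores the small-scale, local variation. The exponential covariance, knot count, and taper width are illustrative choices.

        import numpy as np

        def exp_cov(D, rng):                 # exponential covariance function
            return np.exp(-D / rng)

        def taper(D, w):                     # spherical taper: compact support
            return np.where(D < w, (1 - D / w) ** 2 * (1 + D / (2 * w)), 0.0)

        x = np.linspace(0, 10, 400)[:, None]       # observation sites
        knots = np.linspace(0, 10, 30)[:, None]    # knots for the low-rank part

        C    = exp_cov(np.abs(x - x.T), 2.0)           # exact covariance
        Cxk  = exp_cov(np.abs(x - knots.T), 2.0)
        Ckk  = exp_cov(np.abs(knots - knots.T), 2.0)
        C_lr = Cxk @ np.linalg.solve(Ckk, Cxk.T)       # reduced-rank part

        # full-scale approximation: low rank + tapered (sparse) residual
        C_fs = C_lr + (C - C_lr) * taper(np.abs(x - x.T), 1.0)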

  16. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present day missions. Existing missions, such as OCO-2, may also require short turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences in deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  17. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    Science.gov (United States)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; et al.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approximately 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approximately 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  18. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    NARCIS (Netherlands)

    Winands, G.J.J.; Liu, Zhen; Pemen, A.J.M.; Heesch, van E.J.M.; Yan, K.; Veldhuizen, van E.M.

    2006-01-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of

  19. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for the application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large-scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly defined by two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article will examine these HV pulse technologies and discuss their applicability to the specific PSII process. Typical systems that will be reviewed include high power solid state systems, hard tube systems such as crossed-field "hollow beam" switch tubes and planar tetrodes, and "soft" tube systems with crossatrons and thyratrons. Results will be tabulated and suggestions provided for a particular PSII process.

  20. Review of status developments of high-efficiency crystalline silicon solar cells

    Science.gov (United States)

    Liu, Jingjing; Yao, Yao; Xiao, Shaoqing; Gu, Xiaofeng

    2018-03-01

    In order to further improve cell efficiency and reduce cost in achieving grid parity, a large number of PV manufacturing companies, universities and research institutes have devoted themselves to a variety of low-cost and high-efficiency crystalline Si solar cells. In this article, the cell structures, characteristics and efficiency progress of several types of high-efficiency crystalline Si solar cells that are in small-scale production or are promising for mass production are presented, including the passivated emitter rear cell, the tunnel oxide passivated contact solar cell, the interdigitated back contact cell, the heterojunction with intrinsic thin-layer cell, and heterojunction solar cells with interdigitated back contacts. Both the industrialization status and the future development trend of high-efficiency crystalline silicon solar cells are also pinpointed.

  1. Global terrestrial biogeochemistry: Perturbations, interactions, and time scales

    Energy Technology Data Exchange (ETDEWEB)

    Braswell, B.H. Jr.

    1996-12-01

    Global biogeochemical processes are being perturbed by human activity, principally that which is associated with industrial activity and expansion of urban and agricultural complexes. Perturbations have manifested themselves at least since the beginning of the 19th Century, and include emissions of CO₂ and other pollutants from fossil fuel combustion, agricultural emissions of reactive nitrogen, and direct disruption of ecosystem function through land conversion. These perturbations yield local impacts, but there are also global consequences that are the sum of local-scale influences. Several approaches to understanding the global-scale implications of chemical perturbations to the Earth system are discussed. The lifetime of anthropogenic CO₂ in the atmosphere is an important concept for understanding the current and future commitment to an altered atmospheric heat budget. The importance of the terrestrial biogeochemistry relative to the lifetime of excess CO₂ is demonstrated using dynamic, aggregated models of the global carbon cycle.
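
    As a toy illustration of the "dynamic, aggregated model" idea, the sketch below integrates a single-box linear perturbation model in which excess atmospheric carbon decays through net uptake; every coefficient is an assumption for illustration, not a value from this work.

        # single-box perturbation model of excess atmospheric carbon (illustrative)
        A0 = 590.0          # assumed preindustrial atmospheric stock, GtC
        k = 1 / 50.0        # assumed effective uptake rate, 1/yr
        E = 8.0             # assumed constant anthropogenic emissions, GtC/yr
        dt, T = 0.1, 300    # time step and horizon, yr

        A = A0
        for _ in range(int(T / dt)):
            uptake = k * (A - A0)      # linear response of the combined sinks
            A += (E - uptake) * dt     # forward Euler step

        print(f"excess atmospheric carbon after {T} yr: {A - A0:.0f} GtC")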

  2. Energy Analysis of Cascade Heating with High Back-Pressure Large-Scale Steam Turbine

    Directory of Open Access Journals (Sweden)

    Zhihua Ge

    2018-01-01

    To reduce the exergy loss caused by the high-grade extraction steam in the traditional heating mode of a combined heat and power (CHP) generating unit, a high back-pressure cascade heating technology for two jointly constructed large-scale steam turbine power generating units is proposed. Unit 1 makes full use of the exhaust steam heat from the high back-pressure turbine, and Unit 2 uses the original heating mode of extraction steam condensation, which significantly reduces the flow rate of high-grade extraction steam. Typical 2 × 350 MW supercritical CHP units in northern China were selected as the study object. The boundary conditions for heating were determined based on the actual climatic conditions and heating demands. A model to analyze the performance of the high back-pressure cascade heating units under off-design operating conditions was developed. The load distributions between high back-pressure exhaust steam direct supply and extraction steam heating supply were described under various conditions, based on which the heating efficiency of the CHP units with the high back-pressure cascade heating system was analyzed. The design heating load and maximum heating supply load were determined as well. The results indicate that the average coal consumption rate during the heating season is 205.46 g/kWh for the design heating load after the retrofit, which is about 51.99 g/kWh lower than that of the traditional heating mode. A coal consumption rate of 199.07 g/kWh can be achieved for the maximum heating load. Significant energy savings and CO₂ emission reductions are obtained.

  3. A solvent- and vacuum-free route to large-area perovskite films for efficient solar modules

    Science.gov (United States)

    Chen, Han; Ye, Fei; Tang, Wentao; He, Jinjin; Yin, Maoshu; Wang, Yanbo; Xie, Fengxian; Bi, Enbing; Yang, Xudong; Grätzel, Michael; Han, Liyuan

    2017-10-01

    Recent advances in the use of organic-inorganic hybrid perovskites for optoelectronics have been rapid, with reported power conversion efficiencies of up to 22 per cent for perovskite solar cells. Improvements in stability have also enabled testing over a timescale of thousands of hours. However, large-scale deployment of such cells will also require the ability to produce large-area, uniformly high-quality perovskite films. A key challenge is to overcome the substantial reduction in power conversion efficiency when a small device is scaled up: a reduction from over 20 per cent to about 10 per cent is found when a common aperture area of about 0.1 square centimetres is increased to more than 25 square centimetres. Here we report a new deposition route for methyl ammonium lead halide perovskite films that does not rely on use of a common solvent or vacuum: rather, it relies on the rapid conversion of amine complex precursors to perovskite films, followed by a pressure application step. The deposited perovskite films were free of pin-holes and highly uniform. Importantly, the new deposition approach can be performed in air at low temperatures, facilitating fabrication of large-area perovskite devices. We reached a certified power conversion efficiency of 12.1 per cent with an aperture area of 36.1 square centimetres for a mesoporous TiO2-based perovskite solar module architecture.

  4. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small-scale or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
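
    To make the l1-formulation concrete, here is a minimal sparse-deconvolution sketch that solves min 0.5*||H f - y||^2 + lam*||f||_1 with the iterative shrinkage-thresholding algorithm (ISTA), a deliberately simple stand-in for the paper's primal-dual interior point solver; H is an assumed convolution (transfer) matrix.

        import numpy as np

        def ista_deconv(H, y, lam=0.1, n_iter=500):
            """Sparse deconvolution: min 0.5*||H f - y||^2 + lam*||f||_1
            via ISTA (a simple proxy for the paper's PDIPM solver)."""
            f = np.zeros(H.shape[1])
            step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / Lipschitz constant
            for _ in range(n_iter):
                g = f - step * (H.T @ (H @ f - y))   # gradient step
                # soft threshold promotes a sparse (impact-like) solution
                f = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
            return f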

  5. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than small-scale cultivation does. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production would imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright

  6. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the solving efficiency and solution quality of the sub-problems, a detection method for multi-bottleneck machines based on the critical path is proposed, by which the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. Following the principle in the Theory of Constraints (TOC) that the bottleneck leads the performance of the whole manufacturing system, the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of constructing the sub-problems, some operations from the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: This research makes some assumptions that reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there are no machine breakdowns, and no preemption of operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the large-scale JSP.
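
    The abstract's split between GA-scheduled bottleneck operations and rule-scheduled non-bottleneck operations can be illustrated on the rule side alone. Below is a hedged sketch of a shortest-processing-time (SPT) dispatching rule for a toy job shop; the job routes, machine names and data layout are invented for illustration and are not from the paper:

```python
from collections import defaultdict

def spt_dispatch(routes):
    """routes: {job: [(machine, proc_time), ...]} in technological order.
    Greedy SPT dispatching; returns {(job, op_index): (start, end)}."""
    machine_free = defaultdict(float)   # earliest free time per machine
    job_free = defaultdict(float)       # earliest start of each job's next op
    next_op = {j: 0 for j in routes}    # index of next unscheduled op per job
    schedule = {}
    while any(next_op[j] < len(routes[j]) for j in routes):
        # among jobs with work left, pick the ready op with shortest processing time
        candidates = [(routes[j][next_op[j]][1], j) for j in routes
                      if next_op[j] < len(routes[j])]
        _, j = min(candidates)
        machine, p = routes[j][next_op[j]]
        start = max(machine_free[machine], job_free[j])
        schedule[(j, next_op[j])] = (start, start + p)
        machine_free[machine] = job_free[j] = start + p
        next_op[j] += 1
    return schedule

print(spt_dispatch({"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 2), ("M1", 4)]}))
```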

  7. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.
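
    As a back-of-the-envelope illustration of the kind of bookkeeping behind a CAES thermal-efficiency figure, the sketch below computes a round-trip efficiency from assumed figures (not the paper's): electricity recovered by the expander over everything paid in, i.e. compression electricity plus fuel heat for reheating the air before expansion.

```python
# Idealized CAES round-trip bookkeeping with assumed figures (not from the paper)
w_comp = 350.0   # kWh of electricity to compress air into storage
q_fuel = 420.0   # kWh of fuel heat added before expansion
w_turb = 500.0   # kWh of electricity recovered by the turbine/generator
print(f"round-trip efficiency: {w_turb / (w_comp + q_fuel):.1%}")
```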

  8. The effect of short ground vegetation on terrestrial laser scans at a local scale

    Science.gov (United States)

    Fan, Lei; Powrie, William; Smethurst, Joel; Atkinson, Peter M.; Einstein, Herbert

    2014-09-01

    Terrestrial laser scanning (TLS) can record a large amount of topographical information with high spatial accuracy over a relatively short period of time. These features suggest it is a useful tool for topographical survey and surface deformation detection. However, using TLS to survey a terrain surface is still challenging in the presence of dense ground vegetation, as the bare ground surface may not be illuminated due to signal occlusion caused by the vegetation. This paper investigates vegetation-induced elevation error in TLS surveys at a local scale and its spatial pattern. An open, relatively flat area vegetated with dense grass was surveyed repeatedly under several scan conditions. A total station was used to establish an accurate representation of the bare ground surface. Local-highest-point and local-lowest-point filters were applied to the acquired point clouds to derive vegetation height and vegetation-induced elevation error, respectively. The effects of various factors (for example, vegetation height, edge effects, incidence angle, scan resolution and location) on the error caused by vegetation are discussed. The results are of use in the planning and interpretation of TLS surveys of vegetated areas.
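
    A minimal sketch of a local-lowest-point filter of the kind mentioned above, assuming the point cloud is an (N, 3) NumPy array and using a hypothetical grid cell size; real TLS workflows use more elaborate ground filters:

```python
import numpy as np

def lowest_point_filter(points, cell=0.25):
    """Keep the lowest point (minimum z) per square grid cell of size `cell`
    (a simple ground filter for grass-covered TLS point clouds)."""
    ij = np.floor(points[:, :2] / cell).astype(int)   # cell index of each point
    lowest = {}
    for k, key in enumerate(map(tuple, ij)):
        if key not in lowest or points[k, 2] < points[lowest[key], 2]:
            lowest[key] = k
    return points[sorted(lowest.values())]

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3)) * [10.0, 10.0, 0.4]   # synthetic grassy patch
ground = lowest_point_filter(cloud)
print(ground.shape)
```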

  9. Large Scale Visual Recommendations From Street Fashion Images

    OpenAIRE

    Jagadeesh, Vignesh; Piramuthu, Robinson; Bhardwaj, Anurag; Di, Wei; Sundaresan, Neel

    2014-01-01

    We describe a completely automated large-scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data-driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain LDA for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experiments...

  10. Large-scale application of highly-diluted bacteria for Leptospirosis epidemic control.

    Science.gov (United States)

    Bracho, Gustavo; Varela, Enrique; Fernández, Rolando; Ordaz, Barbara; Marzoa, Natalia; Menéndez, Jorge; García, Luis; Gilling, Esperanza; Leyva, Richard; Rufín, Reynaldo; de la Torre, Rubén; Solis, Rosa L; Batista, Niurka; Borrero, Reinier; Campa, Concepción

    2010-07-01

    Leptospirosis is a zoonotic disease of major importance in the tropics, where incidence peaks in rainy seasons. Natural disasters represent a big challenge to Leptospirosis prevention strategies, especially in endemic regions. Vaccination is an effective option but is of reduced effectiveness in emergency situations. Homeoprophylactic interventions might help to control epidemics by using highly diluted pathogens to induce protection on a short time scale. We report the results of a very large-scale homeoprophylaxis (HP) intervention against Leptospirosis in a dangerous epidemic situation in three provinces of Cuba in 2007. Forecast models were used to estimate possible trends of disease incidence. A homeoprophylactic formulation was prepared from dilutions of four circulating strains of Leptospirosis and administered orally to 2.3 million persons at high risk in an epidemic in a region affected by natural disasters. Surveillance data were used to measure the impact of the intervention by comparison with historical trends and non-intervention regions. After the homeoprophylactic intervention, a significant decrease in disease incidence was observed in the intervention regions; no such change was observed in non-intervention regions. In the intervention region the incidence of Leptospirosis fell below the historic median, an observation that was independent of rainfall. The homeoprophylactic approach was associated with a large reduction of disease incidence and control of the epidemic. The results suggest the use of HP as a feasible tool for epidemic control; further research is warranted.

  11. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason and Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably: the Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
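
    The Nash-Sutcliffe efficiency quoted above has a simple closed form; here is a small sketch of it (the standard definition, not code from the study), with invented discharge values as the usage example:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / Var(obs); 1 is a perfect fit,
    0 means the model is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

observed = [120.0, 340.0, 560.0, 410.0, 200.0]   # e.g. discharge in m3/s
simulated = [130.0, 320.0, 540.0, 430.0, 190.0]
print(round(nash_sutcliffe(observed, simulated), 2))
```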

  12. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  13. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of the computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behaviour when run on a scalar machine. 3) Vector processing efficiency is measured for five other nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  14. Are large farms more efficient? Tenure security, farm size and farm efficiency: evidence from northeast China

    Science.gov (United States)

    Zhou, Yuepeng; Ma, Xianlei; Shi, Xiaoping

    2017-04-01

    How to increase production efficiency, guarantee grain security, and increase farmers' income using the limited farmland is a great challenge that China is facing. Although theory predicts that secure property rights and moderate-scale management of farmland can increase land productivity, reduce farm-related costs, and raise farmers' income, empirical studies on the size and magnitude of these effects are scarce. A number of studies have examined the impacts of land tenure or farm size on productivity or efficiency, respectively, and a few studies link farm size, land tenure and efficiency together. However, to our best knowledge, no studies consider tenure security and farm efficiency together across different farm scales in China, and few analyze the profit frontier. In this study, we focus on the impacts of land tenure security and farm size on farm profit efficiency, using farm-level data collected from 811 households in 23 villages in Liaoning in 2015. Seven farm scales were identified to represent small, median, moderate-scale, and large farms. Technical efficiency is analyzed with a stochastic frontier production function, and profit efficiency is regressed on a set of explanatory variables that includes farm-size dummies, land tenure security indexes, and household characteristics. We found that: 1) the technical efficiency scores for production efficiency (average score = 0.998) indicate that production is already very close to the frontier, so there is little room to improve production efficiency; there is larger scope to raise profit efficiency (average score = 0.768) by investing more in farm size expansion, seed, hired labor, pesticide, and irrigation. 2) Farms of 50-80 mu are most efficient from the viewpoint of profit efficiency; the so-called moderate-scale farms (100-150 mu) according to the governmental guideline show no

  15. Carbon dioxide efficiency of terrestrial enhanced weathering

    OpenAIRE

    Moosdorf, Nils; Renforth, Philip; Hartmann, Jens

    2014-01-01

    Terrestrial enhanced weathering, the spreading of ultramafic silicate rock flour to enhance natural weathering rates, has been suggested as part of a strategy to reduce global atmospheric CO2 levels. We budget potential CO2 sequestration against associated CO2 emissions to assess the net CO2 removal of terrestrial enhanced weathering. We combine global spatial data sets of potential source rocks, transport networks, and application areas with associated CO2 emissions in optimistic and pessimistic...

  16. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information about the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential for tackling large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent, and we implement it within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can reduce solution times because it reduces the amount of convexification needed.
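
    A sketch of the curvature-test idea under simplifying assumptions: rather than factorizing the KKT matrix to count negative eigenvalues (its inertia), one checks the curvature d^T W d along the computed step d and adds a regularization delta*I until the test passes. The constants and loop structure here are illustrative, not the paper's algorithm:

```python
import numpy as np

def convexification_delta(W, d, delta0=1e-4, factor=10.0):
    """Return a regularization delta such that the step d has positive
    curvature on W + delta*I, found without any matrix factorization."""
    delta = 0.0
    while d @ (W @ d) + delta * (d @ d) <= 0.0:
        delta = delta0 if delta == 0.0 else delta * factor
    return delta

W = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite Hessian of a toy problem
d = np.array([0.1, 1.0])                  # computed search step
print(convexification_delta(W, d))        # delta to add before recomputing the step
```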

  17. Evaluating the effects of future climate change and elevated CO2 on the water use efficiency in terrestrial ecosystems of China

    Science.gov (United States)

    Zhu, Q.; Jiang, H.; Peng, C.; Liu, J.; Wei, X.; Fang, X.; Liu, S.; Zhou, G.; Yu, S.

    2011-01-01

    Water use efficiency (WUE) is an important variable in climate change and hydrological studies because it links ecosystem carbon cycles and hydrological cycles together. However, obtaining reliable WUE results based on site-level flux data remains a great challenge when scaling up to larger regions. Biophysical, process-based ecosystem models are powerful tools to study WUE at large spatial and temporal scales. The Integrated BIosphere Simulator (IBIS) was used to evaluate the effects of climate change and elevated CO2 concentrations on ecosystem-level WUE (defined as the ratio of gross primary production (GPP) to evapotranspiration (ET)) for terrestrial ecosystems in China for 2009-2099. Climate scenario data (IPCC SRES A2 and SRES B1) generated from the Third Generation Coupled Global Climate Model (CGCM3) were used in the simulations. Seven simulations were implemented, combining different elevated-CO2 scenarios and different climate change scenarios. The analysis suggests that (1) further elevated CO2 concentrations will significantly enhance WUE over China by the end of the twenty-first century, especially in forest areas; (2) the effects of climate change on WUE will vary across geographical regions of China, with negative effects occurring primarily in southern regions and positive effects primarily in high-latitude and high-altitude regions (Tibetan Plateau); (3) WUE will maintain current levels for 2009-2099 under the constant climate scenario (i.e. using the mean climate conditions of 1951-2006 and the CO2 concentration of the 2008 level); and (4) WUE will decrease as the water resource restriction (expressed as the evaporation ratio) increases among different ecosystems.

  18. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with dimensions up to 100,000 variables.
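
    For orientation, the sketch below applies the Hager-Zhang direction update to a smooth toy quadratic, with a fixed small step size in place of a proper line search; the papers above treat the much harder nonsmooth convex case, which this sketch does not address:

```python
import numpy as np

def hz_cg(grad, x, iters=2000, tol=1e-8, alpha=1e-2):
    """Conjugate gradient loop with the Hager-Zhang beta; fixed small step
    in place of a proper line search (illustrative only)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy   # HZ formula
        x, g = x_new, g_new
        d = -g + beta * d
    return x

# Minimize the smooth quadratic 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(hz_cg(lambda x: A @ x - b, np.zeros(2)))   # approaches solve(A, b)
```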

  19. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with dimensions up to 100,000 variables.

  20. Change in terrestrial ecosystem water-use efficiency over the last three decades.

    Science.gov (United States)

    Huang, Mengtian; Piao, Shilong; Sun, Yan; Ciais, Philippe; Cheng, Lei; Mao, Jiafu; Poulter, Ben; Shi, Xiaoying; Zeng, Zhenzhong; Wang, Yingping

    2015-06-01

    Defined as the ratio between gross primary productivity (GPP) and evapotranspiration (ET), ecosystem-scale water-use efficiency (EWUE) is an indicator of the adjustment of vegetation photosynthesis to water loss. The processes controlling EWUE are complex and reflect both a slow evolution of plants and plant communities and fast adjustments of ecosystem functioning to changes in limiting resources. In this study, we investigated EWUE trends from 1982 to 2008 using data-driven models derived from satellite observations and process-oriented carbon cycle models. Our findings suggest positive EWUE trends of 0.0056, 0.0007 and 0.0001 g C m-2 mm-1 yr-1 under the single effects of rising CO2 ('CO2'), climate change ('CLIM') and nitrogen deposition ('NDEP'), respectively. Global patterns of EWUE trends under the different scenarios suggest that (i) EWUE-CO2 increases globally, (ii) EWUE-CLIM increases mainly at high latitudes and decreases at middle and low latitudes, and (iii) EWUE-NDEP displays slight increasing trends except in west Siberia, eastern Europe, parts of North America and central Amazonia. The data-driven MTE model, however, shows a slight decline of EWUE during the same period (-0.0005 g C m-2 mm-1 yr-1), which differs from the process-model simulations with all drivers taken into account (0.0064 g C m-2 mm-1 yr-1). We attribute this discrepancy to the fact that the physiological effects of elevated CO2, which reduce stomatal conductance and transpiration (TR), are not modeled in the MTE model. Partial correlation analysis between EWUE and climate drivers shows similar responses to climatic variables in the data-driven model and the process-oriented models across different ecosystems. Changes in water-use efficiency defined from transpiration-based WUEt (GPP/TR) and inherent water-use efficiency (IWUEt, GPP x VPD/TR) in response to rising CO2, climate change, and nitrogen deposition are also discussed. Our analyses will
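
    The EWUE trends discussed above are slopes of annual EWUE = GPP/ET series; a minimal sketch with synthetic numbers standing in for model or satellite-derived output:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1982, 2009)
gpp = 1200.0 + 2.0 * (years - 1982) + rng.normal(0, 10, years.size)  # g C m-2 yr-1
et = 600.0 + rng.normal(0, 5, years.size)                            # mm yr-1
ewue = gpp / et                                                      # g C m-2 mm-1
slope = np.polyfit(years, ewue, 1)[0]   # least-squares trend of the annual series
print(f"EWUE trend: {slope:.4f} g C m-2 mm-1 yr-1")
```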

  1. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    Science.gov (United States)

    Tal-Ezer, Hillel

    2016-05-19

    Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of a molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of the Hilbert space is huge, while practically only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or eigs in MATLAB) but significantly simpler.
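
    Since the paper notes EvArnoldi is mathematically equivalent to ARPACK, a practical stand-in for experimentation is SciPy's ARPACK wrapper; the sketch below extracts a few extreme eigenvalues of a large sparse symmetric matrix (the matrix itself is an arbitrary stand-in for a Hamiltonian):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse symmetric stand-in "Hamiltonian": 1-D Laplacian plus a random diagonal
n = 10_000
rng = np.random.default_rng(0)
H = diags([np.full(n - 1, -1.0), 2.0 + rng.random(n), np.full(n - 1, -1.0)],
          offsets=[-1, 0, 1], format="csc")
# Only a small subset of eigenvalues is needed, as in electronic-structure work;
# shift-invert around 0 targets the smallest ones (ARPACK under the hood)
vals, vecs = eigsh(H, k=6, sigma=0.0, which="LM")
print(np.sort(vals))
```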

  2. Holography as a highly efficient RG flow I: Rephrasing gravity

    OpenAIRE

    Behr, Nicolas; Kuperstein, Stanislav; Mukhopadhyay, Ayan

    2015-01-01

    We investigate how the holographic correspondence can be reformulated as a generalisation of Wilsonian RG flow in a strongly interacting large-N quantum field theory. We first define a 'highly efficient RG flow' as one in which the Ward identities related to local conservation of energy, momentum and charges preserve the same form at each scale; to achieve this it is necessary to redefine the background metric and external sources at each scale as functionals of the effective sin...

  3. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
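
    The amplitude-modulation diagnostic described above can be sketched as follows: low-pass filter the velocity signal to split scales, take the Hilbert envelope of the small-scale part, and correlate its large-scale content with the large-scale signal. The cutoff frequency and the synthetic test signal are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def am_coefficient(u, fs, fc):
    """Correlate the large-scale part of a signal with the low-passed
    Hilbert envelope of its small-scale part (amplitude-modulation diagnostic)."""
    b, a = butter(4, fc / (fs / 2.0))        # low-pass defining the scale separation
    u_large = filtfilt(b, a, u)
    envelope = np.abs(hilbert(u - u_large))  # small-scale amplitude envelope
    return np.corrcoef(u_large, filtfilt(b, a, envelope))[0, 1]

fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 3 * t)             # "large-scale" motion
u = slow + (1.0 + 0.5 * slow) * np.sin(2 * np.pi * 500 * t)  # modulated small scales
print(am_coefficient(u, fs, fc=50.0))        # close to +1 for this test signal
```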

  4. Nitrogen-Related Constraints of Carbon Uptake by Large-Scale Forest Expansion: Simulation Study for Climate Change and Management Scenarios

    Science.gov (United States)

    Kracher, Daniela

    2017-11-01

    Increasing forest area has the potential to increase the terrestrial carbon (C) sink. However, the efficiency of C sequestration depends on the availability of nutrients such as nitrogen (N), which is affected by climatic conditions and management practices. In this study, I analyze how N limitation affects the C sequestration of afforestation and how it is influenced by individual climate variables, increased harvest, and fertilizer application. To this end, JSBACH, the land component of the Earth system model of the Max Planck Institute for Meteorology, is applied in idealized simulation experiments. In those simulations, large-scale afforestation increases the terrestrial C sink in the 21st century by around 100 Pg C compared with a business-as-usual land-use scenario; N limitation reduces C sequestration by roughly the same amount. The simulations make clear the relevance of the compensating effects of carbon dioxide uptake by plant productivity and release by soil decomposition. N limitation of both fluxes compensates particularly in the tropics. Increased mineralization under global warming promotes forest expansion, which is otherwise restricted by N availability. Owing to the compensation between higher plant productivity and higher soil respiration, the global net effect of warming on C sequestration is, however, rather small. Fertilizer application and increased harvest enhance C sequestration as well as boreal expansion. The additional C sequestration achieved by fertilizer application is offset in large part by additional emissions of nitrous oxide.

  5. Nanofluidic crystal: a facile, high-efficiency and high-power-density scaling up scheme for energy harvesting based on nanofluidic reverse electrodialysis

    International Nuclear Information System (INIS)

    Ouyang Wei; Wang Wei; Zhang Haixia; Wu Wengang; Li Zhihong

    2013-01-01

    The great advances in nanotechnology call for advances in miniaturized power sources for micro/nano-scale systems. Nanofluidic channels have received great attention as promising high-power-density substitutes for ion exchange membranes for use in energy harvesting from ambient ionic concentration gradients, namely reverse electrodialysis. This paper proposes the nanofluidic crystal (NFC), nanoparticles packed in a micrometer-sized confined space, as a facile, high-efficiency and high-power-density scaling-up scheme for energy harvesting by nanofluidic reverse electrodialysis (NRED). Obtained from the self-assembly of nanoparticles in a micropore, the NFC forms an ion-selective network with enormous numbers of nanochannels due to electrical double-layer overlap in the nanoparticle interstices. As a proof-of-concept demonstration, a maximum efficiency of 42.3 ± 1.84%, a maximum power density of 2.82 ± 0.22 W m-2, and a maximum output power of 1.17 ± 0.09 nW per unit (nearly three orders of magnitude of amplification compared with other NREDs) were achieved in our prototype cell, which was prepared within 30 min. The NFC-based prototype cell can be parallelized and cascaded to achieve the desired output power and open circuit voltage. This NFC-based scaling-up scheme for energy harvesting based on NRED is promising for building self-powered micro/nano-scale systems. (paper)

  6. Highly efficient and large-scale fabrication of superhydrophobic alumina surface with strong stability based on self-congregated alumina nanowires.

    Science.gov (United States)

    Peng, Shan; Tian, Dong; Yang, Xiaojun; Deng, Wenli

    2014-04-09

    In this study, a large-area superhydrophobic alumina surface with a series of superior properties was fabricated via an economical, simple, and highly effective one-step anodization process and subsequently modified with a low-surface-energy film. The effects of the anodization parameters, including anodization time, current density, and electrolyte temperature, on surface morphology and wettability were investigated in detail. The hierarchical alumina pyramids-on-pores (HAPOP) rough structure, produced quickly through the one-step anodization process, together with deposition of a low-surface-energy film [1H,1H,2H,2H-perfluorodecyltriethoxysilane (PDES) or stearic acid (STA)], confers excellent superhydrophobicity and an extremely low sliding angle. Both the PDES-modified superhydrophobic (PDES-MS) and the STA-modified superhydrophobic (STA-MS) surfaces present fascinating nonwetting and extremely slippery behaviors. The chemical stability and mechanical durability of the PDES-MS and STA-MS surfaces were evaluated and discussed. Compared with the STA-MS surface, the as-prepared PDES-MS surface possesses remarkable chemical stability: it not only repels cool liquids (water, HCl/NaOH solutions, around 25 °C) but also shows excellent resistance to a series of hot liquids (water, HCl/NaOH solutions, 30-100 °C) and hot beverages (coffee, milk, tea, 80 °C). Moreover, the PDES-MS surface presents excellent stability toward immersion in various organic solvents, high temperature, and long time periods. In particular, the PDES-MS surface achieves good mechanical durability and can withstand ultrasonication, finger touch, multiple folding, peeling by adhesive tape, and even abrasion tests without losing superhydrophobicity. The corrosion resistance and durability of the variously modified superhydrophobic surfaces were also examined. These fascinating performances make the present method suitable for large-scale

  7. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics that will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to a level at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion.

  8. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale, via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small-scale) and from low-pass-filtered (large-scale) velocity fields tends to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  9. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  10. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    Science.gov (United States)

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First, we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R2 >= 0.94). Of the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of the models. Performance of index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy, indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change.
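
    A hedged sketch of the modelling step described above, with synthetic data standing in for the acoustic indices and diversity measurements (the column meanings are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are recordings, columns are acoustic indices
# (say, roughness, acoustic activity, acoustic richness, ...); the target is
# the Shannon diversity of sounds annotated in the same recordings
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())   # out-of-sample R2
print(model.fit(X, y).feature_importances_.round(2))             # index contributions
```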

  11. Investigations on efficiency of the emergency cooling by means of large-scale tests

    International Nuclear Information System (INIS)

    Hicken, E.F.

    1982-01-01

    The RSK guidelines contain the maximum permissible loads (max. cladding tube temperature 1200 °C, max. Zr/H2O reaction of 1% Zr). Their observance implies that only a small number of fuel rods fail. Safety research has to produce the evidence that these limiting loads are not exceeded. The analytical investigations of the emergency cooling behaviour could so far only be verified in scaled-down test facilities. After about 100 tests in four different large-scale test facilities, the experimental investigations of the blowdown phase for large breaks are essentially complete. For the refill and reflood phases, the system behaviour can be studied in scaled-down test stands; the multidimensional conditions in the reactor pressure vessel, however, can only be simulated at the original scale. More experiments are planned as part of the 2D/3D project (CCTF, SCTF, UPTF) and as part of the PKL tests, so that more than 200 tests in seven plants will then be available. As to small breaks, the physical phenomena are known; the current investigations serve to increase the reliability of the statements. After they are finished, approximately 300 tests in seven plants will be available. (orig./HP) [de]

  12. Demonstration-Scale High-Cell-Density Fermentation of Pichia pastoris.

    Science.gov (United States)

    Liu, Wan-Cang; Zhu, Ping

    2018-01-01

    Pichia pastoris has been one of the most successful heterologous overexpression systems for generating proteins for large-scale production through high-cell-density fermentation. However, optimizing the conditions of large-scale high-cell-density fermentation for biochemistry and industrialization is usually a laborious and time-consuming process, and it is often difficult to produce authentic proteins in large quantities, which is a major obstacle to analysis of functional and structural features and to industrial application. For these reasons, we have developed a protocol for efficient demonstration-scale high-cell-density fermentation of P. pastoris, which employs a new methanol-feeding strategy (the biomass-stat strategy) and a strategy of increased air pressure instead of pure oxygen supplementation. The protocol comprises three typical stages: glycerol batch fermentation (initial culture phase), glycerol fed-batch fermentation (biomass accumulation phase), and methanol fed-batch fermentation (induction phase), and allows direct online monitoring of fermentation conditions, including broth pH, temperature, DO, anti-foam generation, and feeding of glycerol and methanol. Using this protocol, production of the recombinant β-xylosidase of Lentinula edodes origin in 1000-L scale fermentation can reach ~900 mg/L or 9.4 mg/g cells (dry cell weight, intracellular expression), with a specific production rate of 0.1 mg/g/h and an average specific production of 0.081 mg/g/h. The methodology described in this protocol can be easily transferred to other systems and scaled up for a large number of proteins used either in scientific studies or for commercial purposes.

  13. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    Science.gov (United States)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the

  14. Facile Large-Scale Synthesis of 5- and 6-Carboxyfluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Ek, Pramod Kumar; Harris, Pernille

    2015-01-01

    A series of fluorescein dyes have been prepared from a common precursor through a very simple synthetic procedure, giving access to important precursors for fluorescent probes. The method has proven an efficient route to regioisomerically pure 5- and 6-carboxyfluoresceins on a large scale, in good

  15. Large scale anisotropy studies with the Auger Observatory

    International Nuclear Information System (INIS)

    Santos, E.M.; Letessier-Selvon, A.

    2006-01-01

    With the increasing Auger surface array data sample of the highest-energy cosmic rays, large-scale anisotropy studies in this part of the spectrum become a promising path towards understanding the origin of ultra-high-energy cosmic particles. We describe the methods underlying the search for distortions in the cosmic-ray arrival directions over large angular scales, that is, larger than those commonly employed in searches for correlations with point-like sources. The widely used tools known as coverage maps are described, and some of the issues involved in their calculation are presented through Monte Carlo based studies. Coverage computation requires deep knowledge of the local detection efficiency, including the influence of weather parameters such as temperature and pressure. Particular attention is devoted to a newly proposed method to extract the coverage, based on the assumption of time factorization of an extensive air shower detector's acceptance. We use Auger monitoring data to test the validity of this hypothesis. We finally show the necessity of using more than one coverage map to extract any possible anisotropic pattern on the sky, by pointing to some of the biases present in commonly used methods based, for example, on scrambling the UTC arrival times of the events. (author)

  16. Debris disks as signposts of terrestrial planet formation

    Science.gov (United States)

    Raymond, S. N.; Armitage, P. J.; Moro-Martín, A.; Booth, M.; Wyatt, M. C.; Armstrong, J. C.; Mandell, A. M.; Selsis, F.; West, A. A.

    2011-06-01

    There exists strong circumstantial evidence from their eccentric orbits that most of the known extra-solar planetary systems are the survivors of violent dynamical instabilities. Here we explore the effect of giant planet instabilities on the formation and survival of terrestrial planets. We numerically simulate the evolution of planetary systems around Sun-like stars that include three components: (i) an inner disk of planetesimals and planetary embryos; (ii) three giant planets at Jupiter-Saturn distances; and (iii) an outer disk of planetesimals comparable to estimates of the primitive Kuiper belt. We calculate the dust production and spectral energy distribution of each system by assuming that each planetesimal particle represents an ensemble of smaller bodies in collisional equilibrium. Our main result is a strong correlation between the evolution of the inner and outer parts of planetary systems, i.e. between the presence of terrestrial planets and debris disks. Strong giant planet instabilities - that produce very eccentric surviving planets - destroy all rocky material in the system, including fully-formed terrestrial planets if the instabilities occur late, and also destroy the icy planetesimal population. Stable or weakly unstable systems allow terrestrial planets to accrete in their inner regions and significant dust to be produced in their outer regions, detectable at mid-infrared wavelengths as debris disks. Stars older than ~100 Myr with bright cold dust emission (in particular at λ ~ 70 μm) signpost dynamically calm environments that were conducive to efficient terrestrial accretion. Such emission is present around ~16% of billion-year old Solar-type stars. Our simulations yield numerous secondary results: 1) the typical eccentricities of as-yet undetected terrestrial planets are ~0.1 but there exists a novel class of terrestrial planet system whose single planet undergoes large amplitude oscillations in orbital eccentricity and inclination; 2) by

  17. A fast approach to generate large-scale topographic maps based on new Chinese vehicle-borne Lidar system

    International Nuclear Information System (INIS)

    Youmei, Han; Bogang, Yang

    2014-01-01

    Large-scale topographic maps are important basic information for city and regional planning and management. Traditional large-scale mapping methods are mostly based on manual surveying and photogrammetry. Manual surveying is inefficient and limited by the environment, while photogrammetric methods (such as low-altitude aerial mapping) are an economical and effective way to map wide, regular areas at large scale but do not work well over small areas due to the high cost in manpower and resources. In recent years, vehicle-borne LIDAR technology has developed rapidly, and its application in surveying and mapping is becoming a new topic. The main objective of this investigation is to explore the potential of vehicle-borne LIDAR technology for fast mapping of large-scale topographic maps, based on a new Chinese vehicle-borne LIDAR system. After field data capture, maps can be produced in the office from the LIDAR data (point cloud) using software we developed ourselves. In addition, the detailed process and an accuracy analysis are presented for an actual case. The results show that this new technology provides a fast method to generate large-scale topographic maps that is highly efficient and accurate compared with traditional methods

  18. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively, and the optimality conditions of the model are analyzed. An improved particle swarm optimization (PSO) method is proposed, introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven, so the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic-induced evacuation decision making at large-scale activities.
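
    For reference, a plain PSO loop is sketched below; the paper's improvement (the electromagnetism-like mechanism) and the evacuation objective functions are omitted, and the toy objective is an assumption:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization: velocities blend inertia,
    personal-best attraction and global-best attraction."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: ((p - 1.0) ** 2).sum(), ([-5, -5], [5, 5]))
print(best, val)   # converges near (1, 1) for this toy objective
```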

  19. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for power-law and 91% for Poisson). The proposed distributed FP-tree merging algorithm provides a 38x communication speedup on 4096 cores.

  20. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  1. Voltage stability issues in a distribution grid with large scale PV plant

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Alvaro Ruiz; Marinopoulos, Antonios; Reza, Muhamad; Srivastava, Kailash [ABB AB, Vaesteraas (Sweden). Corporate Research Center; Hertem, Dirk van [Katholieke Univ. Leuven, Heverlee (Belgium). ESAT-ELECTA

    2011-07-01

    Solar photovoltaics (PV) has become a competitive renewable energy source. The production of solar PV cells and panels has increased significantly, while costs have been reduced by economies of scale and technological achievements in the field. At the same time, the increasing efficiency of PV power systems and high energy prices are expected to bring PV systems to grid parity in the coming decade. This is expected to further boost the large-scale implementation of PV power plants (utility-scale PV), and therefore the impact of such large-scale PV plants on the power system needs to be studied. This paper investigates the voltage stability issues arising from the connection of a large PV power plant to the power grid. For this purpose, a 15 MW PV power plant connected to a distribution grid was modeled and simulated using DIgSILENT Power Factory. Two scenarios were developed: in the first, the active power injected into the grid by the PV power plant was varied and the resulting U-Q curve was analyzed; in the second, the impact of connecting PV power plants at different points in the grid, resulting in different connection strengths, was investigated. (orig.)

  2. LARGE-SCALE PRODUCTION OF HYDROGEN BY NUCLEAR ENERGY FOR THE HYDROGEN ECONOMY

    International Nuclear Information System (INIS)

    SCHULTZ, K.R.; BROWN, L.C.; BESENBRUCH, G.E.; HAMILTON, C.J.

    2003-01-01

    The ''Hydrogen Economy'' will reduce petroleum imports and greenhouse gas emissions. However, current commercial hydrogen production processes use fossil fuels and release carbon dioxide; hydrogen produced from nuclear energy could avoid these concerns. The authors have recently completed a three-year project for the US Department of Energy whose objective was to ''define an economically feasible concept for production of hydrogen, by nuclear means, using an advanced high-temperature nuclear reactor as the energy source''. Thermochemical water-splitting, a chemical process that accomplishes the decomposition of water into hydrogen and oxygen, met this objective. The goal of the first phase of this study was to evaluate thermochemical processes offering the potential for efficient, cost-effective, large-scale production of hydrogen and to select one for further detailed consideration; the authors selected the Sulfur-Iodine cycle. In the second phase, they reviewed all the basic reactor types for suitability to provide the high-temperature heat needed by the selected thermochemical water-splitting cycle and chose the helium gas-cooled reactor. In the third phase, they designed the chemical flowsheet for the thermochemical process and estimated the efficiency and cost of the process and the projected cost of producing hydrogen. These results are summarized in this paper

  3. Terrestrial Ecosystem Responses to Global Change: A Research Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Ecosystems Working Group,

    1998-09-23

    Uncertainty about the magnitude of global change effects on terrestrial ecosystems and consequent feedbacks to the atmosphere impedes sound policy planning at regional, national, and global scales. A strategy to reduce these uncertainties must include a substantial increase in funding for large-scale ecosystem experiments and a careful prioritization of research efforts. Prioritization criteria should be based on the magnitude of potential changes in environmental properties of concern to society, including productivity; biodiversity; the storage and cycling of carbon, water, and nutrients; and sensitivity of specific ecosystems to environmental change. A research strategy is proposed that builds on existing knowledge of ecosystem responses to global change by (1) expanding the spatial and temporal scale of experimental ecosystem manipulations to include processes known to occur at large scales and over long time periods; (2) quantifying poorly understood linkages among processes through the use of experiments that manipulate multiple interacting environmental factors over a broader range of relevant conditions than did past experiments; and (3) prioritizing ecosystems for major experimental manipulations on the basis of potential positive and negative impacts on ecosystem properties and processes of intrinsic and/or utilitarian value to humans and on feedbacks of terrestrial ecosystems to the atmosphere. Models and experiments are equally important for developing process-level understanding into a predictive capability. To support both the development and testing of mechanistic ecosystem models, a two-tiered design of ecosystem experiments should be used. This design should include both (1) large-scale manipulative experiments for comprehensive testing of integrated ecosystem models and (2) multifactor, multilevel experiments for parameterization of process models across the critical range of interacting environmental factors (CO2, temperature, water

  4. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes using link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach that parameterizes the derived information within a unified semi-supervised learning framework (SSLF-GR) and then simultaneously optimizes the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
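
    One simple way to fold prior knowledge into link-based ranking, in the spirit of (though much simpler than) the SSP approach, is PageRank with a non-uniform teleport vector; the graph and prior below are toy assumptions:

```python
import numpy as np

def pagerank_with_prior(A, prior, alpha=0.85, iters=100):
    """Power-iteration PageRank with a non-uniform teleport (prior) vector."""
    n = A.shape[0]
    rowsum = A.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes teleport uniformly
    P = np.divide(A, rowsum, out=np.full_like(A, 1.0 / n), where=rowsum > 0)
    prior = prior / prior.sum()
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1.0 - alpha) * prior
    return r

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
print(pagerank_with_prior(A, prior=np.array([0.7, 0.2, 0.1])))
```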

  5. Characterization of laser-induced plasmas as a complement to high-explosive large-scale detonations

    Directory of Open Access Journals (Sweden)

    Clare Kimblin

    2017-09-01

    Experimental investigations into the characteristics of laser-induced plasmas indicate that LIBS provides a relatively inexpensive and easily replicable laboratory technique to isolate and measure reactions germane to understanding aspects of high-explosive detonations under controlled conditions. Spectral signatures and derived physical parameters following laser ablation of aluminum, graphite and laser-sparked air are examined as they relate to those observed following detonation of high explosives and as they relate to shocked air. Laser-induced breakdown spectroscopy (LIBS) reliably correlates reactions involving atomic Al and aluminum monoxide (AlO) with respect to both emission spectra and temperatures, as compared to small- and large-scale high-explosive detonations. Atomic Al and AlO resulting from laser ablation, and in a cited small-scale study, decay within ∼10⁻⁵ s, roughly 100 times faster than the Al and AlO decay rates (∼10⁻³ s) observed following the large-scale detonation of an Al-encased explosive. Temperatures and species produced in laser-sparked air are compared to those produced with laser-ablated graphite in air. With graphite present, CN is dominant relative to N2+. In studies where the height of the ablating laser’s focus was altered relative to the surface of the graphite substrate, CN concentration was found to decrease with laser focus below the graphite surface, indicating that laser intensity is a critical factor in the production of CN via reactive nitrogen.

  6. Scanning, Multibeam, Single Photon Lidars for Rapid, Large Scale, High Resolution, Topographic and Bathymetric Mapping

    Directory of Open Access Journals (Sweden)

    John J. Degnan

    2016-11-01

    Several scanning, single-photon-sensitive, 3D imaging lidars are herein described that operate at aircraft above ground levels (AGLs) between 1 and 11 km, and speeds in excess of 200 knots. With 100 beamlets and laser fire rates up to 60 kHz, we, at the Sigma Space Corporation (Lanham, MD, USA), have interrogated up to 6 million ground pixels per second, all of which can record multiple returns from volumetric scatterers such as tree canopies. High range resolution has been achieved through the use of subnanosecond laser pulsewidths, detectors and timing receivers. The systems are presently being deployed on a variety of aircraft to demonstrate their utility in multiple applications including large-scale surveying, bathymetry, forestry, etc. Efficient noise filters, suitable for near-real-time imaging, have been shown to effectively eliminate the solar background during daytime operations. Geolocation elevation errors measured to date are at the subdecimeter level. Key differences between our single-photon lidars and competing Geiger-mode lidars are also discussed.
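
    The quoted pixel rate follows directly from the beamlet count and the laser fire rate; a one-line check using the numbers in the abstract:

```python
beamlets = 100            # beamlets per outgoing laser pulse
fire_rate_hz = 60_000     # laser fire rate of 60 kHz
print(beamlets * fire_rate_hz)  # 6,000,000 ground pixels interrogated per second
```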

  7. Effects of microhabitat and large-scale land use on stream salamander occupancy in the coalfields of Central Appalachia

    Science.gov (United States)

    Sweeten, Sara E.; Ford, W. Mark

    2016-01-01

    Large-scale coal mining practices, particularly surface coal extraction and associated valley fills as well as residential wastewater discharge, are of ecological concern for aquatic systems in central Appalachia. Identifying and quantifying alterations to ecosystems along a gradient of spatial scales is a necessary first step to aid in mitigation of negative consequences to aquatic biota. In central Appalachian headwater streams, apart from fish, salamanders are the most abundant vertebrate predators, providing a significant intermediate trophic role linking aquatic and terrestrial food webs. Stream salamander species are considered to be sensitive to aquatic stressors and environmental alterations, as past research has shown linkages among microhabitat parameters, large-scale land use such as urbanization and logging, and salamander abundances. However, there is little information examining these relationships between environmental conditions and salamander occupancy in the coalfields of central Appalachia. In the summer of 2013, 70 sites (sampled two to three times each) in the southwest Virginia coalfields were visited to collect salamanders and quantify stream and riparian microhabitat parameters. Using an information-theoretic framework, effects of microhabitat and large-scale land use on stream salamander occupancy were compared. The findings indicate that Desmognathus spp. occupancy rates are more correlated to microhabitat parameters such as canopy cover than to large-scale land uses. However, Eurycea spp. occupancy rates had a strong association with large-scale land uses, particularly recent mining and forest cover within the watershed. These findings suggest that protection of riparian habitats is an important consideration for maintaining aquatic systems in central Appalachia. If this is not possible, restoration of riparian areas should follow guidelines using quick-growing tree species that are native to Appalachian riparian areas. These types of trees

  8. Facile and large-scale synthesis and characterization of carbon nanotube/silver nanocrystal nanohybrids

    International Nuclear Information System (INIS)

    Gao Chao; Li Wenwen; Jin Yizheng; Kong Hao

    2006-01-01

    A facile and efficient aqueous-phase strategy to synthesize carbon nanotube (CNT)/silver nanocrystal nanohybrids at room temperature is reported. In the presence of carboxyl-functionalized or poly(acrylic acid)- (PAA-) grafted CNTs, silver nanoparticles were generated in situ from AgNO3 aqueous solution, without any additional reducing agent or irradiation treatment, and readily attached to the CNT convex surfaces, leading to the CNT/Ag nanohybrids. The produced silver nanoparticles were determined to be face-centred cubic silver nanocrystals by scanning transmission electron microscopy (STEM), electron diffraction (ED) and x-ray powder diffraction (XRD) analyses. Detailed experiments showed that this strategy can also be applied to different CNTs, including single-walled carbon nanotubes (SWNTs), double-walled carbon nanotubes (DWNTs), multiwalled carbon nanotubes (MWNTs), and polymer-functionalized CNTs. The nanoparticle sizes can be controlled from 2 nm to 10-20 nm and the amount of metal deposited on CNT surfaces can be as high as 82 wt%. Furthermore, large-scale batches (10 g or more) of CNT/Ag nanohybrids can be prepared via this approach without any decrease in efficiency or quality. This approach can also be extended to prepare Au nanocrystals on CNTs. The facile, efficient and large-scale availability of the nanohybrids makes it practical to explore and develop their considerable application potential.

  9. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
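
    KBLAS itself is CUDA C; as a point of reference for what its headline GEMV kernel computes (y ← αAx + βy), here is the equivalent CPU-side BLAS call through SciPy's wrappers. This only illustrates the level-2 operation, not KBLAS's GPU-specific optimizations.

```python
import numpy as np
from scipy.linalg.blas import dgemv

rng = np.random.default_rng(0)
n = 1024
A = np.asfortranarray(rng.standard_normal((n, n)))  # BLAS prefers column-major
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# y_new = alpha*A@x + beta*y, the level-2 operation KBLAS accelerates on GPUs
y_new = dgemv(alpha=1.0, a=A, x=x, beta=0.5, y=y)
assert np.allclose(y_new, A @ x + 0.5 * y)
```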

  10. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  11. Enclosed outdoor photobioreactors: light regime, photosynthetic efficiency, scale-up, and future prospects

    NARCIS (Netherlands)

    Janssen, M.G.J.; Tramper, J.; Mur, L.R.; Wijffels, R.H.

    2003-01-01

    Enclosed outdoor photobioreactors need to be developed and designed for large-scale production of phototrophic microorganisms. Both light regime and photosynthetic efficiency were analyzed in characteristic examples of state-of-the-art pilot-scale photobioreactors. In this study it is shown that

  12. Global Drainage Patterns to Modern Terrestrial Sedimentary Basins and its Influence on Large River Systems

    Science.gov (United States)

    Nyberg, B.; Helland-Hansen, W.

    2017-12-01

    Long-term preservation of alluvial sediments is dependent on the hydrological processes that deposit sediments solely within an area that has available accommodation space and net subsidence, known as a sedimentary basin. An understanding of the river processes contributing to terrestrial sedimentary basins is essential to fundamentally constrain and quantify controls on the modern terrestrial sink. Furthermore, the terrestrial source-to-sink controls place constraints on the entire coastal, shelf and deep marine sediment routing systems. In addition, the geographical importance of modern terrestrial sedimentary basins for agriculture and human settlements has resulted in significant upstream anthropogenic catchment modification for irrigation and energy needs. Yet to our knowledge, a global catchment model depicting the drainage patterns to modern terrestrial sedimentary basins has previously not been established that may be used to address these challenging issues. Here we present a new database of 180,737 global catchments that show the surface drainage patterns to modern terrestrial sedimentary basins. This is achieved by using high-resolution river networks derived from digital elevation models in relation to newly acquired maps of global modern sedimentary basins to identify terrestrial sinks. The results show that active tectonic regimes are typically characterized by larger terrestrial sedimentary basins, numerous smaller source catchments and a high source-to-sink relief ratio. To the contrary, passive margins drain catchments to smaller terrestrial sedimentary basins, are composed of fewer source catchments that are relatively larger, and have a lower source-to-sink relief ratio. The different geomorphological characteristics of source catchments by tectonic setting influence the spatial and temporal patterns of fluvial architecture within sedimentary basins and the anthropogenic methods of exploiting those rivers. The new digital database resource is aimed to help

  13. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground-state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
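
    The precontraction idea can be sketched in dense linear algebra: contracting the 3-center RI integrals B[P, μ, ν] with a ground-state Cholesky-type factor L[ν, i] replaces an AO dimension by the much smaller occupied dimension. The shapes and names below are illustrative assumptions; the production code works with sparse, screened AO quantities.

```python
import numpy as np

n_ao, n_occ, n_aux = 120, 20, 360   # illustrative sizes; n_occ << n_ao
rng = np.random.default_rng(1)

B = rng.standard_normal((n_aux, n_ao, n_ao))  # 3-center RI integrals B[P, mu, nu]
# Cholesky-like factor of the ground-state density, L @ L.T ~ P (n_ao x n_occ)
L = rng.standard_normal((n_ao, n_occ))

# Precontraction: B[P, mu, nu] L[nu, i] -> B_half[P, mu, i]
B_half = np.einsum('pmn,ni->pmi', B, L)

ratio = B.nbytes / B_half.nbytes
print(f"memory reduced by a factor of ~{ratio:.1f} (n_ao/n_occ = {n_ao/n_occ:.1f})")
```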

  14. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Background: Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings: We present ppsAlign, a parallel protein structure alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many existing methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions: ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  15. The role of forest disturbance in global forest mortality and terrestrial carbon fluxes

    Science.gov (United States)

    Pugh, Thomas; Arneth, Almut; Smith, Benjamin; Poulter, Benjamin

    2017-04-01

    Large-scale forest disturbance dynamics such as insect outbreaks, wind-throw and fires, along with anthropogenic disturbances such as logging, have been shown to turn forests from carbon sinks into intermittent sources, often quite dramatically so. There is also increasing evidence that disturbance regimes in many regions are changing as a result of climatic change and human land-management practices. But how these landscape-scale events fit into the wider picture of global tree mortality is not well understood. Do such events dominate global carbon turnover, or are their effects highly regional? How sensitive is global terrestrial carbon exchange to realistic changes in the occurrence rate of such disturbances? Here, we combine recent advances in global satellite observations of stand-replacing forest disturbances and in compilations of forest inventory data, with a global terrestrial ecosystem model which incorporates an explicit representation of the role of disturbance in forest dynamics. We find that stand-replacing disturbances account for a fraction of wood carbon turnover that varies spatially from less than 5% in the tropical rainforest to ca. 50% in the mid latitudes, and as much as 90% in some heavily-managed regions. We contrast the size of the land-atmosphere carbon flux due to this disturbance with other components of the terrestrial carbon budget. In terms of sensitivity, we find a quasi log-linear relationship of disturbance rate to total carbon storage. Relatively small changes in disturbance rates at all latitudes have marked effects on vegetation carbon storage, with potentially very substantial implications for the global terrestrial carbon sink. Our results suggest a surprisingly small effect of disturbance type on large-scale forest vegetation dynamics and carbon storage, with limited evidence of widespread increases in nitrogen limitation as a result of increasing future disturbance. However, the influence of disturbance type on soil carbon

  16. Evaluating scale and roughness effects in urban flood modelling using terrestrial LIDAR data

    Directory of Open Access Journals (Sweden)

    H. Ozdemir

    2013-10-01

    This paper evaluates the results of benchmark testing a new inertial formulation of the St. Venant equations, implemented within the LISFLOOD-FP hydraulic model, using different high-resolution terrestrial LiDAR data (10 cm, 50 cm and 1 m) and roughness conditions (distributed and composite) in an urban area. To examine these effects, the model is applied to a hypothetical flooding scenario in Alcester, UK, which experienced surface water flooding during summer 2007. The sensitivities of simulated water depth, extent, arrival time and velocity to grid resolutions and different roughness conditions are analysed. The results indicate that increasing the terrain resolution from 1 m to 10 cm significantly affects modelled water depth, extent, arrival time and velocity. This is because hydraulically relevant small-scale topography that is accurately captured by the terrestrial LiDAR system, such as road cambers and street kerbs, is better represented on the higher-resolution DEM. It is shown that altering surface friction values within a wide range has only a limited effect and is not sufficient to recover the results of the 10 cm simulation at 1 m resolution. Alternating between a uniform composite surface friction value (n = 0.013) and a variable distributed value based on land use has a greater effect on flow velocities and arrival times than on water depths and inundation extent. We conclude that the use of the extra detail inherent in terrestrial laser scanning data compared to airborne sensors will be advantageous for urban flood modelling related to surface water, risk analysis and planning for Sustainable Urban Drainage Systems (SUDS) to attenuate flow.
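
    For orientation, the inertial formulation being benchmarked updates the unit-width flux with a semi-implicit friction term; the sketch below follows the published form (Bates et al., 2010) as we read it, and shows directly where Manning's n enters, which helps explain why friction changes have a more limited effect than terrain detail.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def inertial_flux(q, h_flow, surface_slope, n, dt):
    """One semi-implicit update of unit-width discharge q (m^2/s).

    Follows the inertial simplification of the St. Venant equations
    (Bates et al., 2010) as we read it; h_flow is the effective flow
    depth, surface_slope the water-surface gradient, n Manning's n.
    """
    num = q - G * h_flow * dt * surface_slope
    den = 1.0 + G * dt * n**2 * np.abs(q) / h_flow**(7.0 / 3.0)
    return num / den

# Roughness sensitivity: doubling n changes the flux far less than
# a change in resolved depth/slope (i.e., topographic detail dominates).
for n in (0.013, 0.026):
    print(n, inertial_flux(q=0.5, h_flow=0.3, surface_slope=-0.002, n=n, dt=1.0))
```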

  17. A Dynamic Optimization Strategy for the Operation of Large-Scale Seawater Reverse Osmosis Systems

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

    In this work, a strategy was proposed for the efficient solution of the dynamic model of a seawater reverse osmosis (SWRO) system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the dynamic optimization problem into a large-scale nonlinear programming problem named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate computing efficiency while keeping sufficient accuracy in the solution of Opt2, a simple but efficient finite-element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimal problem can be successfully solved within tens of iterations and several minutes when the load and other operating parameters fluctuate.

  18. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that have taken weeks and months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  19. Terrestrial gamma radiation baseline mapping using ultra low density sampling methods

    International Nuclear Information System (INIS)

    Kleinschmidt, R.; Watson, D.

    2016-01-01

    Baseline terrestrial gamma radiation maps are indispensable for providing basic reference information that may be used in assessing the impact of a radiation-related incident, performing epidemiological studies, remediating land contaminated with radioactive materials, assessment of land use applications and resource prospectivity. For a large land mass, such as Queensland, Australia (over 1.7 million km²), it is prohibitively expensive and practically difficult to undertake detailed in-situ radiometric surveys of this scale. It is proposed that an existing, ultra-low-density sampling program already undertaken for the purpose of a nationwide soil survey project be utilised to develop a baseline terrestrial gamma radiation map. Geoelement data derived from the National Geochemistry Survey of Australia (NGSA) was used to construct a baseline terrestrial gamma air kerma rate map, delineated by major drainage catchments, for Queensland. Three drainage catchments (sampled at the catchment outlet) spanning low, medium and high radioelement concentrations were selected for validation of the methodology using radiometric techniques including in-situ measurements and soil sampling for high-resolution gamma spectrometry, and comparative non-radiometric analysis. A Queensland mean terrestrial air kerma rate, as calculated from the NGSA outlet sediment uranium, thorium and potassium concentrations, of 49 ± 69 nGy h⁻¹ (n = 311, 3σ, 99% confidence level) is proposed as being suitable for use as a generic terrestrial air kerma rate background range. Validation results indicate that catchment outlet measurements are representative of the range of results obtained across the catchment and that the NGSA geoelement data is suitable for calculation and mapping of terrestrial air kerma rate. - Highlights: • A baseline terrestrial air kerma map of Queensland, Australia was developed using geochemical data from a major drainage catchment ultra-low density sampling program
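
    The record does not spell out the dose-conversion step; the usual approach converts soil radioelement concentrations to activity concentrations and then to air kerma rate at 1 m above ground with UNSCEAR-style coefficients. The coefficient values below are the commonly cited ones and are an assumption here; verify against the report before reuse.

```python
# Conversion of soil radioelement concentrations to terrestrial air kerma
# rate 1 m above ground. Coefficients are the widely quoted UNSCEAR-style
# values (assumption -- check against the survey report before reuse).
BQ_PER_KG = {"K": 313.0,   # Bq/kg of K-40 per 1 % K
             "U": 12.35,   # Bq/kg of U-238 per 1 ppm U
             "Th": 4.06}   # Bq/kg of Th-232 per 1 ppm Th
NGY_H_PER_BQ_KG = {"K": 0.0417, "U": 0.462, "Th": 0.604}

def air_kerma_rate(k_percent, u_ppm, th_ppm):
    """Terrestrial air kerma rate in nGy/h from K (%), U (ppm), Th (ppm)."""
    conc = {"K": k_percent, "U": u_ppm, "Th": th_ppm}
    return sum(conc[e] * BQ_PER_KG[e] * NGY_H_PER_BQ_KG[e] for e in conc)

# Typical continental-crust values (illustrative inputs only)
print(f"{air_kerma_rate(k_percent=2.0, u_ppm=2.7, th_ppm=10.5):.0f} nGy/h")
```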

  20. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: Reducing tritium removal effectiveness; Energy recovery; Improving the tolerance of impurities; Use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  1. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale...... sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage...

  2. A Meteorological Distribution System for High Resolution Terrestrial Modeling (MicroMet)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    Spatially distributed terrestrial models generally require atmospheric forcing data on horizontal grids that are of higher resolution than available meteorological data. Furthermore, the meteorological data collected may not necessarily represent the area of interest's meteorological variability. To address these deficiencies, computationally efficient and physically realistic methods must be developed to take available meteorological data sets (e.g., meteorological tower observations) and generate high-resolution atmospheric-forcing distributions. This poster describes MicroMet, a quasi-physically-based, but simple meteorological distribution model designed to produce high-resolution (e.g., 5-m to 1-km horizontal grid increments) meteorological data distributions required to run spatially distributed terrestrial models over a wide variety of landscapes. The model produces distributions of the seven fundamental atmospheric forcing variables required to run most terrestrial models: air temperature, relative humidity, wind speed, wind direction, incoming solar radiation, incoming longwave radiation, and precipitation. MicroMet includes a preprocessor that analyzes meteorological station data and identifies and repairs potential data deficiencies. The model uses known relationships between meteorological variables and the surrounding area (primarily topography) to distribute those variables over any given landscape. MicroMet performs two kinds of adjustments to available meteorological data: 1) when there are data at more than one location, at a given time, the data are spatially interpolated over the domain using a Barnes objective analysis scheme, and 2) physical sub-models are applied to each MicroMet variable to improve its realism at a given point in space and time with respect to the terrain. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) will be used as example Micro
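
    The Barnes objective analysis scheme that MicroMet uses for multi-station interpolation is a Gaussian-weighted average followed by correction passes; a minimal two-pass sketch (station layout, values, and length scale are made up for illustration):

```python
import numpy as np

def barnes(xy_obs, values, xy_grid, kappa, gamma=0.3):
    """Two-pass Barnes objective analysis.

    xy_obs : (m, 2) station coordinates; values : (m,) observations
    xy_grid: (n, 2) target points; kappa : squared Gaussian length scale
    gamma  : convergence parameter for the second (correction) pass
    """
    def weights(targets, k):
        d2 = ((targets[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        return w / w.sum(axis=1, keepdims=True)

    first = weights(xy_grid, kappa) @ values                # first-pass field
    resid = values - weights(xy_obs, kappa) @ values        # misfit at stations
    return first + weights(xy_grid, gamma * kappa) @ resid  # sharpened pass

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([10.0, 12.0, 9.0, 11.0])   # station air temperatures (C)
grid = np.array([[0.5, 0.5], [0.25, 0.75]])
print(barnes(stations, temps, grid, kappa=0.5))
```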

  3. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  4. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.

  5. Multi-scale effects of nestling diet on breeding performance in a terrestrial top predator inferred from stable isotope analysis.

    Directory of Open Access Journals (Sweden)

    Jaime Resano-Mayor

    Inter-individual diet variation within populations is likely to have important ecological and evolutionary implications. The diet-fitness relationships at the individual level and the emerging population processes are, however, poorly understood for most avian predators inhabiting complex terrestrial ecosystems. In this study, we use an isotopic approach to assess the trophic ecology of nestlings in a long-lived raptor, the Bonelli's eagle Aquila fasciata, and investigate whether nestling dietary breadth and main prey consumption can affect the species' reproductive performance at two spatial scales: territories within populations and populations over a large geographic area. At the territory level, those breeding pairs whose nestlings consumed diets similar to the overall population (i.e. moderate consumption of preferred prey, complemented by alternative prey categories) or disproportionately consumed preferred prey were more likely to fledge two chicks. An increase in diet diversity, however, was negatively related to productivity. The age and replacements of breeding pair members also had an influence on productivity, with more fledglings associated with adult pairs with few replacements, as expected in long-lived species. At the population level, mean productivity was higher in those population-years with lower dietary breadth and higher diet similarity among territories, which was related to an overall higher consumption of preferred prey. Thus, we revealed a correspondence in diet-fitness relationships at two spatial scales: territories and populations. We suggest that stable isotope analyses may be a powerful tool to monitor the diet of terrestrial avian predators on large spatio-temporal scales, which could serve to detect potential changes in the availability of those prey on which predators depend for breeding. We encourage ecologists and evolutionary and conservation biologists concerned with the multi-scale fitness

  6. Experiments performed on a man-made crack in the shallow low-permeability basement as a basis for large-scale technical extraction of terrestrial heat

    Energy Technology Data Exchange (ETDEWEB)

    Kappelmeyer, O.; Jung, R.; Rummel, F.

    1984-01-01

    Research work is being performed at an in-situ experimental field in the crystalline subsoil near Falkenberg in East Bavaria to help develop new technologies for exploiting geothermal energy. The aim is to make terrestrial heat available for technical utilization even in relatively normal geologic structures of the subsoil, i.e. far away from volcanoes and outside of layers carrying water or steam. To achieve this objective, artificial heat exchange systems were produced by hydraulic fracturing of crystalline rocks at a depth of 250 m. The geometric positions of these cracks were located by means of seismic and geo-electric methods. Seismic observations allowed deriving a crack model which helped with penetrating the man-made crack by sectional drilling. The circulation system consisting of production drill-hole, crack system and sectional drill-hole was studied for hydraulic parameters (e.g. flow resistance) and thermal efficiency at various pressure levels in the crack. Crack width was measured at different pressure stages for the first time. Thermal model calculations allow transferral of the results gained from the shallow, relatively cool basement to basement areas of elevated temperature. A number of rock parameters which are relevant for assessing whether the subsoil is suitable for creating artificial heat exchange systems were examined on-site and at bench scale.

  7. Global variation of carbon use efficiency in terrestrial ecosystems

    Science.gov (United States)

    Tang, Xiaolu; Carvalhais, Nuno; Moura, Catarina; Reichstein, Markus

    2017-04-01

    Carbon use efficiency (CUE), defined as the ratio between net primary production (NPP) and gross primary production (GPP), is an emergent property of vegetation that describes its effectiveness in storing carbon (C) and is of significance for understanding C biosphere-atmosphere exchange dynamics. A constant CUE value of 0.5 has been widely used in terrestrial C-cycle models, such as the Carnegie-Ames-Stanford-Approach model or the Marine Biological Laboratory/Soil Plant-Atmosphere Canopy Model, for regional or global modeling purposes. However, increasing evidence argues that CUE is not constant, but varies with ecosystem type, site fertility, climate, site management and forest age. Hence, the assumption of a constant CUE of 0.5 can produce great uncertainty in estimating global carbon dynamics between terrestrial ecosystems and the atmosphere. Here, in order to analyze the global variations in CUE and understand how CUE varies with environmental variables, a global database was constructed based on published data for crops, forests, grasslands, wetlands and tundra ecosystems. In addition to CUE data, the following were also collected: GPP and NPP; site variables (e.g. climate zone, site management and plant functional type); climate variables (e.g. temperature and precipitation); additional carbon fluxes (e.g. soil respiration, autotrophic respiration and heterotrophic respiration); and carbon pools (e.g. stem, leaf and root biomass). Different climate metrics were derived to diagnose seasonal temperature (mean annual temperature, MAT, and maximum temperature, Tmax) and water availability proxies (mean annual precipitation, MAP, and Palmer Drought Severity Index), in order to improve the local representation of environmental variables. Vegetation phenology dynamics, as observed by different vegetation indices from the MODIS satellite, were also included. The mean CUE of all terrestrial ecosystems was 0.45, 10% lower than the previously assumed constant CUE of 0.5.
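
    The headline number follows from the flux definitions: CUE = NPP/GPP = 1 − Ra/GPP, and 0.45 is indeed 10% below the conventional value of 0.5. A quick check with illustrative fluxes:

```python
gpp, npp = 1000.0, 450.0       # illustrative annual carbon fluxes (g C m-2 yr-1)
cue = npp / gpp                # equivalently 1 - autotrophic_respiration / gpp
print(cue, (0.5 - cue) / 0.5)  # 0.45, and the 10% shortfall relative to CUE = 0.5
```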

  8. Drivers and barriers for municipal retrofitting activities – Evidence from a large-scale survey of German local authorities

    NARCIS (Netherlands)

    Polzin, Friedemann|info:eu-repo/dai/nl/413317404; Nolden, Colin; von Flotow, Paschen

    2018-01-01

    Local authorities are key actors for implementing innovative energy efficiency technologies (retrofitting) to reduce end-use energy demand and consequently reduce negative effects of high energy use such as climate change and public budget deficits. This paper reports the results of a large-scale

  9. Examining the relationship between intermediate scale soil moisture and terrestrial evaporation within a semi-arid grassland

    KAUST Repository

    Jana, Raghavendra Belur; Ershadi, Ali; McCabe, Matthew

    2016-01-01

    Interactions between soil moisture and terrestrial evaporation affect water cycle behaviour and responses between the land surface and the atmosphere across scales. With strong heterogeneities at the land surface, the inherent spatial variability

  11. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  12. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  13. Downlink Coexistence Performance Assessment and Techniques for WiMAX Services from High Altitude Platform and Terrestrial Deployments

    Directory of Open Access Journals (Sweden)

    Grace D

    2008-01-01

    We investigate the performance and coexistence techniques for worldwide interoperability for microwave access (WiMAX) delivered from high altitude platforms (HAPs) and terrestrial systems in shared 3.5 GHz frequency bands. The paper shows that it is possible to provide WiMAX services from individual HAP systems. The coexistence performance is evaluated by appropriate choice of parameters, which include the HAP deployment spacing radius and directive antenna beamwidths based on adopted antenna models for HAPs and receivers. Illustrations and comparisons of coexistence techniques, for example varying the antenna pointing offset and the transmitting and receiving antenna beamwidths, demonstrate efficient ways to enhance the HAP system performance while effectively coexisting with terrestrial WiMAX systems.

  14. Importance of terrestrial arthropods as subsidies in lowland Neotropical rain forest stream ecosystems

    Science.gov (United States)

    Small, Gaston E.; Torres, Pedro J.; Schwizer, Lauren M.; Duff, John H.; Pringle, Catherine M.

    2013-01-01

    The importance of terrestrial arthropods has been documented in temperate stream ecosystems, but little is known about the magnitude of these inputs in tropical streams. Terrestrial arthropods falling from the canopy of tropical forests may be an important subsidy to tropical stream food webs and could also represent an important flux of nitrogen (N) and phosphorus (P) in nutrient-poor headwater streams. We quantified input rates of terrestrial insects in eight streams draining lowland tropical wet forest in Costa Rica. In two focal headwater streams, we also measured capture efficiency by the fish assemblage and quantified terrestrially derived N- and P-excretion relative to stream nutrient uptake rates. Average input rates of terrestrial insects ranged from 5 to 41 mg dry mass/m²/d, exceeding previous measurements of aquatic invertebrate secondary production in these study streams, and were relatively consistent year-round, in contrast to values reported in temperate streams. Terrestrial insects accounted for half of the diet of the dominant fish species, Priapichthys annectens. Although terrestrially derived fish excretion was found to be a small flux relative to measured nutrient uptake rates in the focal streams, the efficient capture and processing of terrestrial arthropods by fish made these nutrients available to the local stream ecosystem. This aquatic-terrestrial linkage is likely being decoupled by deforestation in many tropical regions, with largely unknown but potentially important ecological consequences.

  15. Relay discovery and selection for large-scale P2P streaming.

    Directory of Open Access Journals (Sweden)

    Chengwei Zhang

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly, with tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location, and that methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three publicly available RTT data sets. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
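
    The error amplification of "best-out-of-K" selection is easy to reproduce with a toy Monte Carlo: when each candidate's RTT estimate carries independent noise, the candidate with the smallest estimated RTT is systematically worse than the true best, and the gap grows with K. The distributions below are synthetic stand-ins, not the paper's RTT data sets.

```python
import numpy as np

rng = np.random.default_rng(42)
trials, K = 10_000, 16

true_rtt = rng.uniform(20, 200, size=(trials, K))    # ms, synthetic candidates
noise = rng.normal(0, 30, size=(trials, K))          # ICS-like estimation error
estimated = true_rtt + noise

# Pick the candidate whose *estimated* RTT is smallest, score its *true* RTT.
picked = true_rtt[np.arange(trials), np.argmin(estimated, axis=1)]
best = true_rtt.min(axis=1)
print(f"mean true RTT of pick : {picked.mean():.1f} ms")
print(f"mean best possible    : {best.mean():.1f} ms")
print(f"selection penalty     : {(picked - best).mean():.1f} ms")
```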

  16. Nuclear-pumped lasers for large-scale applications

    International Nuclear Information System (INIS)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs

  17. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. Excellent compromises between accuracy and scalability are obtained in all applications.
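
    Of the speed-up strategies listed, random Fourier features are the simplest to illustrate: an RBF-kernel GP or ridge solve over N samples is replaced by linear regression on D randomized cosine features, reducing the O(N³) kernel solve to roughly O(N D²). A minimal sketch on synthetic data (not the IASI/Seviri products):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D, lengthscale, lam = 5000, 8, 300, 1.0, 1e-3

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)   # synthetic target

# Random Fourier features approximating an RBF kernel (Rahimi & Recht, 2007)
W = rng.standard_normal((d, D)) / lengthscale
b = rng.uniform(0, 2 * np.pi, D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Ridge regression in feature space: (Phi^T Phi + lam I) w = Phi^T y
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```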

  18. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  19. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is ‘important’? How can multiple researchers explore a new graph dataset collectively and “help” each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through a case study on real, large-scale networks.

  20. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  1. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large-scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment selected for use.

  2. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low-rank approximation of large-sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low-rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
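
    The core compression step can be sketched with a basic randomized range finder (in the style of Halko et al., 2011); the dictionary below is random surrogate data, and the paper's polynomial-fitting stage is omitted.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Rank-k SVD of A via a randomized range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Surrogate MRF dictionary: 20k fingerprints x 1000 time points (random data)
A = np.random.default_rng(1).standard_normal((20_000, 1_000))
U, s, Vt = randomized_svd(A, k=25)
A_compressed = A @ Vt.T                      # project fingerprints to rank 25
print(A.nbytes / (A_compressed.nbytes + Vt.nbytes))  # rough memory ratio
```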

  3. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Science.gov (United States)

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data have become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendously high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.

  4. High-Speed Interrogation for Large-Scale Fiber Bragg Grating Sensing.

    Science.gov (United States)

    Hu, Chenyuan; Bai, Wei

    2018-02-24

    A high-speed interrogation scheme for large-scale fiber Bragg grating (FBG) sensing arrays is presented. This technique employs parallel computing and pipeline control to modulate the incident light and demodulate the reflected sensing signal. One electro-optic modulator (EOM) and one semiconductor optical amplifier (SOA) were used to generate a phase delay to filter the reflected spectrum from multiple candidate FBGs with the same optical path difference (OPD). Experimental results showed that the fastest interrogation delay time for the proposed method was only about 27.2 µs for a single FBG interrogation, and the system scanning period was limited only by the optical transmission delay in the sensing fiber, owing to the multiple simultaneous central wavelength calculations. Furthermore, the proposed FPGA-based technique had a verified FBG wavelength demodulation stability of ±1 pm without averaging.

  5. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    Full Text Available Abstract Background Multi-dimensional scaling (MDS) is aimed to represent high dimensional data in a low dimensional space with preservation of the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure. For large data sets, dimension reduction can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although there have been a number of studies that applied MDS techniques to genomics research, the number of analyzed data points was restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N²), so that it is difficult to process a data set of a large number of genes N, such as in the case of whole genome microarray data. Results We developed a new rapid metric MDS method with a low computational complexity, making metric MDS applicable for large data sets. Computer simulation showed that the new method of split-and-combine MDS (SC-MDS) is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced dimensional space is similar to or slightly better than that of K-means in the original space, but about three times faster to obtain the clustering results. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole genome data. Conclusion Our new method reduces the computational complexity from O(N³) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully ...
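
    The split-and-combine idea can be illustrated in a few lines: embed overlapping subsets independently with classical (Torgerson) MDS, then stitch the embeddings together with a least-squares affine map fitted on the shared anchor points. This is a toy sketch of the concept only; the published SC-MDS may differ in its exact combination rule.

```python
import numpy as np

def classical_mds(X, dim=2):
    """Classical (Torgerson) MDS via double-centred squared distances."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(-0.5 * J @ D2 @ J)
    idx = np.argsort(w)[::-1][:dim]                  # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

def affine_map(src, dst):
    """Least-squares affine map sending src anchor coordinates onto dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return lambda P: np.hstack([P, np.ones((len(P), 1))]) @ M

X = np.random.default_rng(1).standard_normal((1000, 50))
half, overlap = 600, 200                             # splits share 200 anchor points
g1, g2 = np.arange(0, half), np.arange(half - overlap, 1000)

Y1, Y2 = classical_mds(X[g1]), classical_mds(X[g2])  # embed each split separately
to1 = affine_map(Y2[:overlap], Y1[-overlap:])        # align on the shared points
Y = np.vstack([Y1, to1(Y2)[overlap:]])               # combined embedding, all 1000 points
print(Y.shape)
```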

  6. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw materials powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube was made with a large-scale hollow capsule, long-length claddings were manufactured from it, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of the large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and the microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) The long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  7. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  8. A new large-scale manufacturing platform for complex biopharmaceuticals.

    Science.gov (United States)

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such molecules, which are sometimes inherently unstable, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which require innovative solutions. In order to maximize yield, process efficiency, and facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  9. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    Due to the progressive displacement of conventional power plants by wind turbines, the dynamic security of large-scale wind-integrated power systems becomes significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large-scale wind-integrated power systems with little presence of conventional power plants. We then propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large-scale wind-integrated power systems. The paper further suggests that (i) wind turbines, especially wind farms with additional grid support functionalities like dynamic reactive power support, and (ii) refurbishment of existing conventional central power plants to synchronous condensers could be among the efficient, reliable and cost-effective options. One of the important aspects of the proposed methodology is that, unlike ...

  10. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.

  11. Efficient scale for photovoltaic systems and Florida's solar rebate program

    International Nuclear Information System (INIS)

    Burkart, Christopher S.; Arguea, Nestor M.

    2012-01-01

    This paper presents a critical view of Florida's photovoltaic (PV) subsidy system and proposes an econometric model of PV system installation and generation costs. Using information on currently installed systems, average installation cost relations for residential and commercial systems are estimated and cost-efficient scales of installation panel wattage are identified. Productive efficiency in annual generating capacity is also examined under flexible panel efficiency assumptions. We identify potential gains in efficiency and suggest changes in subsidy system constraints, providing important guidance for the implementation of future incentive programs. Specifically, we find that the subsidy system discouraged residential applicants from installing at the cost-efficient scale but over-incentivized commercial applicants, resulting in inefficiently sized installations. - Highlights: ► Describe a PV solar incentive system in the U.S. state of Florida. ► Combine geocoded installation site data with a detailed irradiance map. ► Estimate installation and production costs across a large sample. ► Identify inefficiencies in the incentive system. ► Suggest changes to policy that would improve economic efficiency.
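
    The notion of a cost-efficient installation scale can be illustrated with a toy version of such a cost relation: fit total installed cost as a function of system size, then find the size that minimizes average cost per kW. The data and functional form below are invented for illustration and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
kw = rng.uniform(2, 20, 300)                          # hypothetical system sizes (kW)
cost = 4000 + 4500 * kw + 25 * kw**2 + rng.normal(0, 1500, 300)  # total cost ($)

b2, b1, b0 = np.polyfit(kw, cost, 2)                  # quadratic total-cost relation
s = np.linspace(kw.min(), kw.max(), 500)
avg = (b0 + b1 * s + b2 * s**2) / s                   # average cost per kW
print(f"cost-efficient scale ≈ {s[np.argmin(avg)]:.1f} kW"
      f" (analytic minimum at sqrt(b0/b2) ≈ {np.sqrt(b0 / b2):.1f} kW)")
```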

  12. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Full Text Available Abstract Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
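
    The parallel assignment step can be sketched in Python with processes standing in for the paper's transactional-memory Java threads: each worker computes per-cluster coordinate sums and counts on its chunk, and the partial results reduce into new centroids. A minimal sketch:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_step(args):
    """Assignment step on one chunk: per-cluster coordinate sums and counts."""
    chunk, centers = args
    labels = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    k = len(centers)
    sums = np.array([chunk[labels == j].sum(0) for j in range(k)])
    counts = np.array([(labels == j).sum() for j in range(k)])
    return sums, counts

def parallel_kmeans(X, k=5, n_iter=20, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, workers)
    with ProcessPoolExecutor(workers) as pool:
        for _ in range(n_iter):
            parts = list(pool.map(partial_step, [(c, centers) for c in chunks]))
            sums, counts = (sum(t) for t in zip(*parts))     # reduce partial results
            centers = sums / np.maximum(counts, 1)[:, None]  # guard empty clusters
    return centers

if __name__ == "__main__":
    X = np.random.default_rng(0).standard_normal((20000, 8))
    print(parallel_kmeans(X).shape)
```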

  13. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  14. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  15. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  16. Estimating returns to scale and scale efficiency for energy consuming appliances

    Energy Technology Data Exchange (ETDEWEB)

    Blum, Helcio [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group; Okwelum, Edson O. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group

    2018-01-18

    Energy consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforwardly determined. The efficient allocation of capital and energy to provide an energy service has been previously approached and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economically efficient models of an appliance available in the market when constant returns to scale does not hold. We expand on that approach to estimate returns to scale for energy using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions of returns to scale. We then apply this approach to evaluate dishwashers available in the market in the U.S. Our results show that (a) for the case of dishwashers, scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increasing consumers' choice of appliances.
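
    The underlying computation can be sketched as the standard input-oriented DEA linear program, solved once under constant returns to scale (CRS) and once with the convexity constraint for variable returns to scale (VRS); the ratio of the two efficiency scores is the scale efficiency. The appliance data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_theta(X, Y, unit, vrs=False):
    """Input-oriented DEA efficiency of one unit; variables are [theta, lambdas]."""
    n, m = X.shape
    p = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.hstack([-X[unit][:, None], X.T])        # composite input <= theta * own input
    A_out = np.hstack([np.zeros((p, 1)), -Y.T])       # composite output >= own output
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[unit]]
    A_eq = np.r_[0.0, np.ones(n)][None, :] if vrs else None
    b_eq = [1.0] if vrs else None                     # sum of lambdas = 1 under VRS
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical appliances: inputs = (price $, annual kWh), output = capacity.
X = np.array([[300., 250.], [450., 220.], [600., 300.], [520., 180.]])
Y = np.array([[8.], [10.], [12.], [10.]])
for i in range(len(X)):
    crs, vrs = dea_theta(X, Y, i), dea_theta(X, Y, i, vrs=True)
    print(f"unit {i}: CRS={crs:.3f}  VRS={vrs:.3f}  scale eff={crs/vrs:.3f}")
```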

  17. Terrestrial gamma radiation baseline mapping using ultra low density sampling methods.

    Science.gov (United States)

    Kleinschmidt, R; Watson, D

    2016-01-01

    Baseline terrestrial gamma radiation maps are indispensable for providing basic reference information that may be used in assessing the impact of a radiation related incident, performing epidemiological studies, remediating land contaminated with radioactive materials, assessment of land use applications and resource prospectivity. For a large land mass, such as Queensland, Australia (over 1.7 million km²), it is prohibitively expensive and practically difficult to undertake detailed in-situ radiometric surveys of this scale. It is proposed that an existing, ultra-low density sampling program already undertaken for the purpose of a nationwide soil survey project be utilised to develop a baseline terrestrial gamma radiation map. Geoelement data derived from the National Geochemistry Survey of Australia (NGSA) was used to construct a baseline terrestrial gamma air kerma rate map, delineated by major drainage catchments, for Queensland. Three drainage catchments (sampled at the catchment outlet) spanning low, medium and high radioelement concentrations were selected for validation of the methodology using radiometric techniques including in-situ measurements and soil sampling for high resolution gamma spectrometry, and comparative non-radiometric analysis. A Queensland mean terrestrial air kerma rate, as calculated from the NGSA outlet sediment uranium, thorium and potassium concentrations, of 49 ± 69 nGy h⁻¹ (n = 311, 3σ 99% confidence level) is proposed as being suitable for use as a generic terrestrial air kerma rate background range. Validation results indicate that catchment outlet measurements are representative of the range of results obtained across the catchment and that the NGSA geoelement data is suitable for calculation and mapping of terrestrial air kerma rate. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
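
    Terrestrial air kerma rate is conventionally computed from geochemical concentrations with linear dose-rate coefficients. The sketch below uses widely quoted UNSCEAR-style coefficients as an assumption from the general literature; the paper's exact conversion factors may differ.

```python
def air_kerma_nGy_h(k_percent, u_ppm, th_ppm):
    """Terrestrial gamma air kerma rate at 1 m above ground (nGy/h), using
    indicative UNSCEAR-style coefficients (assumed, not the paper's values)."""
    return 13.08 * k_percent + 5.67 * u_ppm + 2.49 * th_ppm

# Illustrative catchment-outlet sediment concentrations.
print(f"{air_kerma_nGy_h(k_percent=1.2, u_ppm=2.0, th_ppm=8.5):.0f} nGy/h")
```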

  18. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    ... organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov chain Monte Carlo (MCMC) simulations as the workhorse of Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks ...
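
    The inference loop being scaled up can be conveyed by a minimal (deliberately unoptimized) Gibbs sampler for a Bernoulli stochastic block model; a high-performance implementation such as the one described would replace these dense loops with sparse, parallel updates. A sketch:

```python
import numpy as np

def sbm_gibbs(A, K=2, n_sweeps=100, a=1.0, b=1.0, seed=0):
    """Gibbs sampling for a Bernoulli SBM on a symmetric 0/1 adjacency matrix A."""
    rng = np.random.default_rng(seed)
    n = len(A)
    z = rng.integers(K, size=n)                   # block assignment of each node
    eta = np.full((K, K), 0.5)                    # block-to-block link probabilities
    for _ in range(n_sweeps):
        for r in range(K):                        # eta | z : Beta posterior per block pair
            for s in range(r, K):
                mask = np.outer(z == r, z == s)
                if r == s:
                    mask = np.triu(mask, 1)       # count each within-block pair once
                edges, total = A[mask].sum(), mask.sum()
                eta[r, s] = eta[s, r] = rng.beta(a + edges, b + total - edges)
        for i in range(n):                        # z_i | rest : categorical over blocks
            others = np.delete(np.arange(n), i)
            ai = A[i, others]
            logp = np.array([(ai * np.log(eta[r, z[others]])
                              + (1 - ai) * np.log(1 - eta[r, z[others]])).sum()
                             for r in range(K)])
            w = np.exp(logp - logp.max())
            z[i] = rng.choice(K, p=w / w.sum())
    return z

rng = np.random.default_rng(1)                    # toy graph with two planted blocks
truth = np.repeat([0, 1], 30)
P = np.where(truth[:, None] == truth[None, :], 0.25, 0.03)
A = np.triu((rng.random((60, 60)) < P).astype(int), 1)
print(sbm_gibbs(A + A.T))
```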

  19. Dense image matching of terrestrial imagery for deriving high-resolution topographic properties of vegetation locations in alpine terrain

    Science.gov (United States)

    Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.

    2018-04-01

    The investigation of changes in spatial patterns of vegetation and identification of potential micro-refugia requires detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations, which are difficult to access. We propose a novel approach for rapid capturing of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters, such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to create elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into interrelations between topographic parameters, such as potential solar irradiation, surface area, aspect and roughness.
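
    Deriving topographic parameters in spherical neighbourhoods can be sketched by fitting a plane via PCA to all points within a given radius and reading slope and aspect off the surface normal; the synthetic tilted plane below is purely illustrative.

```python
import numpy as np

def slope_aspect(points, center, radius):
    """Slope and aspect (degrees) from a PCA plane fit to a spherical neighbourhood."""
    nb = points[np.linalg.norm(points - center, axis=1) <= radius]
    nb = nb - nb.mean(0)
    w, V = np.linalg.eigh(np.cov(nb.T))       # smallest-eigenvalue vector = normal
    normal = V[:, 0] if V[2, 0] >= 0 else -V[:, 0]
    slope = np.degrees(np.arccos(normal[2]))
    aspect = np.degrees(np.arctan2(normal[0], normal[1])) % 360  # clockwise from +y
    return slope, aspect

rng = np.random.default_rng(3)
xy = rng.uniform(-5, 5, (2000, 2))
z = 0.3 * xy[:, 0] + rng.normal(0, 0.02, 2000)       # plane dipping along +x
cloud = np.column_stack([xy, z])
for r in (0.5, 2.0):                                 # multi-scale neighbourhoods
    s, a = slope_aspect(cloud, np.array([0.0, 0.0, 0.0]), r)
    print(f"radius {r}: slope {s:.1f} deg, aspect {a:.0f} deg")
```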

  20. Terrestrial ecosystem responses to global change: A research strategy

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Uncertainty about the magnitude of global change effects on terrestrial ecosystems and consequent feedbacks to the atmosphere impedes sound policy planning at regional, national, and global scales. A strategy to reduce these uncertainties must include a substantial increase in funding for large-scale ecosystem experiments and a careful prioritization of research efforts. Prioritization criteria should be based on the magnitude of potential changes in environmental properties of concern to society, including productivity; biodiversity; the storage and cycling of carbon, water, and nutrients; and sensitivity of specific ecosystems to environmental change. A research strategy is proposed that builds on existing knowledge of ecosystem responses to global change by (1) expanding the spatial and temporal scale of experimental ecosystem manipulations to include processes known to occur at large scales and over long time periods; (2) quantifying poorly understood linkages among processes through the use of experiments that manipulate multiple interacting environmental factors over a broader range of relevant conditions than did past experiments; and (3) prioritizing ecosystems for major experimental manipulations on the basis of potential positive and negative impacts on ecosystem properties and processes of intrinsic and/or utilitarian value to humans and on feedbacks of terrestrial ecosystems to the atmosphere.

  1. Large-scale Modeling of Nitrous Oxide Production: Issues of Representing Spatial Heterogeneity

    Science.gov (United States)

    Morris, C. K.; Knighton, J.

    2017-12-01

    Nitrous oxide is produced by the biological processes of nitrification and denitrification in terrestrial environments and contributes to the greenhouse effect that warms Earth's climate. Large-scale modeling can be used to determine how global rates of nitrous oxide production and consumption will shift under future climates. However, accurate modeling of nitrification and denitrification is made difficult by highly parameterized, nonlinear equations. Here we show that the representation of spatial heterogeneity in inputs, specifically soil moisture, causes inaccuracies in estimating average nitrous oxide production in soils. We demonstrate that when soil moisture is averaged over a spatially heterogeneous surface, net nitrous oxide production is under-predicted. We apply this general result in a test of a widely used global land surface model, the Community Land Model v4.5. The challenges presented by nonlinear controls on nitrous oxide are highlighted here to provide a wider context for the problem of extraordinary denitrification losses in CLM. We hope that these findings will inform future researchers on the possibilities for model improvement of the global nitrogen cycle.
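
    The under-prediction follows from Jensen's inequality whenever the moisture response is convex over the relevant range: f(mean(θ)) < mean(f(θ)). A toy numeric demonstration (the quartic response function is purely illustrative):

```python
import numpy as np

f = lambda theta: theta ** 4              # hypothetical convex moisture response

theta = np.random.default_rng(4).uniform(0.2, 0.9, 10_000)  # heterogeneous moisture field

flux_of_mean = f(theta.mean())            # flux a coarse model computes from mean moisture
mean_of_flux = f(theta).mean()            # flux the heterogeneous landscape actually emits
print(f"f(mean) = {flux_of_mean:.4f}   mean(f) = {mean_of_flux:.4f}")
# Jensen's inequality: for convex f, f(mean) < mean(f), so averaging the
# moisture field under-predicts net production.
```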

  2. The Plant Phenology Ontology: A New Informatics Resource for Large-Scale Integration of Plant Phenology Data.

    Science.gov (United States)

    Stucky, Brian J; Guralnick, Rob; Deck, John; Denny, Ellen G; Bolmgren, Kjell; Walls, Ramona

    2018-01-01

    Plant phenology - the timing of plant life-cycle events, such as flowering or leafing out - plays a fundamental role in the functioning of terrestrial ecosystems, including human agricultural systems. Because plant phenology is often linked with climatic variables, there is widespread interest in developing a deeper understanding of global plant phenology patterns and trends. Although phenology data from around the world are currently available, truly global analyses of plant phenology have so far been difficult because the organizations producing large-scale phenology data are using non-standardized terminologies and metrics during data collection and data processing. To address this problem, we have developed the Plant Phenology Ontology (PPO). The PPO provides the standardized vocabulary and semantic framework that is needed for large-scale integration of heterogeneous plant phenology data. Here, we describe the PPO, and we also report preliminary results of using the PPO and a new data processing pipeline to build a large dataset of phenology information from North America and Europe.

  3. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
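
    Steps (2)-(5) of such a pipeline can be sketched with scipy.signal; this is a generic illustration of band filtering, threshold-based spike detection and Welch PSD estimation, not the authors' algorithm.

```python
import numpy as np
from scipy import signal

fs = 1000.0                                    # sampling rate (Hz)
rng = np.random.default_rng(5)
eeg = rng.normal(0, 1, int(60 * fs))           # one minute of stand-in EEG

# (2) user-defined band waveform: 4th-order Butterworth band-pass (e.g. 30-80 Hz)
sos = signal.butter(4, [30, 80], btype="bandpass", fs=fs, output="sos")
band = signal.sosfiltfilt(sos, eeg)

# (3)-(4) crude amplitude-threshold spike detection and quantification
thresh = 4 * np.median(np.abs(band)) / 0.6745  # robust noise-level estimate
spikes, _ = signal.find_peaks(np.abs(band), height=thresh)
print("spike count:", len(spikes))

# (5) power spectral density (Welch)
freqs, psd = signal.welch(eeg, fs=fs, nperseg=2048)
print("peak frequency (Hz):", freqs[np.argmax(psd)])
```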

  4. Large-scale HTS bulks for magnetic application

    International Nuclear Information System (INIS)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks pave the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN₂ and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN₂ allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track are achieved.

  5. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: werfel@t-online.de [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)

    2013-01-15

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks pave the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN₂ and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN₂ allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track are achieved.

  6. The relationship between small-scale and large-scale ionospheric electron density irregularities generated by powerful HF electromagnetic waves at high latitudes

    Directory of Open Access Journals (Sweden)

    E. D. Tereshchenko

    2006-11-01

    Full Text Available Satellite radio beacons were used in June 2001 to probe the ionosphere modified by a radio beam produced by the EISCAT high-power, high-frequency (HF) transmitter located near Tromsø (Norway). Amplitude scintillations and variations of the phase of 150- and 400-MHz signals from Russian navigational satellites passing over the modified region were observed at three receiver sites. In several papers it has been stressed that in the polar ionosphere the thermal self-focusing on striations during ionospheric modification is the main mechanism resulting in the formation of large-scale (hundreds of meters to kilometers) nonlinear structures aligned along the geomagnetic field (magnetic zenith effect). It has also been claimed that the maximum effects caused by small-scale (tens of meters) irregularities detected in satellite signals are also observed in the direction parallel to the magnetic field. Contrary to those studies, the present paper shows that the maximum in amplitude scintillations does not correspond strictly to the magnetic zenith direction. High-latitude drifts typically cause a considerable anisotropy of small-scale irregularities in the plane perpendicular to the geomagnetic field, resulting in a deviation of the amplitude-scintillation peak from the minimum angle between the line-of-sight to the satellite and the direction of the geomagnetic field lines. The variance of the logarithmic relative amplitude fluctuations is considered here, which is a useful quantity in such studies. The experimental values of the variance are compared with model calculations and good agreement has been found. It is also shown from the experimental data that in most of the satellite passes a variance maximum occurs at a minimum in the phase fluctuations, indicating that the artificial excitation of large-scale irregularities is minimum when the excitation of small-scale irregularities is maximum.

  7. LOD-based clustering techniques for efficient large-scale terrain storage and visualization

    Science.gov (United States)

    Bao, Xiaohong; Pajarola, Renato

    2003-05-01

    Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works from out-of-core.

  8. Chemical heterogeneities in the interior of terrestrial bodies

    Science.gov (United States)

    Plesa, Ana-Catalina; Maurice, Maxime; Tosi, Nicola; Breuer, Doris

    2016-04-01

    Mantle chemical heterogeneities that can strongly influence the interior dynamics have been inferred for all terrestrial bodies of the Solar System and range from local to global scale. Seismic data for the Earth, differences in surface mineral compositions observed in data sets from space missions, and isotopic variations identified in laboratory analyses of meteorites or samples indicate chemically heterogeneous systems. One way to generate large scale geochemical heterogeneities is through the fractional crystallization of a liquid magma ocean. The large amount of energy available in the early stages of planetary evolution can cause melting of a significant part or perhaps even the entire mantle of a terrestrial body resulting in a liquid magma ocean. Assuming fractional crystallization, magma ocean solidification proceeds from the core-mantle boundary to the surface where dense cumulates tend to form due to iron enrichment in the evolving liquid. This process leads to a gravitationally unstable mantle, which is prone to overturn. Following cumulate overturn, a stable stratification may be reached that prevents efficient material transport. As a consequence, mantle reservoirs may be kept separate, possibly for the entire thermo-chemical evolution of a terrestrial body. Scenarios assuming fractional crystallization of a liquid magma ocean have been suggested to explain lavas with distinct composition on Mercury's surface [1], the generation of the Moon's mare basalts by sampling a reservoir consisting of overturned ilmenite-bearing cumulates [2], and the preservation of Mars' geochemical reservoirs as inferred by isotopic analysis of the SNC meteorites [3]. However, recent studies have shown that the style of the overturn as well as the subsequent density stratification are of extreme importance for the subsequent thermo-chemical evolution of a planetary body and may have a major impact on the later surface tectonics and volcanic history. The rapid formation of a

  9. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  10. Energy-efficiency supervision systems for energy management in large public buildings: Necessary choice for China

    International Nuclear Information System (INIS)

    Feng Yanping; Wu Yong; Liu Changbin

    2009-01-01

    Buildings are important contributors to total energy consumption, accounting for around 30% of all energy consumed in China. Of this, around two-fifths are consumed within urban homes, one-fifth within public buildings, and two-fifths within rural areas. Government office buildings and large-scale public buildings are the dominant energy consumers in cities, but their consumption can be largely cut back through improving efficiency. At present, energy management in the large public sector is a particular priority in China. Firstly, this paper discusses how the large public building is defined, and then energy performance in large public buildings is studied. The paper also describes barriers to improving energy efficiency of large public buildings in China and examines the energy-efficiency policies and programs adopted in the United States and the European Union. The energy-efficiency supervision (EES) systems developed to improve operation and maintenance practices and promote energy efficiency in the large public sector are described. The benefits of the EES systems are finally summarized.

  11. Energy-efficiency supervision systems for energy management in large public buildings. Necessary choice for China

    Energy Technology Data Exchange (ETDEWEB)

    Yan-ping, Feng [Beijing Jiaotong University, School of Economics and Management, Jiaoda Donglu18, 5-803, Beijing 100044 (China); Yong, Wu [Ministry of Housing and Urban-Rural Development, Beijing 100835 (China); Chang-bin, Liu [Beijing Institute of Civil Engineering and Architecture, Beijing 100044 (China)

    2009-06-15

    Buildings are important contributors to total energy consumption, accounting for around 30% of all energy consumed in China. Of this, around two-fifths are consumed within urban homes, one-fifth within public buildings, and two-fifths within rural areas. Government office buildings and large-scale public buildings are the dominant energy consumers in cities, but their consumption can be largely cut back through improving efficiency. At present, energy management in the large public sector is a particular priority in China. Firstly, this paper discusses how the large public building is defined, and then energy performance in large public buildings is studied. The paper also describes barriers to improving energy efficiency of large public buildings in China and examines the energy-efficiency policies and programs adopted in the United States and the European Union. The energy-efficiency supervision (EES) systems developed to improve operation and maintenance practices and promote energy efficiency in the large public sector are described. The benefits of the EES systems are finally summarized. (author)

  12. Energy-efficiency supervision systems for energy management in large public buildings: Necessary choice for China

    Energy Technology Data Exchange (ETDEWEB)

    Feng Yanping [Beijing Jiaotong University, School of Economics and Management, Jiaoda Donglu18, 5-803, Beijing 100044 (China)], E-mail: fengyanping10@sohu.com; Wu Yong [Ministry of Housing and Urban-Rural Development, Beijing 100835 (China); Liu Changbin [Beijing Institute of Civil Engineering and Architecture, Beijing 100044 (China)

    2009-06-15

    Buildings are important contributors to total energy consumption, accounting for around 30% of all energy consumed in China. Of this, around two-fifths are consumed within urban homes, one-fifth within public buildings, and two-fifths within rural areas. Government office buildings and large-scale public buildings are the dominant energy consumers in cities, but their consumption can be largely cut back through improving efficiency. At present, energy management in the large public sector is a particular priority in China. Firstly, this paper discusses how the large public building is defined, and then energy performance in large public buildings is studied. The paper also describes barriers to improving energy efficiency of large public buildings in China and examines the energy-efficiency policies and programs adopted in the United States and the European Union. The energy-efficiency supervision (EES) systems developed to improve operation and maintenance practices and promote energy efficiency in the large public sector are described. The benefits of the EES systems are finally summarized.

  13. Lab-scale investigation of Middle-Bosnia coals to achieve high-efficient and clean combustion technology

    Directory of Open Access Journals (Sweden)

    Smajevic Izet

    2014-01-01

    Full Text Available This paper describes a full lab-scale investigation of Middle-Bosnia coals launched to support the selection of an appropriate combustion technology and the optimization of the boiler design. The tested mix of Middle-Bosnia brown coals is the projected coal for the new co-generation power plant Kakanj Unit 8 (300-450 MWe, EP B&H electricity utility). The basic coal blend, consisting of the coals Kakanj:Breza:Zenica at an approximate mass ratio of 70:20:10, is a low-grade brown coal with a very high percentage of ash - over 40%. Testing that coal in the circulating fluidized bed combustion technique, performed at Ruhr-University Bochum and Doosan Lentjes GmbH, has shown its unsuitability for fluidized bed combustion technology, primarily due to agglomeration problems. Tests of these coals in PFC (pulverized fuel combustion) technology have been performed in the reference laboratory at the Faculty of Mechanical Engineering of Sarajevo University, on a lab-scale PFC furnace, to provide reliable data for further analysis. The PFC test results fit well with previously obtained results from burning similar Bosnian coal blends in the PFC dry-bottom furnace technique. A combination of the coal shares, the process temperature and the combustion air distribution giving the lowest NOx and SO2 emissions was found in this work, provided that combustion efficiency and CO emissions stay within very strict criteria, considering the specific arrangement of the lab-scale furnace. Sustainability assessment based on the calculation of economic and environmental indicators, in combination with the Low Cost Planning method, is used for optimization of the power plant design. The results of the full lab-scale investigation will help in selecting the optimal boiler design, to achieve a sustainable energy system with high-efficiency and clean combustion technology applied to the given coals.

  14. Downlink Coexistence Performance Assessment and Techniques for WiMAX Services from High Altitude Platform and Terrestrial Deployments

    Directory of Open Access Journals (Sweden)

    D. Grace

    2008-11-01

    Full Text Available We investigate the performance and coexistence techniques for worldwide interoperability for microwave access (WiMAX) delivered from high altitude platforms (HAPs) and terrestrial systems in shared 3.5 GHz frequency bands. The paper shows that it is possible to provide WiMAX services from individual HAP systems. The coexistence performance is evaluated by appropriate choice of parameters, which include the HAP deployment spacing radius and directive antenna beamwidths based on the adopted antenna models for HAPs and receivers. Illustrations and comparisons of coexistence techniques, for example varying the antenna pointing offset and the transmitting and receiving antenna beamwidths, demonstrate efficient ways to enhance the HAP system performance while effectively coexisting with terrestrial WiMAX systems.

  15. Can polar bears use terrestrial foods to offset lost ice-based hunting opportunities?

    Science.gov (United States)

    Rode, Karyn D.; Robbins, Charles T.; Nelson, Lynne; Amstrup, Steven C.

    2015-01-01

    Increased land use by polar bears (Ursus maritimus) due to climate-change-induced reduction of their sea-ice habitat illustrates the impact of climate change on species distributions and the difficulty of conserving a large, highly specialized carnivore in the face of this global threat. Some authors have suggested that terrestrial food consumption by polar bears will help them withstand sea-ice loss as they are forced to spend increasing amounts of time on land. Here, we evaluate the nutritional needs of polar bears as well as the physiological and environmental constraints that shape their use of terrestrial ecosystems. Only small numbers of polar bears have been documented consuming terrestrial foods even in modest quantities. Over much of the polar bear's range, limited terrestrial food availability supports only low densities of much smaller, resident brown bears (Ursus arctos), which use low-quality resources more efficiently and may compete with polar bears in these areas. Where consumption of terrestrial foods has been documented, polar bear body condition and survival rates have declined even as land use has increased. Thus far, observed consumption of terrestrial food by polar bears has been insufficient to offset lost ice-based hunting opportunities but can have ecological consequences for other species. Warming-induced loss of sea ice remains the primary threat faced by polar bears.

  16. Large-scale ruthenium- and enzyme-catalyzed dynamic kinetic resolution of (rac)-1-phenylethanol

    Directory of Open Access Journals (Sweden)

    Bäckvall Jan-E

    2007-12-01

    Full Text Available Abstract The scale-up of the ruthenium- and enzyme-catalyzed dynamic kinetic resolution (DKR) of (rac)-1-phenylethanol (2) is addressed. The immobilized lipase Candida antarctica lipase B (CALB) was employed for the resolution, showing high enantioselectivity in the transesterification. The ruthenium catalyst used, (η⁵-C₅Ph₅)RuCl(CO)₂ (1), was shown to possess very high reactivity in the "in situ" redox racemization of 1-phenylethanol (2) in the presence of the immobilized enzyme, and could be used at 0.05 mol% with high efficiency. Commercially available isopropenyl acetate was employed as acylating agent in the lipase-catalyzed transesterifications, which makes purification of the product very easy. In a successful large-scale DKR of 2, with 0.05 mol% of 1, (R)-1-phenylethanol acetate (3) was obtained in 159 g (97% yield) with excellent enantiomeric excess (99.8% ee).

  17. Large-scale Estimates of Leaf Area Index from Active Remote Sensing Laser Altimetry

    Science.gov (United States)

    Hopkinson, C.; Mahoney, C.

    2016-12-01

    Leaf area index (LAI) is a key parameter that describes the spatial distribution of foliage within forest canopies, which in turn controls numerous relationships between the ground, canopy, and atmosphere. Retrieval of LAI has been demonstrated successfully with in-situ (digital) hemispherical photography (DHP) and airborne laser scanning (ALS) data; however, field and ALS acquisitions are often spatially limited (100's of km²) and costly. Large-scale (>1000's of km²) retrievals have been demonstrated with optical sensors; however, accuracies remain uncertain due to the sensor's inability to penetrate the canopy. The spaceborne Geoscience Laser Altimeter System (GLAS) provides a possible solution for retrieving large-scale derivations whilst simultaneously penetrating the canopy. LAI retrieved by multiple DHP from 6 Australian sites, representing a cross-section of Australian ecosystems, was employed to model ALS LAI, which in turn was used to infer LAI from GLAS data at 5 other sites. An optimally filtered GLAS dataset was then employed in conjunction with a host of supplementary data to build a Random Forest (RF) model to infer predictions (and uncertainties) of LAI at a 250 m resolution across the forested regions of Australia. Predictions were validated against ALS-based LAI from 20 sites (R²=0.64, RMSE=1.1 m² m⁻²); MODIS-based LAI were also assessed against these sites (R²=0.30, RMSE=1.78 m² m⁻²) to demonstrate the strength of GLAS-based predictions. The large-scale nature of the current predictions was also leveraged to demonstrate large-scale relationships of LAI with other environmental characteristics, such as canopy height, elevation, and slope. The need for such wide-scale quantification of LAI is key in the assessment and modification of forest management strategies across Australia. Such work also assists Australia's Terrestrial Ecosystem Research Network in fulfilling their government-issued mandates.
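
    The final mapping step, a Random Forest regression from GLAS-derived and auxiliary predictors to ALS-calibrated LAI with a per-cell uncertainty taken from the spread across trees, can be sketched as follows; all predictor names and the synthetic relation are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in table: each row is a 250 m cell with hypothetical predictors and an
# ALS-calibrated LAI target.
rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.uniform(2, 40, n),      # canopy height (m)
    rng.uniform(0, 1500, n),    # elevation (m)
    rng.uniform(0, 30, n),      # slope (deg)
])
lai = 0.12 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 0.4, n)  # synthetic relation

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X[:4000], lai[:4000])

# Per-cell prediction plus an uncertainty band from the spread across trees.
per_tree = np.stack([t.predict(X[4000:]) for t in rf.estimators_])
pred, unc = per_tree.mean(0), per_tree.std(0)
print("OOB R^2:", round(rf.oob_score_, 3), "| first cell:",
      round(pred[0], 2), "+/-", round(unc[0], 2), "m2/m2")
```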

  18. A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)

    Science.gov (United States)

    Glen E. Liston; Kelly Elder

    2006-01-01

    An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...

  19. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; hide

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four-telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic-variance-limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  20. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes a simple-to-implement approach based on evolutionary algorithms and the Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
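
    The Kernel-Adatron component is a simple additive update of the multipliers toward unit margin; the sketch below shows plain Kernel-Adatron training with an RBF kernel (the paper's evolutionary-algorithm coupling is omitted, and all data are synthetic).

```python
import numpy as np

def kernel_adatron(X, y, C=10.0, eta=0.1, n_epochs=200, gamma=0.5):
    """Kernel-Adatron training of an SVM-like classifier; labels y must be +/-1."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF kernel
    alpha = np.zeros(len(y))
    for _ in range(n_epochs):
        for i in range(len(y)):
            z = y[i] * (alpha * y * K[i]).sum()                 # margin of example i
            alpha[i] = np.clip(alpha[i] + eta * (1 - z), 0, C)  # Adatron update
    return alpha, K

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
alpha, K = kernel_adatron(X, y)
pred = np.sign(K @ (alpha * y))                      # decision values on training points
print("training accuracy:", (pred == y).mean())
```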

  1. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    Science.gov (United States)

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, L2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of L2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both L2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
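
    For context, both L2-SVM-MFN and TRON are solvers for the standard L2-regularized, squared-hinge ("L2-loss") SVM objective over feature vectors x_i with target/decoy labels y_i in {-1, +1}; the reported speedups concern how quickly this optimum is reached, not the objective itself:

        \min_{\mathbf{w}} \; \frac{1}{2}\,\lVert \mathbf{w} \rVert^{2} \;+\; C \sum_{i=1}^{n} \max\left(0,\; 1 - y_{i}\, \mathbf{w}^{\top} \mathbf{x}_{i}\right)^{2}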

  2. High-efficiency targeted editing of large viral genomes by RNA-guided nucleases.

    Science.gov (United States)

    Bi, Yanwei; Sun, Le; Gao, Dandan; Ding, Chen; Li, Zhihua; Li, Yadong; Cun, Wei; Li, Qihan

    2014-05-01

    A facile and efficient method for the precise editing of large viral genomes is required for the selection of attenuated vaccine strains and the construction of gene therapy vectors. The type II prokaryotic CRISPR-Cas (clustered regularly interspaced short palindromic repeats (CRISPR)-associated (Cas)) RNA-guided nuclease system can be introduced into host cells during viral replication. The CRISPR-Cas9 system robustly stimulates targeted double-stranded breaks in the genomes of DNA viruses, where the non-homologous end joining (NHEJ) and homology-directed repair (HDR) pathways can be exploited to introduce site-specific indels or insert heterologous genes with high frequency. Furthermore, CRISPR-Cas9 can specifically inhibit the replication of the original virus, thereby significantly increasing the abundance of the recombinant virus among progeny virus. As a result, purified recombinant virus can be obtained with only a single round of selection. In this study, we used recombinant adenovirus and type I herpes simplex virus as examples to demonstrate that the CRISPR-Cas9 system is a valuable tool for editing the genomes of large DNA viruses.

  3. High-efficiency targeted editing of large viral genomes by RNA-guided nucleases.

    Directory of Open Access Journals (Sweden)

    Yanwei Bi

    2014-05-01

    Full Text Available A facile and efficient method for the precise editing of large viral genomes is required for the selection of attenuated vaccine strains and the construction of gene therapy vectors. The type II prokaryotic CRISPR-Cas (clustered regularly interspaced short palindromic repeats (CRISPR)-associated (Cas)) RNA-guided nuclease system can be introduced into host cells during viral replication. The CRISPR-Cas9 system robustly stimulates targeted double-stranded breaks in the genomes of DNA viruses, where the non-homologous end joining (NHEJ) and homology-directed repair (HDR) pathways can be exploited to introduce site-specific indels or insert heterologous genes with high frequency. Furthermore, CRISPR-Cas9 can specifically inhibit the replication of the original virus, thereby significantly increasing the abundance of the recombinant virus among progeny virus. As a result, purified recombinant virus can be obtained with only a single round of selection. In this study, we used recombinant adenovirus and type I herpes simplex virus as examples to demonstrate that the CRISPR-Cas9 system is a valuable tool for editing the genomes of large DNA viruses.

  4. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is a crucial point, since even small indices may need to be estimated accurately in order to achieve a reliable distribution of input influences and a more reliable interpretation of the mathematical model results.
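
    As a small, hedged illustration of why low-discrepancy Sobol points pay off in such integrations, the Python sketch below compares a scrambled-Sobol estimate with plain Monte Carlo on a toy separable integrand (a stand-in for the air pollution model, which cannot be reproduced here); it uses SciPy's qmc module:

        # Quasi-Monte Carlo vs plain Monte Carlo on a toy integrand.
        import numpy as np
        from scipy.stats import qmc

        def f(x):
            # Separable test integrand on [0, 1]^d; its exact integral is 1.0.
            return np.prod(3.0 * x**2, axis=1)

        d, m = 4, 12                                  # dimension, 2^12 = 4096 points
        x_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=m)
        x_mc = np.random.default_rng(0).random((2**m, d))

        print("Sobol estimate:   ", f(x_qmc).mean())  # typically much closer to 1.0
        print("plain MC estimate:", f(x_mc).mean())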

  5. High resolution measurement of light in terrestrial ecosystems using photodegrading dyes.

    Directory of Open Access Journals (Sweden)

    Javier Roales

    Full Text Available Incoming solar radiation is the main determinant of terrestrial ecosystem processes, such as primary production, litter decomposition, or soil mineralization rates. Light in terrestrial ecosystems is spatially and temporally heterogeneous due to the interaction among sunlight angle, cloud cover and tree-canopy structure. To integrate this variability and to know light distribution over time and space, a high number of measurements are needed, but tools to do this are usually expensive and limited. An easy-to-use and inexpensive method that can be used to measure light over time and space is needed. We used two photodegrading fluorescent organic dyes, rhodamine WT (RWT) and fluorescein, for the quantification of light. We measured dye photodegradation as the decrease in fluorescence across an irradiance gradient from full sunlight to deep shade. Then, we correlated it to accumulated light measured with PAR quantum sensors and obtained a model for this behavior. Rhodamine WT and fluorescein photodegradation followed an exponential decay curve with respect to accumulated light. Rhodamine WT degraded more slowly than fluorescein and remained unaltered after exposure to temperature changes. Under controlled conditions, fluorescence of both dyes decreased when temperatures increased, but returned to its initial values after cooling to the pre-heating temperature, indicating no degradation. RWT and fluorescein can be used to measure light under a varying range of light conditions in terrestrial ecosystems. This method is particularly useful to integrate solar radiation over time and to measure light simultaneously at different locations, and might be a better alternative to the expensive and time-consuming traditional light measurement methods. The accuracy, low price and ease of this method make it a powerful tool for intensive sampling of large areas and for developing high-resolution maps of light in an ecosystem.
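
    A hedged sketch of how such a calibration could be used in practice: fit the reported exponential-decay relationship F = F0*exp(-k*E) to calibration data, then invert it to estimate accumulated light from a field fluorescence reading. The constants below are illustrative, not the study's calibration values:

        # Fit and invert an exponential photodegradation model (illustrative values).
        import numpy as np
        from scipy.optimize import curve_fit

        def decay(E, F0, k):
            return F0 * np.exp(-k * E)

        # Synthetic calibration data: fluorescence vs accumulated PAR (mol m^-2).
        rng = np.random.default_rng(1)
        E = np.linspace(0, 60, 10)
        F = decay(E, 100.0, 0.05) + rng.normal(0, 1.0, E.size)

        (F0_hat, k_hat), _ = curve_fit(decay, E, F, p0=(90.0, 0.1))

        # Invert the fitted curve to estimate exposure from a field reading.
        F_field = 60.0
        E_field = np.log(F0_hat / F_field) / k_hat
        print(f"estimated accumulated light: {E_field:.1f} mol m^-2")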

  6. An analysis of the energy efficiency of winter rapeseed biomass under different farming technologies. A case study of a large-scale farm in Poland

    International Nuclear Information System (INIS)

    Budzyński, Wojciech Stefan; Jankowski, Krzysztof Józef; Jarocki, Marcin

    2015-01-01

    The article presents the results of a three-year study investigating the impact of production technology on the energy efficiency of winter rapeseed produced in large-scale farms. Rapeseed biomass produced in a high-input system was characterized by the highest energy demand (30.00 GJ ha⁻¹). The energy demand associated with medium-input and low-input systems was 20% and 34% lower, respectively. The highest energy value of oil, oil cake and straw was noted in winter rapeseed produced in the high-input system. In the total energy output (268.5 GJ ha⁻¹), approximately 17% of energy was accumulated in oil, 20% in oil cake, and 63% in straw. In lower input systems, the energy output of oil decreased by 13–23%, the energy output of oil cake – by 6–16%, and the energy output of straw – by 29–37% without visible changes in the structure of energy accumulated in different components of rapeseed biomass. The highest energy gain was observed in the high-input system. The low-input system was characterized by the highest energy efficiency ratio, at 4.22 for seeds and 9.43 for seeds and straw. The increase in production intensity reduced the energy efficiency of rapeseed biomass production by 8–18% (seeds) and 5–9% (seeds and straw). - Highlights: • Energy inputs in the high-input production system reached 30 GJ ha⁻¹. • Energy inputs in the medium- and low-input systems were reduced by 20% and 34%. • Energy gain in the high-input system was 15% and 42% higher than in other systems. • Energy ratio in the high-input system was 5–18% lower than in the low-input system.

  7. Adaptive Texture Synthesis for Large Scale City Modeling

    Science.gov (United States)

    Despine, G.; Colleu, T.

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but the image resolution is generally too low for pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but removing occlusions from them requires a huge amount of manual work. Another solution is to synthesize generic textures from a set of procedural rules and elementary patterns such as bricks, roof tiles, doors and windows. This may produce realistic textures, but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow that allows the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue, which allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours; then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. Roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. Window characterization is still sensitive to the distortions inherent in projecting aerial images onto the façades.

  8. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    Full Text Available This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (less than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of two fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This basically provides a coarse classification of the image, which is sufficient for many RS applications. The second contribution employs semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with less than ten training samples per class.
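
    The graph-Laplacian idea can be illustrated compactly. The sketch below is a generic label-propagation stand-in (random features in place of the pretrained-CNN tile descriptors, and plain propagation rather than the paper's in-network Laplacian embedding): labels from a few tiles diffuse through a feature-similarity graph to all unlabelled tiles:

        # Generic graph-based label propagation over tile features (stand-in).
        import numpy as np

        rng = np.random.default_rng(0)
        n, d, n_labeled = 200, 64, 8
        feats = rng.normal(size=(n, d))              # stand-in CNN descriptors

        # Dense RBF affinity matrix and random-walk transition matrix.
        d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())
        np.fill_diagonal(W, 0.0)
        P = W / W.sum(axis=1, keepdims=True)

        # Seed a few labeled tiles (two classes), then propagate.
        Y = np.zeros((n, 2))
        Y[np.arange(n_labeled), rng.integers(0, 2, n_labeled)] = 1.0
        F = Y.copy()
        for _ in range(50):
            F = 0.9 * P @ F + 0.1 * Y                # soft clamping of the seeds
        pred = F.argmax(axis=1)                      # coarse tile labels
        print(pred[:20])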

  9. Large-Scale Parallel Finite Element Analysis of the Stress Singular Problems

    International Nuclear Information System (INIS)

    Noriyuki Kushida; Hiroshi Okuda; Genki Yagawa

    2002-01-01

    In this paper, the convergence behavior of the large-scale parallel finite element method for stress-singular problems was investigated. The convergence behavior of iterative solvers depends on the efficiency of the preconditioners. However, the efficiency of preconditioners may be influenced by the domain decomposition that is necessary for parallel FEM. In this study the following results were obtained: the conjugate gradient method without preconditioning and the diagonal-scaling preconditioned conjugate gradient method were not influenced by the domain decomposition, as expected; the symmetric successive over-relaxation preconditioned conjugate gradient method converged up to 6% faster when the stress-singular area was contained in one sub-domain. (authors)
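
    For readers who want the concrete algorithm, a minimal sketch of the diagonal-scaling (Jacobi) preconditioned conjugate gradient method follows; the matrix is a toy symmetric positive-definite system, not a finite element one:

        # Jacobi-preconditioned conjugate gradients on a toy SPD system.
        import numpy as np

        def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
            Minv = 1.0 / np.diag(A)       # inverse of the diagonal preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        rng = np.random.default_rng(0)
        B = rng.random((50, 50))
        A = B @ B.T + 50 * np.eye(50)     # well-conditioned SPD test matrix
        b = rng.random(50)
        x = jacobi_pcg(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))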

  10. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

    Full Text Available The multitheodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment), combining laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight-axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and the multiresolution visual imaging and tracking technology, it can realize the automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of artificial error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring for large buildings and dams.

  11. Progress in N-type Si Solar Cell and Module Technology for High Efficiency and Low Cost

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dengyuan; Xiong, Jingfeng; Hu, Zhiyan; Li, Gaofei; Wang, Hongfang; An, Haijiao; Yu, Bo; Grenko, Brian; Borden, Kevin; Sauer, Kenneth; Cui, Jianhua; Wang, Haitao [Yingli Green Energy Holding Co., LTD, 071051 Baoding (China); Roessler, T. [Yingli Green Energy Europe GmbH, Heimeranstr. 37, 80339 Munich (Germany); Bultman, J. [ECN Solar Energy, P.O. Box 1, NL-1755 ZG Petten (Netherlands); Vlooswijk, A.H.G.; Venema, P.R. [Tempress Systems BV, Radeweg 31, 8171 Vaassen (Netherlands)

    2012-06-15

    A novel high efficiency solar cell and module technology, named PANDA, using crystalline n-type CZ Si wafers has moved into large-scale production at Yingli. The first commercial sales of the PANDA modules commenced in mid-2010. Up to 600 MW of mass production capacity, spanning crystal-Si growth, wafer slicing, cell processing and module assembly, had been implemented by the end of 2011. The PANDA technology was developed specifically for high efficiency and low cost. In contrast to existing n-type Si solar cell manufacturing methods in mass production, this new technology is largely compatible with a traditional p-type Si solar cell production line using conventional diffusion, SiNx coating and screen-printing technology. With all technologies optimized, Yingli's PANDA solar cells on semi-square 6-inch n-type CZ wafers (cell size 239 cm²) have been improved to an average efficiency currently exceeding 19.0% on commercial production lines and up to 20.0% in pilot production. The PANDA modules have been produced and were certified according to UL1703, IEC 61215 and IEC 61730 standards. Nearly two years of full production on scaled-up lines show that the PANDA modules have high efficiency and power density, superior high-temperature performance, near-zero initial light-induced degradation, and excellent efficiency at low irradiance.

  12. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  13. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders into a point cloud derived from a terrestrial laser-scanned tree. Utilizing high-quality scan data as the input, the resulting models describe the branching structure of the tree and are capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by comparing the resulting cylinder models with ground truth data and by analyzing the distances between the input point clouds and the models. Tree models were accomplished representing more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example application. Computation terminated successfully within less than 30 min. For the model predicting the total above-ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser scanning for forest inventories.
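
    To make the core fitting step concrete, here is a generic least-squares cylinder fit (axis point, axis direction, radius) on synthetic branch-like points; it is a hedged sketch using SciPy, not the authors' implementation:

        # Generic least-squares cylinder fit to a synthetic point cloud patch.
        import numpy as np
        from scipy.optimize import least_squares

        def axis_dir(theta, phi):
            # Unit axis direction from spherical angles.
            return np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])

        def residuals(p, pts):
            # p = (cx, cy, cz, theta, phi, r): axis point, axis angles, radius.
            c, d, r = p[:3], axis_dir(p[3], p[4]), p[5]
            dist_axis = np.linalg.norm(np.cross(pts - c, d), axis=1)
            return dist_axis - r          # signed distance to the cylinder surface

        # Synthetic branch: radius 4 cm around a near-vertical axis, 1 mm noise.
        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 500)
        z = rng.uniform(0, 1, 500)
        pts = np.c_[0.04 * np.cos(t), 0.04 * np.sin(t), z]
        pts += rng.normal(0, 1e-3, pts.shape)

        fit = least_squares(residuals, x0=[0, 0, 0, 0.01, 0.0, 0.03], args=(pts,))
        print("fitted radius (m):", fit.x[5])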

  14. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real-world social networks follow the small-world phenomenon, which indicates that any two social entities are reachable from each other in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms take considerably longer to run. In this work, with the objective of improving the efficiency of such algorithms, a parallel programming framework, MapReduce, has been employed for uncovering hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.

  15. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Directory of Open Access Journals (Sweden)

    Xiaolei Ma

    Full Text Available Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% in less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.

  16. Fusion of Terrestrial and Airborne Laser Data for 3D modeling Applications

    Science.gov (United States)

    Mohammed, Hani Mahmoud

    This thesis deals with the 3D modeling phase of large as-built BIM projects. Among several means of BIM data capturing, such as photogrammetric or range tools, laser scanners have long been one of the most efficient and practical tools. They can generate point clouds with high resolution for 3D models that meet today's market demands. Current 3D modeling projects of as-built BIMs mainly focus on using one type of laser scanner data, either airborne or terrestrial. According to the literature, few efforts have been made towards the fusion of heterogeneous laser scanner data, despite its importance. The importance of fusing heterogeneous data arises from the fact that no single type of laser data can provide all the information about a BIM, especially for large BIM projects spread over a large area, such as university buildings or heritage sites. Terrestrial laser scanners are able to map facades of buildings and other terrestrial objects, but they lack the ability to map roofs or higher parts of the BIM project. Airborne laser scanners, on the other hand, can map roofs of buildings efficiently but only a small part of the facades. Short-range laser scanners can map the interiors of BIM projects, while long-range scanners are used for mapping wide exterior areas. In this thesis, the long-range laser scanner data obtained in the Stop-and-Go mapping mode, the short-range laser scanner data obtained in a fully static mapping mode, and the airborne laser data are all fused together to provide a complete, effective solution for a large BIM project. Working towards the 3D modeling of BIM projects, the thesis framework starts with the registration of the data, for which a new fast automatic registration algorithm was developed. The next step is to recognize the different objects in the BIM project (classification) and obtain 3D models of the buildings. The last step is the development of an

  17. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.

  18. a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    Science.gov (United States)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, dense point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighboring points and a surface-fitting method based on moving least squares is proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces was then compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x and y directions were compared with TS; the comparisons showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.

  19. Towards a Database System for Large-scale Analytics on Strings

    KAUST Repository

    Sahli, Majed A.

    2015-07-23

    Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times. In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motifs extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated motifs extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead. While ACME is cache-efficient, it is limited by being serial. We devise a lightweight

  20. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 ± 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  1. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
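
    As a hedged, non-Bayesian illustration of the same regularization goal (taming an overfit sample covariance when samples are few and features many), scikit-learn's Ledoit-Wolf shrinkage estimator can be compared against the raw sample covariance; this is a swapped-in stand-in, not the hierarchical model proposed above:

        # Shrinkage vs raw sample covariance in an n << p regime.
        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 500))          # 40 samples, 500 features

        sample_cov = np.cov(X, rowvar=False)    # singular: rank <= 39
        lw = LedoitWolf().fit(X)                # shrinks toward a scaled identity
        print("shrinkage weight:", lw.shrinkage_)
        print("shrunk estimate is well-conditioned:",
              np.linalg.cond(lw.covariance_) < 1e6)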

  2. Investigation of the Contamination Control in a Cleaning Room with a Moving AGV by 3D Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2013-01-01

    Full Text Available The motions of the airflow induced by the movement of an automatic guided vehicle (AGV) in a cleanroom are numerically studied by large-scale simulation. For this purpose, a numerical experiment scheme based on a domain decomposition method is designed. Compared with related past research, the high Reynolds number is treated by large-scale computation in this work. A domain decomposition Lagrange-Galerkin method is employed to approximate the Navier-Stokes equations and the convection-diffusion equation; the stiffness matrix is symmetric, and an incomplete balancing preconditioned conjugate gradient (PCG) method is employed to solve the linear algebra system iteratively. The end-wall effects are readily viewed, and the necessity of the extension to 3 dimensions is confirmed. The effect of the high-efficiency particulate air (HEPA) filter on contamination control is studied, and the proper setting of the clean airflow speed is also investigated. More details of the recirculation zones are revealed by the 3D large-scale simulation.

  3. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting have an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and political spheres. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background guarantees that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods with regard to profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither has the competence, assigned to the political system, to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: 1) external control, 2) the organizational form, and 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de]

  4. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Full Text Available Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v1.2) coupled to the Canadian Land Surface Scheme (CLASS v3.6). In the composite (single-tile) approach, the vegetation attributes of different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same

  5. An analysis of Australia's large scale renewable energy target: Restoring market confidence

    International Nuclear Information System (INIS)

    Nelson, Tim; Nelson, James; Ariyaratnam, Jude; Camroux, Simon

    2013-01-01

    In 2001, Australia introduced legislation requiring investment in new renewable electricity generating capacity. The legislation was significantly expanded in 2009 to give effect to a 20% Renewable Energy Target (RET). Importantly, the policy was introduced with bipartisan support and is consistent with global policy trends. In this article, we examine the history of the policy and establish that the ‘stop/start’ nature of renewable policy development has resulted in investors withholding new capital until greater certainty is provided. We utilise the methodology from Simshauser and Nelson (2012) to examine whether capital market efficiency losses would occur under certain policy scenarios. The results show that electricity costs would increase by between $51 million and $119 million if the large-scale RET is abandoned, even after accounting for avoided renewable costs. Our conclusions are clear: we find that policymakers should be guided by a high-level public policy principle in relation to large-scale renewable energy policy: constant review is not reform. - Highlights: • We examine the history of Australian renewable energy policy. • We examine whether capital market efficiency losses occur under certain policy scenarios. • We find electricity prices increase by up to $119 million due to renewable policy uncertainty. • We conclude that constant review of policy is not reform and should be avoided.

  6. A climatological analysis of high-precipitation events in Dronning Maud Land, Antarctica, and associated large-scale atmospheric conditions

    NARCIS (Netherlands)

    Welker, Christoph; Martius, Olivia; Froidevaux, Paul; Reijmer, Carleen H.; Fischer, Hubertus

    2014-01-01

    The link between high precipitation in Dronning Maud Land (DML), Antarctica, and the large-scale atmospheric circulation is investigated using ERA-Interim data for 1979-2009. High-precipitation events are analyzed at Halvfarryggen situated in the coastal region of DML and at Kohnen Station located

  7. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  8. Highly efficient blue organic light emitting device using indium-free transparent anode Ga:ZnO with scalability for large area coating

    International Nuclear Information System (INIS)

    Wang Liang; Matson, Dean W.; Polikarpov, Evgueni; Swensen, James S.; Bonham, Charles C.; Cosimbescu, Lelia; Gaspar, Daniel J.; Padmaperuma, Asanga B.; Berry, Joseph J.; Ginley, David S.

    2010-01-01

    Organic light emitting devices have been achieved with an indium-free transparent anode, Ga-doped ZnO (GZO). A large-area coating technique (RF magnetron sputtering) was used to deposit the GZO films onto glass. The respective organic light emitting devices exhibited an operational voltage of 3.7 V, an external quantum efficiency of 17%, and a power efficiency of 39 lm/W at a current density of 1 mA/cm². These parameters are well within acceptable standards for blue OLEDs to generate a white light with high enough brightness for general lighting applications. It is expected that high-efficiency, long-lifetime, large area, and cost-effective white OLEDs can be made with these indium-free anode materials.

  9. Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings

    Science.gov (United States)

    Morales, Ramses Victor

    2009-01-01

    Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…

  10. Highly Efficient Single-Step Enrichment of Low Abundance Phosphopeptides from Plant Membrane Preparations

    Directory of Open Access Journals (Sweden)

    Xu Na Wu

    2017-09-01

    Full Text Available Mass spectrometry (MS)-based large-scale phosphoproteomics has facilitated the investigation of plant phosphorylation dynamics on a system-wide scale. However, generating large-scale data sets for membrane phosphoproteins usually requires fractionation of samples and extended hands-on laboratory time. To overcome these limitations, we developed "ShortPhos," an efficient and simple phosphoproteomics protocol optimized for research on plant membrane proteins. The optimized workflow allows fast and efficient identification and quantification of phosphopeptides, even from small amounts of starting plant material. "ShortPhos" can produce label-free datasets with high quantitative reproducibility. In addition, the "ShortPhos" protocol recovered more phosphorylation sites from membrane proteins, especially plasma membrane and vacuolar proteins, when compared to our previous workflow and other membrane-based data in the PhosPhAt 4.0 database. We applied "ShortPhos" to study kinase-substrate relationships in a nitrate-induction experiment on Arabidopsis roots. "ShortPhos" identified significantly more known kinase-substrate relationships than previous phosphoproteomics workflows, producing new insights into nitrate-induced signaling pathways.

  11. LARGE SCALE TEXTURED MESH RECONSTRUCTION FROM MOBILE MAPPING IMAGES AND LIDAR SCANS

    Directory of Open Access Journals (Sweden)

    M. Boussaha

    2018-05-01

    Full Text Available The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  12. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    Science.gov (United States)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  13. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    Science.gov (United States)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers and has important implications for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and computation burden of the high-order normal equations. Finally, we used the conjugate gradient method to improve the speed of solving these high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
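
    A minimal sketch of the sparsity-aware solving strategy (compressed sparse storage plus conjugate gradients) is given below with SciPy on a toy damped normal-equation system; treating the paper's three-array data structure as analogous to the standard CSR triplet of arrays is an assumption on our part:

        # Sparse normal equations solved with conjugate gradients (toy system).
        import numpy as np
        from scipy.sparse import random as sparse_random, eye
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        J = sparse_random(2000, 500, density=0.01, format="csr", random_state=0)
        N = (J.T @ J + 1e-3 * eye(500)).tocsr()   # damped SPD normal matrix
        b = rng.random(500)

        x, info = cg(N, b)                        # info == 0 on convergence
        print("converged:", info == 0,
              "residual norm:", np.linalg.norm(N @ x - b))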

  14. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    Science.gov (United States)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomenon types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) support, STA is also well suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful in environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing for customization by adopters who can choose to replace parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers like Azure, Amazon, Google, etc. The scalable, flexible and cloud-friendly architecture of Kinota makes it ideal for use in next
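
    To give a flavor of the developer-facing side, the sketch below posts a single Observation to a SensorThings-compliant service over HTTP. The service URL and Datastream id are hypothetical; the JSON shape follows the OGC STA v1.0 entity model:

        # Publish one observation to a (hypothetical) SensorThings endpoint.
        import requests

        STA_ROOT = "http://example.org/v1.0"        # hypothetical deployment

        observation = {
            "phenomenonTime": "2017-12-01T10:00:00Z",
            "result": 21.7,                          # e.g. water temperature, deg C
            "Datastream": {"@iot.id": 1},            # link to an existing Datastream
        }
        resp = requests.post(f"{STA_ROOT}/Observations",
                             json=observation, timeout=10)
        resp.raise_for_status()
        print("created:", resp.headers.get("Location"))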

  15. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  16. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, covering both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free mobility model with two essential ingredients, preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
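
    A compact sketch of the exploration / preferential-return mechanism the abstract describes follows; the parameter values are illustrative, and the Gaussian draw mirrors the stated assumption on the exploration tendency:

        # Exploration / preferential-return mobility model (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(0)
        rho = rng.normal(0.6, 0.1)         # Gaussian exploration tendency (assumed)
        gamma = 0.21                       # attenuation of exploration (illustrative)
        visits = {0: 1}                    # location id -> visit count

        for _ in range(10_000):
            s = len(visits)                # number of distinct locations so far
            if rng.random() < rho * s ** -gamma:
                visits[s] = 1              # explore a brand-new location
            else:                          # return, preferring frequent places
                locs = list(visits)
                counts = np.array([visits[l] for l in locs], dtype=float)
                visits[rng.choice(locs, p=counts / counts.sum())] += 1

        print("distinct locations visited:", len(visits))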

  17. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    Science.gov (United States)

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
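
    A hedged sketch of the "unit cell" indexing idea: bin Lagrangian particles by the Eulerian grid cell containing them, so that per-cell queries touch both representations at once (the uniform grid and all names are illustrative):

        # Index Lagrangian particles by their enclosing Eulerian grid cell.
        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        pos = rng.random((100_000, 3))             # particle positions in [0, 1)^3
        nx = 32                                    # grid resolution per axis

        cell_ids = np.floor(pos * nx).astype(int)  # per-axis cell indices
        flat = np.ravel_multi_index(cell_ids.T, (nx, nx, nx))

        cells = defaultdict(list)                  # unit cell -> particle indices
        for pid, c in enumerate(flat):
            cells[c].append(pid)

        # Query: all particles inside Eulerian cell (3, 10, 17).
        q = np.ravel_multi_index((3, 10, 17), (nx, nx, nx))
        print(len(cells[q]), "particles in cell (3, 10, 17)")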

  18. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    Science.gov (United States)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national scale flood risk analyses, using high resolution Facebook Connectivity Lab population data and data from a hyper resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, a resolution increase over previous countrywide data sets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
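
    The core overlay calculation, intersecting modelled water depths with gridded population, reduces to a masked sum once the two grids are co-registered. The sketch below is a generic illustration with synthetic data and a hypothetical depth threshold, including the kind of block-sum aggregation needed to reconcile a ~5 m population grid with ~90 m hazard output.

        import numpy as np

        def aggregate(pop_fine, factor):
            # Aggregate a fine population grid onto a coarser hazard grid by
            # block summation, preserving total population counts.
            h, w = pop_fine.shape
            p = pop_fine[:h - h % factor, :w - w % factor]
            return p.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

        def exposed_population(depth, population, threshold=0.0):
            # Sum the population in cells where modelled depth exceeds the
            # threshold; both grids must be co-registered.
            assert depth.shape == population.shape
            return population[depth > threshold].sum()

        rng = np.random.default_rng(42)
        pop_fine = rng.poisson(0.3, (1000, 1000)).astype(float)  # ~5 m cells
        depth = rng.gamma(1.0, 0.5, (50, 50))                    # ~90 m cells
        print(exposed_population(depth, aggregate(pop_fine, 20), threshold=0.1))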

  19. Roll-to-Roll printed large-area all-polymer solar cells with 5% efficiency based on a low crystallinity conjugated polymer blend

    Science.gov (United States)

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin; Kurosawa, Tadanori; Yan, Hongping; Wang, Cheng; Toney, Micheal; Bao, Zhenan

    The challenge of continuous printing of high-efficiency, large-area organic solar cells is a key factor limiting their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction (BHJ) solar cells with a stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of the donor and acceptor polymers, thus forming intermixed, low crystallinity and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, our results showed that microphase-separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. We were able to continuously roll-to-roll slot-die print large-area all-polymer solar cells with power conversion efficiencies of 5%, with combined cell area up to 10 cm2. This is among the highest efficiencies realized with R2R-coated active-layer organic materials on a flexible substrate. Supported by the DOE SunShot BRIDGE program and the Office of Naval Research.

  20. MacroBac: New Technologies for Robust and Efficient Large-Scale Production of Recombinant Multiprotein Complexes.

    Science.gov (United States)

    Gradia, Scott D; Ishida, Justin P; Tsai, Miaw-Sheue; Jeans, Chris; Tainer, John A; Fuss, Jill O

    2017-01-01

    Recombinant expression of large, multiprotein complexes is essential and often rate limiting for determining structural, biophysical, and biochemical properties of DNA repair, replication, transcription, and other key cellular processes. Baculovirus-infected insect cell expression systems are especially well suited for producing large, human proteins recombinantly, and multigene baculovirus systems have facilitated studies of multiprotein complexes. In this chapter, we describe a multigene baculovirus system called MacroBac that uses a Biobricks-type assembly method based on restriction and ligation (Series 11) or ligation-independent cloning (Series 438). MacroBac cloning and assembly is efficient and equally well suited for either single subcloning reactions or high-throughput cloning using 96-well plates and liquid handling robotics. MacroBac vectors are polypromoter with each gene flanked by a strong polyhedrin promoter and an SV40 poly(A) termination signal that minimize gene order expression level effects seen in many polycistronic assemblies. Large assemblies are robustly achievable, and we have successfully assembled as many as 10 genes into a single MacroBac vector. Importantly, we have observed significant increases in expression levels and quality of large, multiprotein complexes using a single, multigene, polypromoter virus rather than coinfection with multiple, single-gene viruses. Given the importance of characterizing functional complexes, we believe that MacroBac provides a critical enabling technology that may change the way that structural, biophysical, and biochemical research is done. © 2017 Elsevier Inc. All rights reserved.

  1. Growing vertical ZnO nanorod arrays within graphite: efficient isolation of large size and high quality single-layer graphene.

    Science.gov (United States)

    Ding, Ling; E, Yifeng; Fan, Louzhen; Yang, Shihe

    2013-07-18

    We report a unique strategy for efficiently exfoliating large size and high quality single-layer graphene directly from graphite into DMF dispersions by growing ZnO nanorod arrays between the graphene layers in graphite.

  2. The green computing book tackling energy efficiency at large scale

    CERN Document Server

    Feng, Wu-chun

    2014-01-01

    Contents include: Low-Power, Massively Parallel, Energy-Efficient Supercomputers (The Blue Gene Team); Compiler-Driven Energy Efficiency (Mahmut Kandemir and Shekhar Srikantaiah); An Adaptive Run-Time System for Improving Energy Efficiency (Chung-Hsing Hsu, Wu-chun Feng, and Stephen W. Poole); Energy-Efficient Multithreading through Run-Time Adaptation; Exploring Trade-Offs between Energy Savings and Reliability in Storage Systems (Ali R. Butt, Puranjoy Bhattacharjee, Guanying Wang, and Chris Gniady); Cross-Layer Power Management (Zhikui Wang and Parthasarathy Ranganathan); Energy-Efficient Virtualized Systems (Ripal Nathuji and K...)

  3. Large scale high strain-rate tests of concrete

    Directory of Open Access Journals (Sweden)

    Kiefer R.

    2012-08-01

    This work presents the stages of development of innovative equipment, based on Hopkinson bar techniques, for performing large-scale dynamic tests of concrete specimens. The activity is centered at the recently upgraded HOPLAB facility, which is basically a split Hopkinson bar with a total length of approximately 200 m and bar diameters of 72 mm. By pre-tensioning and suddenly releasing a steel cable, force pulses of up to 2 MN, with 250 μs rise time and 40 ms duration, can be generated and applied to the specimen tested. Dynamic compression loading was treated first, and several modifications to the basic configuration have been introduced. Twin incident and transmitter bars have been installed with strong steel plates at their ends where large specimens can be accommodated. A series of calibration and qualification tests has been conducted, and the first real tests on concrete cylindrical specimens of 20 cm diameter and up to 40 cm length have commenced. Preliminary results from the analysis of the recorded signals indicate proper Hopkinson bar testing conditions and reliable functioning of the facility.

  4. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Choi Jeonghee

    2008-01-01

    So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them are limited to fewer than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, supporting more than thousands of nodes with the existing standards will be extremely challenging. Moreover, changing the existing standards is highly unlikely, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we show that our approach achieves roughly half the routing time of the existing standard in a ZigBee network.

  5. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yongwan Park

    2008-12-01

    So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them are limited to fewer than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, supporting more than thousands of nodes with the existing standards will be extremely challenging. Moreover, changing the existing standards is highly unlikely, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we show that our approach achieves roughly half the routing time of the existing standard in a ZigBee network.

  6. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind its counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savanna, desert, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO2. The influence of terrestrial ecosystems on atmospheric CO2 can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem carbon models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO2 concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adopted simple box-flow exchange models, in which photosynthetic CO2 uptake and respiratory CO2 release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality, etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change.

  7. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  8. High-efficiency amorphous silicon solar cell on a periodic nanocone back reflector

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Ching-Mei; Cui, Yi [Department of Materials Science and Engineering, Durand Building, 496 Lomita Mall, Stanford University, Stanford, CA 94305-4034 (United States); Battaglia, Corsin; Pahud, Celine; Haug, Franz-Josef; Ballif, Christophe [Ecole Polytechnique Federale de Lausanne (EPFL), Institute of Microengineering (IMT), Photovoltaics and Thin Film Electronics Laboratory, Rue Breguet 2, 2000 Neuchatel (Switzerland); Ruan, Zhichao; Fan, Shanhui [Department of Electrical Engineering, Stanford University (United States)

    2012-06-15

    An amorphous silicon solar cell on a periodic nanocone back reflector with a high 9.7% initial conversion efficiency is presented. The optimized back-reflector morphology provides powerful light trapping and enables excellent electrical cell performance. Up-scaling to industrial production of large-area modules should be possible using nanoimprint lithography. (© 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  9. V. Terrestrial vertebrates

    Science.gov (United States)

    Dean Pearson; Deborah Finch

    2011-01-01

    Within the Interior West, terrestrial vertebrates do not represent a large number of invasive species relative to invasive weeds, aquatic vertebrates, and invertebrates. However, several invasive terrestrial vertebrate species do cause substantial economic and ecological damage in the U.S. and in this region (Pimental 2000, 2007; Bergman and others 2002; Finch and...

  10. Rapid Prototyping — A Tool for Presenting 3-Dimensional Digital Models Produced by Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Juho-Pekka Virtanen

    2014-07-01

    Rapid prototyping has received considerable interest with the introduction of affordable rapid prototyping machines. These machines can be used to manufacture physical models from three-dimensional digital mesh models. In this paper, we compare the results obtained with a new, affordable, rapid prototyping machine and a traditional professional machine. Two separate data sets are used for this, both of which were acquired using terrestrial laser scanning. Both of the machines were able to produce complex and highly detailed geometries in plastic material from models based on terrestrial laser scanning. The dimensional accuracies and detail levels of the machines were comparable, and the physical artifacts caused by the fused deposition modeling (FDM) technique used in the rapid prototyping machines could be found in both models. The accuracy of terrestrial laser scanning exceeded the requirements for manufacturing physical models of large statues and building segments at a 1:40 scale.

  11. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
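
    The binned-pdf integration is easy to make concrete. In the sketch below, a nonlinear stress function is integrated over a binned wetness distribution rather than evaluated at the grid-mean wetness; the piecewise-linear stress function and the bin layout are illustrative assumptions, not the paper's exact forms. Because the stress function is nonlinear, the two estimates differ, which is exactly the inaccuracy the binning method removes.

        import numpy as np

        def grid_flux(bin_wetness, bin_weights, stress_fn, potential_rate):
            # Integrate the wetness-stress function over the binned pdf
            # instead of evaluating it at the grid-mean wetness.
            return potential_rate * np.sum(bin_weights * stress_fn(bin_wetness))

        # Illustrative stress function: 0 below a wilting point (0.1),
        # 1 above field capacity (0.6), linear in between.
        beta = lambda w: np.clip((w - 0.1) / (0.6 - 0.1), 0.0, 1.0)

        w_bins = np.linspace(0.05, 0.95, 10)      # bin-centre wetness values
        p_bins = np.full(10, 0.1)                 # uniform pdf for the demo

        et_from_mean = beta(np.sum(p_bins * w_bins))         # naive estimate
        et_from_pdf  = grid_flux(w_bins, p_bins, beta, 1.0)  # binned integral
        print(et_from_mean, et_from_pdf)                     # they disagree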

  12. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  13. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  14. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  15. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    Science.gov (United States)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset, open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  16. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset, open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  17. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
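
    The per-pixel linked list at the core of this approach can be illustrated compactly. The sketch below is a CPU-side analogue of ours, not the thesis code: a head-pointer image plus a flat node array, where inserting a pathline segment pushes a node whose next pointer is the previous head, exactly as in a GPU A-buffer build pass.

        import numpy as np

        class PixelLists:
            def __init__(self, w, h):
                self.head = np.full((h, w), -1, dtype=np.int64)  # -1 = empty
                self.nodes = []            # flat pool: (payload, next_index)

            def insert(self, x, y, payload):
                # New node points at the old head; head now points at it.
                self.nodes.append((payload, self.head[y, x]))
                self.head[y, x] = len(self.nodes) - 1

            def pixel_segments(self, x, y):
                # Walk the chain for one pixel, yielding its segments.
                i = self.head[y, x]
                while i != -1:
                    payload, i = self.nodes[i]
                    yield payload

        buf = PixelLists(640, 480)
        buf.insert(10, 20, ("pathline_7", 0.42))   # (segment id, depth), say
        buf.insert(10, 20, ("pathline_3", 0.18))
        print(list(buf.pixel_segments(10, 20)))    # later segments come first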

  18. High efficiency and broadband acoustic diodes

    Science.gov (United States)

    Fu, Congyi; Wang, Bohan; Zhao, Tianfei; Chen, C. Q.

    2018-01-01

    Energy transmission efficiency and working bandwidth are the two major factors limiting the application of current acoustic diodes (ADs). This letter presents a design for high-efficiency, broadband acoustic diodes composed of a nonlinear frequency converter and a linear wave filter. The converter consists of two masses connected by a bilinear spring with asymmetric tension and compression stiffness. The wave filter is a linear mass-spring lattice (sonic crystal). Both numerical simulation and experiment show that the energy transmission efficiency of the acoustic diode can be improved by as much as two orders of magnitude, reaching about 61%. Moreover, the primary working band of the AD is about twice the cut-off frequency of the sonic crystal filter. The cut-off-frequency-dependent working band implies that the developed AD can be scaled up or down from the macro-scale to the micro- and nano-scale.

  19. Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data

    Science.gov (United States)

    Fries, K. J.; Kerkez, B.

    2017-12-01

    We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Approximately one third of these sites have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.
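
    A minimal version of the flow-to-stage mapping can be written as a linear ARX model fitted by least squares. This is our simplification for illustration only; the study's Dynamical System Identification is more general, and the names below (q_nwm for NWM discharge forecasts, stage for the local sensor record) are assumptions.

        import numpy as np

        def fit_arx(stage, q_nwm):
            # stage[t] ~ a*stage[t-1] + b*q_nwm[t] + c, fitted on history.
            X = np.column_stack([stage[:-1], q_nwm[1:], np.ones(len(stage) - 1)])
            coef, *_ = np.linalg.lstsq(X, stage[1:], rcond=None)
            return coef

        def forecast(coef, stage0, q_future):
            # Roll the fitted model forward, driven only by NWM discharge.
            a, b, c = coef
            s, out = stage0, []
            for q in q_future:
                s = a * s + b * q + c
                out.append(s)
            return np.array(out)

        rng = np.random.default_rng(3)
        q = rng.gamma(2.0, 50.0, 500)               # synthetic discharge
        stage = 0.01 * np.convolve(q, np.ones(5) / 5, mode="same") + 1.0
        coef = fit_arx(stage[:400], q[:400])
        pred = forecast(coef, stage[399], q[400:])  # out-of-sample stage levels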

  20. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable because of their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
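
    To make the Monte Carlo ingredient tangible, the sketch below solves a toy two-stage recourse problem by sample average approximation, writing the sampled deterministic equivalent as a single LP. The report's method is considerably more sophisticated (Benders-type decomposition plus importance sampling); this illustrates only the sampling idea, and all costs and demand parameters are invented.

        import numpy as np
        from scipy.optimize import linprog

        # Stage 1: buy capacity x at unit cost c. Stage 2: after demand d_i
        # is revealed, buy recourse y_i at unit cost q so that x + y_i >= d_i.
        rng = np.random.default_rng(1)
        d = rng.normal(100, 20, 200)          # Monte Carlo demand scenarios
        c, q, N = 1.0, 3.0, len(d)

        obj = np.concatenate([[c], np.full(N, q / N)])  # c*x + (q/N)*sum(y_i)
        A = np.hstack([-np.ones((N, 1)), -np.eye(N)])   # -x - y_i <= -d_i
        res = linprog(obj, A_ub=A, b_ub=-d,
                      bounds=[(0, None)] * (N + 1))
        print("first-stage decision x* =", res.x[0])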

  1. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within it), including various data types, scales, qualities, and variable catchment input parameters. Historic and available DEM data were compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.

  2. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)

  3. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  4. Evaluating the potential of large-scale simulations to predict carbon fluxes of terrestrial ecosystems over a European Eddy Covariance network

    International Nuclear Information System (INIS)

    Balzarolo, M.; Boussetta, S.; Balsamo, G.; Beljaars, A.; Maignan, F.; Chevallier, F.; Poulter, B.

    2014-01-01

    This paper reports a comparison between large scale simulations of three different land surface models (LSMs), ORCHIDEE, ISBA-A-gs and CTESSEL, forced with the same meteorological data, and compared with the carbon fluxes measured at 32 eddy covariance (EC) flux tower sites in Europe. The results show that the three simulations have the best performance for forest sites and the poorest performance for cropland and grassland sites. In addition, the three simulations have difficulties capturing the seasonality of Mediterranean and sub-tropical biomes, characterized by dry summers. This reduced simulation performance is also reflected in deficiencies in diagnosed light-use efficiency (LUE) and vapour pressure deficit (VPD) dependencies compared to observations. Shortcomings in the forcing data may also play a role. These results indicate that more research is needed on the LUE and VPD functions for Mediterranean and sub-tropical biomes. Finally, this study highlights the importance of correctly representing phenology (i.e. leaf area evolution) and management (i.e. rotation-irrigation for cropland, and grazing-harvesting for grassland) to simulate the carbon dynamics of European ecosystems and the importance of ecosystem-level observations in model development and validation. (authors)

  5. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    basically consisted of (1) decomposing both signals (the SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time scale, and (3) summing up all scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further work may concern a fully comprehensive sensitivity analysis of the modeling to the parameters of the multiresolution approach (the families of scaling and wavelet functions used, the number of coefficients/degree of smoothness, etc.).
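
    Steps (1)-(3) can be sketched with a crude dyadic moving-average decomposition standing in for the discrete wavelet multiresolution analysis used in the study. Everything below is an illustrative reimplementation under that substitution; P (a predictor matrix, e.g. SLP principal components) and y (local precipitation or streamflow) are placeholder names.

        import numpy as np

        def dyadic_components(x, levels=4):
            # Split x into detail components at dyadic scales plus a final
            # smooth; the components sum back to x exactly (telescoping).
            comps, smooth = [], np.asarray(x, dtype=float)
            for k in range(1, levels + 1):
                w = 2 ** k
                smoother = np.convolve(smooth, np.ones(w) / w, mode="same")
                comps.append(smooth - smoother)
                smooth = smoother
            return comps + [smooth]

        def mr_esd_fit_predict(P_train, y_train, P_all, levels=4):
            # One least-squares model per time scale, then sum the
            # scale-dependent predictions (steps 2 and 3).
            y_comps = dyadic_components(y_train, levels)
            y_hat = np.zeros(P_all.shape[0])
            for k, comp in enumerate(y_comps):
                Ptr = np.column_stack(
                    [dyadic_components(p, levels)[k] for p in P_train.T])
                Pall = np.column_stack(
                    [dyadic_components(p, levels)[k] for p in P_all.T])
                beta, *_ = np.linalg.lstsq(
                    np.c_[Ptr, np.ones(len(Ptr))], comp, rcond=None)
                y_hat += np.c_[Pall, np.ones(len(Pall))] @ beta
            return y_hat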

  6. A generic library for large scale solution of PDEs on modern heterogeneous architectures

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter

    2012-01-01

    Adapting to new programming models for modern multi- and many-core architectures requires code-rewriting and changing algorithms and data structures, in order to achieve good efficiency and scalability. We present a generic library for solving large scale partial differential equations (PDEs), capable of utilizing heterogeneous CPU/GPU environments. The library can be used for fast prototyping of PDE solvers, based on finite difference approximations of spatial derivatives in one, two, or three dimensions. In order to efficiently solve large scale problems, we keep memory consumption and memory access low, using a low-storage implementation of flexible-order finite difference operators. We will illustrate the use of library components by assembling such matrix-free operators to be used with one of the supported iterative solvers, such as GMRES, CG, Multigrid or Defect Correction.
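
    The matrix-free operator style described here can be mimicked in a few lines. The sketch below is ours (NumPy/SciPy rather than the library's CPU/GPU back end): a 3-point finite-difference Laplacian is wrapped as a LinearOperator and handed to a Krylov solver, so no matrix is ever assembled.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        n = 256
        h = 1.0 / (n + 1)

        def apply_laplacian(u):
            # Matrix-free stencil application for -u'' with homogeneous
            # Dirichlet boundaries; only O(n) storage is ever used.
            v = np.empty_like(u)
            v[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
            v[0] = (2 * u[0] - u[1]) / h**2
            v[-1] = (2 * u[-1] - u[-2]) / h**2
            return v

        A = LinearOperator((n, n), matvec=apply_laplacian)
        b = np.ones(n)                  # right-hand side of -u'' = 1
        u, info = cg(A, b)              # conjugate gradients, matrix-free
        assert info == 0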

  7. A NEW APPROACH FOR SUBWAY TUNNEL DEFORMATION MONITORING: HIGH-RESOLUTION TERRESTRIAL LASER SCANNING

    Directory of Open Access Journals (Sweden)

    J. Li

    2012-07-01

    With improvements in the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, dense point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point cloud segmentation method based on vectors of neighbor points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, using a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbor points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from the temporal series of fitted tunnel surfaces was then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x and y directions were compared with TS; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. The new approach using high-resolution TLS can therefore meet the demands of subway tunnel deformation monitoring.

  8. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  9. Alternative treatment of ovarian cysts with Tribulus terrestris extract: a rat model.

    Science.gov (United States)

    Dehghan, A; Esfandiari, A; Bigdeli, S Momeni

    2012-02-01

    Tribulus terrestris has long been used in traditional medicine to treat impotency and improve sexual function in men. The aim of this study was to evaluate the efficiency of T. terrestris extract in the treatment of polycystic ovary (PCO) in the Wistar rat. Estradiol valerate was injected into 15 mature Wistar rats to induce PCO. Rats were randomly divided into three groups (control, low-dose and high-dose) of five each, receiving 0, 5 and 10 mg of T. terrestris extract, respectively. Treatments began on days 50 and 61 after estradiol injection; at the same time, vaginal smears were prepared. The ovaries were removed on day 62, and histological sections were prepared accordingly. The number and diameter of corpora lutea, the thickness of the theca interna layer and the number of all follicles were evaluated in both ovaries. In comparison with the control group, the number of corpora lutea and primary and secondary follicles significantly increased following T. terrestris treatment; however, the number of ovarian cysts significantly decreased. It can be concluded that T. terrestris has a luteinizing effect on ovarian cysts, which may relate to its gonadotropin-like activity; also, a high dose of the extract can efficiently remove ovarian cysts and allow ovarian activity to resume. © 2011 Blackwell Verlag GmbH.

  10. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but the image resolution generally does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but these require a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  11. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the fault-free operation of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). It presents a comparative analysis of the factors that determine the reliability of integrated circuits, an analysis of existing methods, and a model for evaluating the fault-free operation of LSI and VLSI. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.

  12. Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances

    Science.gov (United States)

    Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana

    2016-04-01

    Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of their resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached, and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.
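
    The threshold behaviour predicted here follows from element mass balance and can be written compactly. The notation below is ours, not necessarily the authors': $e_C$ and $e_N$ are carbon- and nitrogen-use efficiencies, and $r_B$ and $r_S$ are the C:N ratios of decomposer biomass and substrate. Balancing carbon and nitrogen through growth links the two efficiencies,

        e_N = e_C \, \frac{r_S}{r_B},

    so growth-rate maximization yields the piecewise form

        e_C =
        \begin{cases}
          e_{C,\max}, & r_S \le r_B / e_{C,\max} \quad \text{(C limits; excess N is mineralized, } e_N < 1\text{)}\\
          r_B / r_S,  & r_S > r_B / e_{C,\max} \quad \text{(N limits; } e_N = 1\text{)}
        \end{cases}

    In this sketch, $e_N$ rises with substrate C:N until it saturates at one at the threshold $r_S^\ast = r_B / e_{C,\max}$; beyond the threshold, C-use efficiency declines as $r_B/r_S$ while nitrogen is retained with maximal efficiency, matching the trends described above.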

  13. Large-scale tests of aqueous scrubber systems for LMFBR vented containment

    International Nuclear Information System (INIS)

    McCormack, J.D.; Hilliard, R.K.; Postma, A.K.

    1980-01-01

    Six large-scale air cleaning tests performed in the Containment Systems Test Facility (CSTF) are described. The test conditions simulated those postulated for hypothetical accidents in an LMFBR involving containment venting to control hydrogen concentration and containment overpressure. Sodium aerosols were generated by continuously spraying sodium into air and adding steam and/or carbon dioxide to create the desired Na2O2, Na2CO3 or NaOH aerosol. Two air cleaning systems were tested: (a) spray quench chamber, eductor venturi scrubber and high efficiency fibrous scrubber in series; and (b) the same except with the spray quench chamber eliminated. The gas flow rates ranged up to 0.8 m3/s (1700 acfm) at temperatures up to 313 °C (600 °F). Quantities of aerosol removed from the gas stream ranged up to 700 kg per test. The systems performed very satisfactorily, with overall aerosol mass removal efficiencies exceeding 99.9% in each test.

  14. Deposition and Burial Efficiency of Terrestrial Organic Carbon Exported from Small Mountainous Rivers to the Continental Margin, Southwest of Taiwan

    Science.gov (United States)

    Hsu, F.; Lin, S.; Wang, C.; Huh, C.

    2007-12-01

    Terrestrial organic carbon exported from small mountainous rivers to the continental margin may play an important role in the global carbon cycle and its biogeochemical processes. The huge amount of suspended material delivered by small rivers in southwestern Taiwan (104 million tons per year) could serve as a major carbon source to the adjacent ocean. However, little is known concerning the fate of this terrigenous organic carbon. The purpose of this study is to calculate the flux of terrigenous organic carbon deposited on the continental margin offshore of southwestern Taiwan by investigating the spatial variation of organic carbon content, organic carbon isotopic compositions, organic carbon deposition rates and burial efficiency. Results show that organic carbon compositions in the sediment are strongly influenced by terrestrial material exported from small rivers in the region, the Kaoping River, Tseng-wen River and Er-jan River. In addition, a major part of the terrestrial material exported from the Kaoping River may bypass the shelf region and be transported directly into the deep sea (South China Sea) through the Kaoping Canyon. Organic carbon isotopic compositions with lighter values are found near the Kaoping River and Tseng-wen River mouths and change rapidly from heavier to lighter values from shelf to slope. Patches of lighter organic carbon isotopic composition with high organic carbon content are also found in areas west of the Kaoping River mouth, near Kaohsiung city. Furthermore, terrigenous organic carbon with lighter isotopic values is found in the Kaoping Canyon. A total of 0.028 Mt/yr of terrestrial organic carbon was found deposited in the study area, representing only about 10 percent of the terrestrial organic carbon delivered to it. The majority (~90 percent) of the organic carbon exported from the Kaoping River may be transported directly into the deep sea (South China Sea), becoming a major source of organic carbon there.

  15. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai

    2016-01-01

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
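
    The storage/recomputation trade-off behind such a scheme can be demonstrated with a toy adjoint loop. The sketch below is a deliberately simplified stand-in of ours: level 1 keeps every seg-th state (standing in for disk), and the reverse sweep recomputes one segment at a time into memory (level 2). The actual scheme uses asynchronous, bandwidth-limited disk writes and a binomial memory schedule; the model problem and all names here are invented.

        import numpy as np

        def forward_step(u):           # stand-in for one solver time step
            return 0.99 * u + 0.01 * np.tanh(u)

        def adjoint_step(u, lam):      # adjoint of forward_step at state u
            return lam * (0.99 + 0.01 / np.cosh(u) ** 2)

        def adjoint_two_level(u0, nsteps, seg=64):
            disk = {0: u0.copy()}                     # level 1: sparse states
            u = u0.copy()
            for t in range(nsteps):
                u = forward_step(u)
                if (t + 1) % seg == 0:
                    disk[t + 1] = u.copy()
            lam = np.ones_like(u)                     # seed d(sum u_T)/du_T
            cache, cache_base = {}, None
            for t in reversed(range(nsteps)):
                base = (t // seg) * seg
                if cache_base != base:                # entering a new segment:
                    cache, cache_base = {base: disk[base]}, base
                    for k in range(base, min(base + seg, nsteps)):
                        cache[k + 1] = forward_step(cache[k])
                lam = adjoint_step(cache[t], lam)     # level 2: from memory
            return lam                                # gradient w.r.t. u0

        g = adjoint_two_level(np.array([0.3, -1.2]), nsteps=1000, seg=64)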

  16. Response of Water Use Efficiency to Global Environmental Change Based on Output From Terrestrial Biosphere Models

    Science.gov (United States)

    Zhou, Sha; Yu, Bofu; Schwalm, Christopher R.; Ciais, Philippe; Zhang, Yao; Fisher, Joshua B.; Michalak, Anna M.; Wang, Weile; Poulter, Benjamin; Huntzinger, Deborah N.; Niu, Shuli; Mao, Jiafu; Jain, Atul; Ricciuto, Daniel M.; Shi, Xiaoying; Ito, Akihiko; Wei, Yaxing; Huang, Yuefei; Wang, Guangqian

    2017-11-01

    Water use efficiency (WUE), defined as the ratio of gross primary productivity and evapotranspiration at the ecosystem scale, is a critical variable linking the carbon and water cycles. Incorporating a dependency on vapor pressure deficit, apparent underlying WUE (uWUE) provides a better indicator of how terrestrial ecosystems respond to environmental changes than other WUE formulations. Here we used 20th century simulations from four terrestrial biosphere models to develop a novel variance decomposition method. With this method, we attributed variations in apparent uWUE to both the trend and interannual variation of environmental drivers. The secular increase in atmospheric CO2 explained a clear majority of total variation (66 ± 32%: mean ± one standard deviation), followed by positive trends in nitrogen deposition and climate, as well as a negative trend in land use change. In contrast, interannual variation was mostly driven by interannual climate variability. To analyze the mechanism of the CO2 effect, we partitioned the apparent uWUE into the transpiration ratio (transpiration over evapotranspiration) and potential uWUE. The relative increase in potential uWUE parallels that of CO2, but this direct CO2 effect was offset by 20 ± 4% by changes in ecosystem structure, that is, leaf area index for different vegetation types. However, the decrease in transpiration due to stomatal closure with rising CO2 was reduced by 84% by an increase in leaf area index, resulting in small changes in the transpiration ratio. CO2 concentration thus plays a dominant role in driving apparent uWUE variations over time, but its role differs for the two constituent components: potential uWUE and transpiration.
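
    For reference, the quantities being decomposed can be written out explicitly. Our rendering of the standard uWUE formulation this analysis appears to build on (with GPP gross primary productivity, ET evapotranspiration, T transpiration and VPD vapour pressure deficit) is

        \mathrm{WUE} = \frac{\mathrm{GPP}}{\mathrm{ET}}, \qquad
        \mathrm{uWUE_a} = \frac{\mathrm{GPP}\sqrt{\mathrm{VPD}}}{\mathrm{ET}}, \qquad
        \mathrm{uWUE_p} = \frac{\mathrm{GPP}\sqrt{\mathrm{VPD}}}{T},

    so the apparent quantity factors exactly into the two components analysed above:

        \mathrm{uWUE_a} = \frac{T}{\mathrm{ET}} \times \mathrm{uWUE_p}.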

  17. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

    We present seasonal precipitation reconstructions for European land areas (30°W to 40°E / 30°N to 71°N; given on a 0.5° x 0.5° grid) covering the period 1500-1900, together with gridded reanalysis from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. Highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models as well as analysis of connections with regional temperature reconstructions. (orig.)

  18. A High-Resolution Terrestrial Modeling System (TMS): A Demonstration in China

    Science.gov (United States)

    Duan, Q.; Dai, Y.; Zheng, X.; Ye, A.; Ji, D.; Chen, Z.

    2013-12-01

    This presentation describes a terrestrial modeling system (TMS) developed at Beijing Normal University. The TMS is designed to be driven by multi-sensor meteorological and land surface observations, including those from satellites and land-based observing stations. The purposes of the TMS are (1) to provide a land surface parameterization scheme fully capable of being coupled with Earth system models; (2) to provide a standalone platform for retrospective historical simulation and for forecasting of future land surface processes at different space and time scales; and (3) to provide a platform for studying human-Earth system interactions and for understanding climate change impacts. The system builds on capabilities developed by several groups at BNU, including the Common Land Model (CoLM) system, high-resolution atmospheric forcing data sets, high-resolution land surface characteristics data sets, data assimilation and uncertainty analysis platforms, an ensemble prediction platform, and high-performance computing facilities. The presentation describes the system design and demonstrates the capabilities of the TMS with results from a China-wide application.

  19. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location $M$-estimator based on the observations obtained after scaling with an affine equivariant

  20. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  1. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    International Nuclear Information System (INIS)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-01-01

    Owing to limited land resources, many countries, such as Japan and Germany, regard incineration as the major technology in waste management schemes capable of meeting the increasing demand for municipal and industrial solid waste treatment in urban regions. Evaluating these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA) - a production economics tool - to evaluate the performance-based efficiencies of 19 large-scale municipal incinerators operating under different conditions in Taiwan. A 4-year operational data set (2002-2005) was collected to support DEA modeling, with Monte Carlo simulation used to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis via Monte Carlo simulation balances the simplifications of our analysis against the need to capture the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and performance prediction for large-scale municipal solid waste incinerators under normal and special operating conditions were directed toward a compromise assessment procedure. Our findings should ultimately help identify optimal management strategies for improving the quality of solid waste incineration, in Taiwan and elsewhere in the world.
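
    For readers unfamiliar with DEA, the efficiency score of one decision-making unit (here, an incinerator) solves a small linear program; a sketch of the input-oriented CCR model with scipy (the data layout is an illustrative assumption, not the paper's exact specification):

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, k):
            """Input-oriented CCR efficiency of decision-making unit k.

            X : (m, n) array of m inputs for n units (e.g. cost, labor).
            Y : (s, n) array of s outputs (e.g. tonnes treated, energy sold).
            Solves: min theta  s.t.  X @ lam <= theta * X[:, k],
                                     Y @ lam >= Y[:, k],  lam >= 0.
            """
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]            # decision vars: [theta, lam]
            A_in = np.c_[-X[:, [k]], X]            # X lam - theta * x_k <= 0
            A_out = np.c_[np.zeros((s, 1)), -Y]    # -Y lam <= -y_k
            res = linprog(c, A_ub=np.r_[A_in, A_out],
                          b_ub=np.r_[np.zeros(m), -Y[:, k]],
                          bounds=[(None, None)] + [(0.0, None)] * n)
            return res.fun                         # efficiency score in (0, 1]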

  2. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and performs better than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method, a more informative and flexible tool that allows users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method performs comparably to or better than existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed a considerable number of between-module marginal connections identified by full correlation analysis, suggesting that these connections were likely caused by global effects or common connections to other nodes. Based on partial correlation, we find that the most significant
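
    The step from an estimated precision matrix to partial correlations is standard; a small numpy illustration, with a plain matrix inverse standing in for the sparse CLIME estimator used in the paper:

        import numpy as np

        def partial_correlation(ts):
            """Partial correlation matrix from node time series ts (T, p).

            The paper estimates the precision matrix with CLIME; here a plain
            inverse of the sample covariance stands in for illustration.
            """
            omega = np.linalg.inv(np.cov(ts, rowvar=False))   # precision matrix
            d = np.sqrt(np.diag(omega))
            pcorr = -omega / np.outer(d, d)    # rho_ij = -w_ij / sqrt(w_ii w_jj)
            np.fill_diagonal(pcorr, 1.0)
            return pcorr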

  3. Holography as a highly efficient renormalization group flow. I. Rephrasing gravity

    Science.gov (United States)

    Behr, Nicolas; Kuperstein, Stanislav; Mukhopadhyay, Ayan

    2016-07-01

    We investigate how the holographic correspondence can be reformulated as a generalization of Wilsonian renormalization group (RG) flow in a strongly interacting large-N quantum field theory. We first define a highly efficient RG flow as one in which the Ward identities related to local conservation of energy, momentum and charges preserve the same form at each scale. To achieve this, it is necessary to redefine the background metric and external sources at each scale as functionals of the effective single-trace operators. These redefinitions also absorb the contributions of the multitrace operators to these effective Ward identities. Thus, the background metric and external sources become effectively dynamical, reproducing the dual classical gravity equations in one higher dimension. Here, we focus on reconstructing the pure gravity sector as a highly efficient RG flow of the energy-momentum tensor operator, leaving the explicit constructive field theory approach for generating such RG flows to the second part of the work. We show that special symmetries of the highly efficient RG flows carry information through which we can decode the gauge fixing of bulk diffeomorphisms in the corresponding gravity equations. We also show that the highly efficient RG flow which reproduces a given classical gravity theory in a given gauge is unique provided the endpoint can be transformed to a nonrelativistic fixed point with a finite number of parameters under a universal rescaling. The results obtained here are used in the second part of this work, where we do an explicit field-theoretic construction of the RG flow and obtain the dual classical gravity theory.

  4. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
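
    Detrended Fluctuation Analysis itself is compact enough to sketch; a generic implementation of the scaling-exponent computation on a numeric sequence (e.g. a GC-content profile), not the authors' code:

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            """DFA scaling exponent alpha of a 1-D sequence x.

            Assumes len(x) >= max(scales). x could be, for instance, a
            windowed GC-content profile of a chromosome.
            """
            y = np.cumsum(x - np.mean(x))            # integrated profile
            fluct = []
            for s in scales:
                n_win = len(y) // s
                ms = []
                for w in range(n_win):
                    seg = y[w * s:(w + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # local detrend
                    ms.append(np.mean((seg - trend) ** 2))
                fluct.append(np.sqrt(np.mean(ms)))   # RMS fluctuation F(s)
            # slope of log F(s) versus log s is the DFA exponent
            return np.polyfit(np.log(scales), np.log(fluct), 1)[0]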

  5. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and database system is presented. Experience in operating and developing such a system shows that the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance; system problems are first identified, and then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but apply generally to large-scale models and databases

  6. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Harrington, Kathleen; Marriange, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  7. A new system of labour management in African large-scale agriculture?

    DEFF Research Database (Denmark)

    Gibbon, Peter; Riisgaard, Lone

    2014-01-01

    This paper applies a convention theory (CT) approach to the analysis of labour management systems in African large-scale farming. The reconstruction of previous analyses of high-value crop production on large-scale farms in Africa in terms of CT suggests that, since 1980–95, labour management has...

  8. Water Use Efficiency of China's Terrestrial Ecosystems and Responses to Drought

    Science.gov (United States)

    Liu, Y.; Xiao, J.; Ju, W.; Zhou, Y.; Wang, S.; Wu, X.

    2015-12-01

    Water use efficiency (WUE) measures the trade-off between carbon gain and water loss of terrestrial ecosystems, and better understanding its dynamics and controlling factors is essential for predicting ecosystem responses to climate change. We assessed the magnitude, spatial patterns, and trends of WUE of China's terrestrial ecosystems and its responses to drought using a process-based ecosystem model. During the period from 2000 to 2011, the national average annual WUE (net primary productivity (NPP)/evapotranspiration (ET)) of China was 0.79 g C kg^-1 H2O. Annual WUE decreased in the southern regions because of the decrease in NPP and increase in ET and increased in most northern regions mainly because of the increase in NPP. Droughts usually increased annual WUE in Northeast China and central Inner Mongolia but decreased annual WUE in central China. "Turning-points" were observed for southern China where moderate and extreme drought reduced annual WUE and severe drought slightly increased annual WUE. The cumulative lagged effect of drought on monthly WUE varied by region. Our findings have implications for ecosystem management and climate policy making. WUE is expected to continue to change under future climate

  9. Terrestrial planet formation.

    Science.gov (United States)

    Righter, K; O'Brien, D P

    2011-11-29

    Advances in our understanding of terrestrial planet formation have come from a multidisciplinary approach. Studies of the ages and compositions of primitive meteorites with compositions similar to the Sun have helped to constrain the nature of the building blocks of planets. This information helps to guide numerical models for the three stages of planet formation from dust to planetesimals (~10^6 y), followed by planetesimals to embryos (lunar to Mars-sized objects; a few 10^6 y), and finally embryos to planets (10^7-10^8 y). Defining the role of turbulence in the early nebula is a key to understanding the growth of solids larger than meter size. The initiation of runaway growth of embryos from planetesimals ultimately leads to the growth of large terrestrial planets via large impacts. Dynamical models can produce inner Solar System configurations that closely resemble our Solar System, especially when the orbital effects of large planets (Jupiter and Saturn) and damping mechanisms, such as gas drag, are included. Experimental studies of terrestrial planet interiors provide additional constraints on the conditions of differentiation and, therefore, origin. A more complete understanding of terrestrial planet formation might be possible via a combination of chemical and physical modeling, as well as obtaining samples and new geophysical data from other planets (Venus, Mars, or Mercury) and asteroids.

  10. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  11. The carbon balance of terrestrial ecosystems of China

    Directory of Open Access Journals (Sweden)

    Pilli R

    2009-05-01

    A comment is made on a recent letter published in Nature, in which different methodologies are applied to estimate the carbon balance of the terrestrial ecosystems of China. A carbon sink of 0.19-0.26 Pg per year is estimated for the 1980s and 1990s, and terrestrial ecosystems are estimated to have absorbed 28-37 per cent of China's carbon emissions in 2006. Most of the carbon absorption is attributed to large-scale plantations established since the 1980s and to shrub recovery. These results will certainly be valuable in the frame of the so-called "REDD" (Reducing Emissions from Deforestation and forest Degradation in developing countries) mechanism under the UN Framework Convention on Climate Change (UNFCCC).

  12. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocess in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m^3 bioreactor. Areas for further research from an industrial perspective are also indicated.

  13. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between the large scale structure formation and the baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate, formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  14. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
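
    As background, the big-leaf MOD17-type model referenced here computes GPP from a maximal light use efficiency down-regulated by temperature and VPD scalars; a schematic sketch with placeholder parameters (not the MODIS calibration):

        import numpy as np

        def gpp_big_leaf(par, fpar, tmin, vpd,
                         eps_max=1.0,                 # maximal LUE (placeholder)
                         tmin_range=(-8.0, 9.0),      # degC ramp (placeholder)
                         vpd_range=(650.0, 3100.0)):  # Pa ramp (placeholder)
            """Big-leaf LUE model: GPP = eps_max * f(Tmin) * f(VPD) * fPAR * PAR."""
            f_t = np.clip((tmin - tmin_range[0]) /
                          (tmin_range[1] - tmin_range[0]), 0.0, 1.0)
            f_v = np.clip((vpd_range[1] - vpd) /
                          (vpd_range[1] - vpd_range[0]), 0.0, 1.0)
            return eps_max * f_t * f_v * fpar * par

    The two-leaf variants (TL-LUE, TL-LUEn) refine this scheme by applying separate light use efficiencies to the sunlit and shaded fractions of the canopy.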

  15. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time must be spent on the management and maintenance of such systems. The nodes are also easily thrown into disorder; with thousands of nodes housed in large machine rooms, administrators can easily confuse machines. How, then, can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms for large-scale cluster systems and proposes a way to realize automatic management of such systems. (authors)

  16. Large temporal scale and capacity subsurface bulk energy storage with CO2

    Science.gov (United States)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  17. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
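
    One classical idea in this spirit is to trade a controlled approximation for scalability, e.g. Nystrom subsampling in kernel ridge regression; an illustrative sketch (not the speaker's algorithm):

        import numpy as np

        def nystrom_krr(X, y, m=200, lam=1e-3, gamma=1.0, seed=0):
            """Kernel ridge regression with Nystrom subsampling.

            X : (n, d) inputs, y : (n,) targets. Using only m inducing centers
            cuts cost from O(n^3) to roughly O(n m^2) with little accuracy loss.
            """
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
            C = X[idx]                                       # inducing centers
            k = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
            Knm, Kmm = k(X, C), k(C, C)
            # normal equations of the Nystrom-restricted ridge problem
            alpha = np.linalg.solve(Knm.T @ Knm + lam * len(X) * Kmm, Knm.T @ y)
            return C, alpha          # predict on Xnew with k(Xnew, C) @ alpha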

  18. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF while producing large-scale retrieval results that are comparable to SIFT. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  19. Initial Screening of Thermochemical Water-Splitting Cycles for High Efficiency Generation of Hydrogen Fuels Using Nuclear Power

    International Nuclear Information System (INIS)

    Brown, L.C.; Funk, J.F.; Showalter, S.K.

    1999-01-01

    There is currently no large-scale, cost-effective, environmentally attractive hydrogen production process, nor is such a process available for commercialization. Hydrogen is a promising energy carrier, which potentially could replace the fossil fuels used in the transportation sector of our economy. Fossil fuels are polluting and carbon dioxide emissions from their combustion are thought to be responsible for global warming. The purpose of this work is to determine the potential for efficient, cost-effective, large-scale production of hydrogen utilizing high temperature heat from an advanced nuclear power station. Almost 800 literature references were located which pertain to thermochemical production of hydrogen from water, and over 100 thermochemical water-splitting cycles were examined. Using defined criteria and quantifiable metrics, 25 cycles have been selected for more detailed study

  20. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.

  1. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply...... was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power...... integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...

  2. Human climbing with efficiently scaled gecko-inspired dry adhesives.

    Science.gov (United States)

    Hawkes, Elliot W; Eason, Eric V; Christensen, David L; Cutkosky, Mark R

    2015-01-06

    Since the discovery of the mechanism of adhesion in geckos, many synthetic dry adhesives have been developed with desirable gecko-like properties such as reusability, directionality, self-cleaning ability, rough surface adhesion and high adhesive stress. However, fully exploiting these adhesives in practical applications at different length scales requires efficient scaling (i.e. with little loss in adhesion as area grows). Just as natural gecko adhesives have been used as a benchmark for synthetic materials, so can gecko adhesion systems provide a baseline for scaling efficiency. In the tokay gecko (Gekko gecko), a scaling power law has been reported relating the maximum shear stress σ_max to the area A: σ_max ∝ A^(-1/4). We present a mechanical concept which improves upon the gecko's non-uniform load-sharing and results in a nearly even load distribution over multiple patches of gecko-inspired adhesive. We created a synthetic adhesion system incorporating this concept which shows efficient scaling across four orders of magnitude of area, yielding an improved scaling power law: σ_max ∝ A^(-1/50). Furthermore, we found that the synthetic adhesion system does not fail catastrophically when a simulated failure is induced on a portion of the adhesive. In a practical demonstration, the synthetic adhesion system enabled a 70 kg human to climb vertical glass with 140 cm^2 of adhesive per hand.

  3. Efficient Large-Scale 2D Culture System for Human Induced Pluripotent Stem Cells and Differentiated Cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Shugo Tohyama

    2017-11-01

    Cardiac regenerative therapies utilizing human induced pluripotent stem cells (hiPSCs) are hampered by ineffective large-scale culture. hiPSCs were cultured in multilayer culture plates (CPs) with active gas ventilation (AGV), resulting in stable proliferation and pluripotency. Seeding of 1 × 10^6 hiPSCs per layer yielded 7.2 × 10^8 hiPSCs in 4-layer CPs and 1.7 × 10^9 hiPSCs in 10-layer CPs with pluripotency. hiPSCs were sequentially differentiated into cardiomyocytes (CMs) in a two-dimensional (2D) differentiation protocol. The efficiency of cardiac differentiation using 10-layer CPs with AGV was 66%–87%. Approximately 6.2–7.0 × 10^8 cells (4-layer) and 1.5–2.8 × 10^9 cells (10-layer) were obtained with AGV. After metabolic purification with glucose- and glutamine-depleted and lactate-supplemented media, a massive amount of purified CMs was prepared. Here, we present a scalable 2D culture system using multilayer CPs with AGV for hiPSC-derived CMs, which will facilitate clinical applications for severe heart failure in the near future.

  4. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is based on LES, which is effective in reproducing the unsteady behavior of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary-layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. With this approach, LOHDIM-LES can provide detailed information on wind velocities and plume concentrations in the investigated area. (author)
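
    In schematic form, the filtered governing equations solved by such a model are (generic incompressible LES notation, not the exact LOHDIM-LES discretization):

        \begin{aligned}
        &\frac{\partial \bar{u}_i}{\partial x_i} = 0, \\
        &\frac{\partial \bar{u}_i}{\partial t}
          + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
          = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
          + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
          - \frac{\partial \tau_{ij}}{\partial x_j}, \\
        &\frac{\partial \bar{c}}{\partial t}
          + \bar{u}_j \frac{\partial \bar{c}}{\partial x_j}
          = \frac{\partial}{\partial x_j}\left(D \frac{\partial \bar{c}}{\partial x_j}\right)
          - \frac{\partial q_j}{\partial x_j},
        \end{aligned}

    where overbars denote filtered quantities and τ_ij and q_j are the subgrid-scale stress and scalar flux supplied by the LES closure.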

  5. Electrical efficiency and renewable energy - Economical alternatives to large-scale power generation; Stromeffizienz und erneuerbare Energien - Wirtschaftliche alternative zu Grosskraftwerken

    Energy Technology Data Exchange (ETDEWEB)

    Oettli, B.; Hammer, S.; Moret, F.; Iten, R. [Infras, Zuerich (Switzerland); Nordmann, T. [TNC Consulting AG, Erlenbach (Switzerland)

    2010-05-15

    This final report for WWF Switzerland, Greenpeace Switzerland, the Swiss Energy Foundation SES, Pro Natura and the Swiss Cantons of Basel City and Geneva examines the energy-relevant effects of the proposals made by Swiss electricity utilities for large-scale power generation. These proposals are compared with a strategy based on investments in energy efficiency and the use of renewable sources of energy. The environmental effects, risks and investments involved in both scenarios are discussed, as are the associated effects on the Swiss national economy. For the efficiency-and-renewables scenario, two implementation variants are examined: domestic investment and production, and foreign production and/or imports from abroad. The methods used in the study are introduced and discussed. Investment and cost considerations, earnings and effects on employment are also reviewed. The report is completed by an extensive appendix which, amongst other things, includes potential reviews, cost estimates and a discussion of 'smart grids'

  6. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    Science.gov (United States)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely tuned small-scale experiments, despite their high accuracy, are not only very expensive but also inefficient for identifying numerous interactomes. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for the task. Under this framework, we model the given data into an interactome weight matrix, where the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, which drives the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.
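
    A toy neighborhood-based CF scorer over a binary interactome matrix conveys the idea; plain cosine similarity stands in here for the paper's rescaled cosine coefficient:

        import numpy as np

        def cf_interaction_scores(W):
            """Neighborhood-based CF scores over a symmetric interactome matrix.

            W : (n, n) 0/1 (or weighted) matrix of known interactions; high
            scores on zero entries flag candidate interactions to verify.
            """
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            norms[norms == 0] = 1.0                  # avoid division by zero
            sim = (W @ W.T) / (norms * norms.T)      # cosine similarity of neighborhoods
            np.fill_diagonal(sim, 0.0)               # exclude self-similarity
            scores = sim @ W                         # similarity-weighted evidence
            return scores / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)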

  7. Laser-accelerated proton conversion efficiency thickness scaling

    International Nuclear Information System (INIS)

    Hey, D. S.; Foord, M. E.; Key, M. H.; LePape, S. L.; Mackinnon, A. J.; Patel, P. K.; Ping, Y.; Akli, K. U.; Stephens, R. B.; Bartal, T.; Beg, F. N.; Fedosejevs, R.; Friesen, H.; Tiedje, H. F.; Tsui, Y. Y.

    2009-01-01

    The conversion efficiency from laser energy into proton kinetic energy is measured with the 0.6 ps, 9×10^19 W/cm^2 Titan laser at the Jupiter Laser Facility as a function of target thickness in Au foils. For targets thicker than 20 μm, the conversion efficiency scales approximately as 1/L, where L is the target thickness. This is explained by the domination of hot electron collisional losses over adiabatic cooling. In thinner targets, the two effects become comparable, causing the conversion efficiency to scale weaker than 1/L; the measured conversion efficiency is constant within the scatter in the data for targets between 5 and 15 μm, with a peak conversion efficiency of 4% into protons with energy greater than 3 MeV. Depletion of the hydrocarbon contaminant layer is eliminated as an explanation for this plateau by using targets coated with 200 nm of ErH3 on the rear surface. The proton acceleration is modeled with the hybrid particle-in-cell code LSP, which reproduced the conversion efficiency scaling observed in the data.

  8. Highly efficient periodically poled KTP-isomorphs with large apertures and extreme domain aspect-ratios

    Science.gov (United States)

    Canalias, Carlota; Zukauskas, Andrius; Tjörnhamman, Staffan; Viotti, Anne-Lise; Pasiskevicius, Valdas; Laurell, Fredrik

    2018-02-01

    Since the early 1990s, a substantial effort has been devoted to the development of quasi-phase-matched (QPM) nonlinear devices, not only in ferroelectric oxides like LiNbO3, LiTaO3 and KTiOPO4 (KTP), but also in semiconductors such as GaAs and GaP. The technology to implement QPM structures in ferroelectric oxides has by now matured enough to satisfy the most basic frequency-conversion schemes without substantial modification of the poling procedures. Here, we present a qualitative leap in periodic poling techniques that allows us to demonstrate devices and frequency conversion schemes that were deemed unfeasible just a few years ago. Thanks to our short-pulse poling and coercive-field engineering techniques, we are able to demonstrate large-aperture (5 mm) periodically poled Rb-doped KTP devices with a highly uniform conversion efficiency over the whole aperture. These devices allow parametric conversion with energies larger than 60 mJ. Moreover, by employing our coercive-field engineering technique we fabricate highly efficient sub-µm periodically poled devices, with periodicities as short as 500 nm, uniform over 1 mm-thick crystals, which allow us to realize mirrorless optical parametric oscillators with counter-propagating signal and idler waves. These novel devices present unique spectral and tuning properties, superior to those of conventional OPOs. Furthermore, our techniques are compatible with KTA, a KTP isomorph with extended transparency in the mid-IR range. We demonstrate that our highly efficient PPKTA is superior both for mid-IR and for green light generation - as a result of improved transmission properties in the visible range. Our KTP-isomorph poling techniques leading to highly efficient QPM devices will be presented. Their optical performance and attractive damage thresholds will be discussed.

  9. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    Science.gov (United States)

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve the complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real-time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real-time. The semantics expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance the availability and universality, where the medical reports are displayed via a friendly web interface. In conclusion, compared with the current medical record retrieval systems, our system provides some advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios.

  10. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
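
    The efficiency claim rests largely on the adjoint method: for a model whose state u is defined implicitly by residuals R(u, x) = 0, the gradient of an objective F(u, x) costs one additional linear solve regardless of the number of design variables x (generic notation, not the thesis's unified formulation):

        \left(\frac{\partial R}{\partial u}\right)^{T} \psi
          = \left(\frac{\partial F}{\partial u}\right)^{T},
        \qquad
        \frac{\mathrm{d}F}{\mathrm{d}x}
          = \frac{\partial F}{\partial x} - \psi^{T} \frac{\partial R}{\partial x}.

    This is why tens of thousands of design variables remain tractable: the cost of the adjoint solve for ψ is independent of the dimension of x.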

  11. Combined microfluidization and ultrasonication: a synergistic protocol for high-efficient processing of SWCNT dispersions with high quality

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Sida, E-mail: s.luo@buaa.edu.cn [Beihang University, School of Mechanical Engineering and Automation (China); Liu, Tao, E-mail: tliu@fsu.edu [Florida State University, High-Performance Materials Institute (United States); Wang, Yong; Li, Liuhe [Beihang University, School of Mechanical Engineering and Automation (China); Wang, Guantao; Luo, Yun [China University of Geosciences, Center of Safety Research, School of Engineering and Technology (China)

    2016-08-15

    Highly efficient and large-scale production of high-quality CNT dispersions is necessary for meeting the future needs to develop various CNT-based electronic devices. Herein, we have designed novel processing protocols by combining a conventional ultrasonication process with a new microfluidization technique to produce high-quality SWCNT dispersions with improved processing efficiency. To judge the quality of SWCNT dispersions, one critical factor is the degree of exfoliation, which could be quantified by both the geometrical dimension of the exfoliated nanotubes and the percentage of individual tubes in a given dispersion. In this paper, the synergistic effect of the combined protocols was systematically investigated through evaluating SWCNT dispersions with newly developed characterization techniques, namely the preparative ultracentrifuge method (PUM) and simultaneous Raman scattering and photoluminescence spectroscopy (SRSPL). The results of both techniques draw similar conclusions: as compared with either of the processes operated separately, a low-pass microfluidization followed by a reasonable duration of ultrasonication could substantially improve the processing efficiency to produce high-quality SWCNT dispersions with averaged particle length and diameter as small as ~600 and ~2 nm, respectively.

  12. Reconciling apparent inconsistencies in estimates of terrestrial CO2 sources and sinks

    International Nuclear Information System (INIS)

    House, J.I.; Prentice, I.C.; Heimann, M.; Ramankutty, N.

    2003-01-01

    The magnitude and location of terrestrial carbon sources and sinks remains subject to large uncertainties. Estimates of terrestrial CO2 fluxes from ground-based inventory measurements typically find less carbon uptake than inverse model calculations based on atmospheric CO2 measurements, while a wide range of results have been obtained using models of different types. However, when full account is taken of the processes, pools, time scales and geographic areas being measured, the different approaches can be understood as complementary rather than inconsistent, and can provide insight as to the contribution of various processes to the terrestrial carbon budget. For example, quantitative differences between atmospheric inversion model estimates and forest inventory estimates in northern extratropical regions suggest that carbon fluxes to soils (often not accounted for in inventories), and into non-forest vegetation, may account for about half of the terrestrial uptake. A consensus of inventory and inverse methods indicates that, in the 1980s, northern extratropical land regions were a large net sink of carbon, and the tropics were approximately neutral (albeit with high uncertainty around the central estimate of zero net flux). The terrestrial flux in southern extratropical regions was small. Book-keeping model studies of the impacts of land-use change indicated a large source in the tropics and almost zero net flux for most northern extratropical regions; similar land use change impacts were also recently obtained using process-based models. The difference between book-keeping land-use change model studies and inversions or inventories was previously interpreted as a 'missing' terrestrial carbon uptake. Land-use change studies do not account for environmental or many management effects (which are implicitly included in inventory and inversion methods). Process-based model studies have quantified the impacts of CO2 fertilisation and climate change in addition to

  13. Scale-dependent performances of CMIP5 earth system models in simulating terrestrial vegetation carbon

    Science.gov (United States)

    Jiang, L.; Luo, Y.; Yan, Y.; Hararuk, O.

    2013-12-01

    Mitigation of global changes will depend on reliable projections of future conditions. As the major tools for predicting future climate, the Earth System Models (ESMs) used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) for the IPCC Fifth Assessment Report incorporate carbon cycle components, which account for the important fluxes of carbon between the ocean, atmosphere, and terrestrial biosphere carbon reservoirs, and are therefore expected to provide more detailed and more certain projections. However, ESMs are never perfect, and evaluating them can help identify uncertainties in prediction and set priorities for model development. In this study, we benchmarked the carbon in live vegetation in the terrestrial ecosystems simulated by 19 ESMs from CMIP5 against an observationally estimated data set of the global vegetation carbon pool, 'Olson's Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation: An Updated Database Using the GLC2000 Land Cover Product' by Gibbs (2006). Our aim is to evaluate the ability of ESMs to reproduce the global vegetation carbon pool at different scales and to identify possible causes of the biases. We found that the performance of the CMIP5 ESMs is very scale-dependent. CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM and CESM1-WACCM, and NorESM1-M and NorESM1-ME (which share the same model structure) match the observed global sum closely but usually perform poorly at the grid-cell and biome scales. In contrast, MIROC-ESM and MIROC-ESM-CHEM perform best at the grid-cell and biome scales but show larger differences in the global sum than the others. Our results will help improve CMIP5 ESMs for more reliable prediction.

  14. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  15. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
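
    A minimal sketch of the split-then-optimize idea follows. It is not the authors' implementation: the ODE-based optimization is reduced to a linear least-squares fit per subnetwork, the module partition is given rather than derived from sparsity, and the data are synthetic.

      import numpy as np
      from multiprocessing import Pool

      def fit_module(args):
          """Least-squares estimate of one subnetwork's interaction matrix M, dx/dt = M x."""
          t, X = args                          # X: (timepoints, genes in module)
          dXdt = np.gradient(X, t, axis=0)     # finite-difference derivatives
          A, *_ = np.linalg.lstsq(X, dXdt, rcond=None)
          return A.T                           # row-wise X A = dXdt  =>  M = A.T

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          t = np.linspace(0.0, 10.0, 50)
          # Three independent modules of 4 genes each (synthetic expression data).
          modules = [(t, rng.standard_normal((50, 4)).cumsum(axis=0)) for _ in range(3)]
          with Pool() as pool:                 # subnetworks optimized in parallel
              estimates = pool.map(fit_module, modules)
          print([m.shape for m in estimates])  # one 4x4 interaction matrix per module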

  16. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and the selection and crossover operators define the reproductive ability of an artificial fish. The random behavior, which is essentially a mutation strategy, uses the basic cloud generator as its mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
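
    The three behaviors can be sketched as a generic continuous optimizer, as below. The operator details (probe step for foraging, elite mating for reproduction, Gaussian mutation standing in for the cloud generator) are illustrative, not the paper's exact operators.

      import numpy as np

      def afsa(f, dim=10, n_fish=30, visual=0.5, iters=200, seed=2):
          rng = np.random.default_rng(seed)
          X = rng.uniform(-5.0, 5.0, (n_fish, dim))
          fit = np.apply_along_axis(f, 1, X)
          for _ in range(iters):
              order = np.argsort(fit)                  # best fish first
              for i in range(n_fish):
                  r = rng.random()
                  if r < 0.5:                          # foraging: probe within visual range
                      trial = X[i] + rng.uniform(-visual, visual, dim)
                      ft = f(trial)
                      if ft < fit[i]:
                          X[i], fit[i] = trial, ft
                  elif r < 0.8:                        # reproduction: crossover with an elite
                      mate = X[order[rng.integers(0, max(1, n_fish // 5))]]
                      mask = rng.random(dim) < 0.5
                      X[i] = np.where(mask, X[i], mate)
                      fit[i] = f(X[i])
                  else:                                # random behavior: small mutation
                      X[i] += rng.normal(0.0, 0.1, dim)
                      fit[i] = f(X[i])
          best = int(np.argmin(fit))
          return X[best], fit[best]

      sol, val = afsa(lambda x: float(np.sum(x ** 2)))  # sphere test function
      print(f"best value: {val:.4f}")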

  17. Human climbing with efficiently scaled gecko-inspired dry adhesives

    OpenAIRE

    Hawkes, Elliot W.; Eason, Eric V.; Christensen, David L.; Cutkosky, Mark R.

    2015-01-01

    Since the discovery of the mechanism of adhesion in geckos, many synthetic dry adhesives have been developed with desirable gecko-like properties such as reusability, directionality, self-cleaning ability, rough surface adhesion and high adhesive stress. However, fully exploiting these adhesives in practical applications at different length scales requires efficient scaling (i.e. with little loss in adhesion as area grows). Just as natural gecko adhesives have been used as a benchmark for syn...

  18. Development of large area, high efficiency amorphous silicon solar cell

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, K.S.; Kim, S.; Kim, D.W. [Yu Kong Taedok Institute of Technology (Korea, Republic of)

    1996-02-01

    The objective of the research is to develop mass-production technologies for high-efficiency amorphous silicon solar cells, in order to reduce their costs and promote their dissemination. The amorphous silicon solar cell is the most promising thin-film solar cell option, since thin-film cells are relatively easy to produce at reduced cost. The final goal of the research is to develop amorphous silicon solar cells with an efficiency of 10% and a light-induced degradation ratio of 15% over an area of 1200 cm{sup 2}, and to test the cells in the form of a 2 kW grid-connected photovoltaic system. (author) 35 refs., 8 tabs., 67 figs.

  19. High Efficiency Colloidal Quantum Dot Phosphors

    Energy Technology Data Exchange (ETDEWEB)

    Kahen, Keith

    2013-12-31

    The project showed that non-Cd containing, InP-based nanocrystals (semiconductor materials with dimensions of ~6 nm) have high potential for enabling next-generation, nanocrystal-based, on-chip phosphors for solid state lighting. Typical nanocrystals fall short of the requirements for on-chip phosphors due to their loss of quantum efficiency under the operating conditions of LEDs, such as high temperature (up to 150 °C) and high optical flux (up to 200 W/cm2). The InP-based nanocrystals invented during this project maintain high quantum efficiency (>80%) in polymer-based films under these operating conditions for emission wavelengths ranging from ~530 to 620 nm. These nanocrystals also show other desirable attributes, such as a lack of blinking (a common problem with nanocrystals which limits their performance) and no increase in the emission spectral width from room temperature to 150 °C (emitters with narrower spectral widths enable higher-efficiency LEDs). Prior to these nanocrystals, no nanocrystal system (regardless of nanocrystal type) showed this collection of properties; in fact, other nanocrystal systems are typically limited to showing only one desirable trait (such as high-temperature stability) while being deficient in other properties (such as high-flux stability). The project showed that one can reproducibly obtain these properties by generating a novel compositional structure inside the nanomaterials; in addition, the project formulated an initial theoretical framework linking the compositional structure to the list of high-performance optical properties. Over the course of the project, the synthetic methodology for producing the novel composition was evolved to enable the synthesis of these nanomaterials at a cost approximately equal to that required for forming typical conventional nanocrystals. Given the above results, the last major remaining step prior to scale-up of the nanomaterials is to limit the oxidation of these materials during the tens of

  20. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    Science.gov (United States)

    Winands, G. J. J.; Liu, Z.; Pemen, A. J. M.; van Heesch, E. J. M.; Yan, K.; van Veldhuizen, E. M.

    2006-07-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh)-1. For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh)-1, still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second.
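
    The reported quantities are linked by simple arithmetic: the chemical yield in g (kWh)-1 multiplied by the electrical input power gives the implied ozone production rate. A worked example using the low-energy operating point from the abstract:

      # Values taken from the abstract; the 400 Hz rate is the maximum studied.
      pulse_energy_J = 0.4
      rep_rate_hz = 400
      yield_g_per_kWh = 60

      power_kW = pulse_energy_J * rep_rate_hz / 1000.0   # 0.16 kW of pulse power
      ozone_g_per_h = yield_g_per_kWh * power_kW         # implied production rate
      print(f"input power: {power_kW:.2f} kW -> ozone: {ozone_g_per_h:.1f} g/h")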

  1. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    Energy Technology Data Exchange (ETDEWEB)

    Winands, G J J [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Liu, Z [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Pemen, A J M [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Heesch, E J M van [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Yan, K [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Veldhuizen, E M van [EPG Group, Department of Applied Physics, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands)

    2006-07-21

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh){sup -1}. For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh){sup -1}, still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second.

  2. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    International Nuclear Information System (INIS)

    Winands, G J J; Liu, Z; Pemen, A J M; Heesch, E J M van; Yan, K; Veldhuizen, E M van

    2006-01-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh)-1. For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh)-1, still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second.

  3. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  4. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, high-spectral), while the accuracy is comparable with that of the FNEA method.
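
    A serial toy version of the MST-based initial segmentation is sketched below; the published method additionally applies the MHR merging rule and the MPI-parallel "reverse searching-forward processing" chain, both omitted here.

      import numpy as np

      def mst_segment(img, thresh=10.0):
          """Kruskal-style merging over a 4-connected pixel graph."""
          h, w = img.shape
          parent = list(range(h * w))

          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]    # path halving
                  a = parent[a]
              return a

          edges = []
          for y in range(h):
              for x in range(w):
                  if x + 1 < w:
                      edges.append((abs(float(img[y, x]) - img[y, x + 1]),
                                    y * w + x, y * w + x + 1))
                  if y + 1 < h:
                      edges.append((abs(float(img[y, x]) - img[y + 1, x]),
                                    y * w + x, (y + 1) * w + x))
          for wgt, a, b in sorted(edges):          # cheapest edges first
              ra, rb = find(a), find(b)
              if ra != rb and wgt < thresh:        # merge only across weak boundaries
                  parent[ra] = rb
          return np.array([find(i) for i in range(h * w)]).reshape(h, w)

      img = np.zeros((8, 8)); img[:, 4:] = 100.0   # two flat regions, one strong edge
      print(np.unique(mst_segment(img)))           # two region labels survive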

  5. Large-scale HTS bulks for magnetic application

    Science.gov (United States)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of YBCO melt-textured, multi-seeded large bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500-3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  6. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province. The average size of a system reached 95 ha. In 1989 there were 98 systems, covering an area of more than 10,130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 90s and the ownership changes in agriculture, the large-scale sprinklers underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers only to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs; there was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered barriers and limitations of all kinds: system-design constraints, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A survey of the local area was carried out to show the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  7. A Large Aperture, High Energy Laser System for Optics and Optical Component Testing

    International Nuclear Information System (INIS)

    Nostrand, M.C.; Weiland, T.L.; Luthi, R.L.; Vickers, J.L.; Sell, W.D.; Stanley, J.A.; Honig, J.; Auerbach, J.; Hackel, R.P.; Wegner, P.J.

    2003-01-01

    A large aperture, kJ-class, multi-wavelength Nd-glass laser system has been constructed at Lawrence Livermore National Lab which has unique capabilities for studying a wide variety of optical phenomena. The master-oscillator, power-amplifier (MOPA) configuration of this 'Optical Sciences Laser' (OSL) produces 1053 nm radiation with shaped pulse lengths which are variable from 0.1-100 ns. The output can be frequency doubled or tripled with high conversion efficiency, with a resultant 100 cm2 high-quality output beam. This facility can accommodate prototype hardware for large-scale inertial confinement fusion lasers, allowing for investigation of integrated system issues such as optical lifetime at high fluence, optics contamination, compatibility of non-optical materials, and laser diagnostics.

  8. Factors influencing aquatic-to-terrestrial contaminant transport to terrestrial arthropod consumers in a multiuse river system.

    Science.gov (United States)

    Alberts, Jeremy M; Sullivan, S Mažeika P

    2016-06-01

    Emerging aquatic insects are important vectors of contaminant transfer from aquatic to terrestrial food webs. However, the environmental factors that regulate contaminant body burdens in nearshore terrestrial consumers remain largely unexplored. We investigated the relative influences of riparian landscape composition (i.e., land use and nearshore vegetation structure) and contaminant flux via the emergent aquatic insect subsidy on selenium (Se) and mercury (Hg) body burdens of riparian ants (Formica subsericea) and spiders of the family Tetragnathidae along 11 river reaches spanning an urban-rural land-use gradient in Ohio, USA. Model-selection results indicated that fine-scale land cover (e.g., riparian zone width, shrub cover) in the riparian zone was positively associated with reach-wide body burdens of Se and Hg in both riparian F. subsericea and tetragnathid spiders (i.e., total magnitude of Hg and Se concentrations in ant and spider populations, respectively, for each reach). River distance downstream of Columbus, Ohio - where study reaches were impounded and flow through a large urban center - was also implicated as an important factor. Although stable-isotope analysis suggested that emergent aquatic insects were likely vectors of Se and Hg to tetragnathid spiders (but not to F. subsericea), emergent insect contaminant flux did not emerge as a significant predictor for either reach-wide body burdens of spider Hg or Se. Improved understanding of the pathways and influences that control aquatic-to-terrestrial contaminant transport will be critical for effective risk management and remediation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  10. Microalgal and Terrestrial Transport Biofuels to Displace Fossil Fuels

    Directory of Open Access Journals (Sweden)

    Lucas Reijnders

    2009-02-01

    Full Text Available Terrestrial transport biofuels differ in their ability to replace fossil fuels. When both the conversion of solar energy into biomass and the life-cycle inputs of fossil fuels are considered, ethanol from sugarcane and biodiesel from palm oil do relatively well compared with ethanol from corn, sugar beet or wheat and biodiesel from rapeseed. When terrestrial biofuels are to replace mineral oil-derived transport fuels, large areas of good agricultural land are needed: about 5×10^8 ha in the case of biofuels from sugarcane or oil palm, and at least 1.8-3.6×10^9 ha in the case of ethanol from wheat, corn or sugar beet, as produced in industrialized countries. Biofuels from microalgae which are commercially produced with current technologies do not appear to outperform terrestrial plants such as sugarcane in their ability to displace fossil fuels. Whether they will be able to do so on a commercial scale in the future is uncertain.

  11. Response of Water Use Efficiency to Global Environmental Change Based on Output From Terrestrial Biosphere Models

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Sha [Tsinghua Univ., Beijing (China); Yu, Bofu [Griffith Univ., Nathan Queensland (Australia); Schwalm, Christopher R. [Woods Hole Research Center, Falmouth, MA (United States); Northern Arizona Univ., Flagstaff, AZ (United States); Ciais, Philippe [Lab. des Sciences du Climat et de l' Environnement, Gif-sur-Yvette (France); Zhang, Yao [Univ. of Oklahoma, Norman, OK (United States); Fisher, Joshua B. [California Institute of Technology, Pasadena, CA (United States); Michalak, Anna M. [Carnegie Institution for Science, Stanford, CA (United States); Wang, Weile [California State Uni., Monterey Bay, Seasid, CA (United States); Poulter, Benjamin [Montana State Univ., Bozeman, MT (United States); Huntzinger, Deborah N. [Northern Arizona Univ., Flagstaff, AZ (United States); Niu, Shuli [Institute of Geographic Sciences and Natural Resources Research, Beijing (China); Chinese Academy of Sciences (CAS), Beijing (China); Mao, Jiafu [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jain, Atul [Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States); Ricciuto, Daniel M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shi, Xiaoying [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ito, Akihiko [Tohoku Univ., Sendai (Japan); Wei, Yaxing [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Huang, Yuefei [Tsinghua Univ., Beijing (China); Qinghai Univ., Xining (China); Wang, Guangqian [Tsinghua Univ., Beijing (China)

    2017-10-18

    Here, water use efficiency (WUE), defined as the ratio of gross primary productivity and evapotranspiration at the ecosystem scale, is a critical variable linking the carbon and water cycles. Incorporating a dependency on vapor pressure deficit, apparent underlying WUE (uWUE) provides a better indicator of how terrestrial ecosystems respond to environmental changes than other WUE formulations. Here we used 20th century simulations from four terrestrial biosphere models to develop a novel variance decomposition method. With this method, we attributed variations in apparent uWUE to both the trend and interannual variation of environmental drivers. The secular increase in atmospheric CO2 explained a clear majority of total variation (66 ± 32%: mean ± one standard deviation), followed by positive trends in nitrogen deposition and climate, as well as a negative trend in land use change. In contrast, interannual variation was mostly driven by interannual climate variability. To analyze the mechanism of the CO2 effect, we partitioned the apparent uWUE into the transpiration ratio (transpiration over evapotranspiration) and potential uWUE. The relative increase in potential uWUE parallels that of CO2, but this direct CO2 effect was offset by 20 ± 4% by changes in ecosystem structure, that is, leaf area index for different vegetation types. However, the decrease in transpiration due to stomatal closure with rising CO2 was reduced by 84% by an increase in leaf area index, resulting in small changes in the transpiration ratio. CO2 concentration thus plays a dominant role in driving apparent uWUE variations over time, but its role differs for the two constituent components: potential uWUE and transpiration.
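
    In symbols, with GPP gross primary productivity, ET evapotranspiration, T transpiration and VPD vapor pressure deficit, the quantities above can be written as follows; the square-root VPD form follows the common uWUE formulation and is an assumption here rather than a quotation from this record:

      % ecosystem WUE, apparent uWUE, and its decomposition into
      % potential uWUE times the transpiration ratio
      \[
      \mathrm{WUE} = \frac{\mathrm{GPP}}{\mathrm{ET}}, \qquad
      \mathrm{uWUE}_{\mathrm{app}} = \frac{\mathrm{GPP}\,\sqrt{\mathrm{VPD}}}{\mathrm{ET}}
        = \underbrace{\frac{\mathrm{GPP}\,\sqrt{\mathrm{VPD}}}{T}}_{\text{potential uWUE}}
          \times \underbrace{\frac{T}{\mathrm{ET}}}_{\text{transpiration ratio}}
      \]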

  12. High-efficiency silicon solar cells for low-illumination applications

    OpenAIRE

    Glunz, S.W.; Dicker, J.; Esterle, M.; Hermle, M.; Isenberg, J.; Kamerewerd, F.; Knobloch, J.; Kray, D.; Leimenstoll, A.; Lutz, F.; Oßwald, D.; Preu, R.; Rein, S.; Schäffer, E.; Schetter, C.

    2002-01-01

    At Fraunhofer ISE the fabrication of high-efficiency solar cells was extended from a laboratory scale to a small pilot-line production. Primarily, the fabricated cells are used in small high-efficiency modules integrated in prototypes of solar-powered portable electronic devices such as cellular phones, handheld computers etc. Compared to other applications of high-efficiency cells such as solar cars and planes, the illumination densities found in these mainly indoor applications are signific...

  13. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    Science.gov (United States)

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

    Constrained by insufficient land resources, many countries, such as Japan and Germany, regard incineration as the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA)--a production economics tool--to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation provides a balance between simplification of the analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
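
    For readers unfamiliar with DEA, a minimal input-oriented CCR efficiency computation is sketched below with a generic LP solver. The toy data are not the incinerator data, and the study's Monte Carlo layer over uncertain inputs and outputs is omitted.

      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, k):
          """Efficiency of unit k. X: (units, inputs), Y: (units, outputs)."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                     # minimize theta
          A_in = np.c_[-X[k][:, None], X.T]               # sum_j l_j x_ij <= theta x_ik
          A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum_j l_j y_rj >= y_rk
          res = linprog(c,
                        A_ub=np.r_[A_in, A_out],
                        b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                        bounds=[(0, None)] * (n + 1))
          return res.x[0]

      X = np.array([[20.0, 300], [30, 200], [40, 100], [20, 200]])  # toy inputs
      Y = np.array([[1000.0], [800], [900], [500]])                 # toy outputs
      print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])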

  14. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
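
    To make the CLIME-as-LP connection concrete, the sketch below solves a single CLIME column at one fixed regularization level with a generic solver; fastclime itself instead traces the entire regularization path with the parametric simplex method in C.

      import numpy as np
      from scipy.optimize import linprog

      def clime_column(S, i, lam):
          """min ||b||_1  s.t.  ||S b - e_i||_inf <= lam, via b = u - v with u, v >= 0."""
          p = S.shape[0]
          e = np.zeros(p); e[i] = 1.0
          c = np.ones(2 * p)                        # sum(u) + sum(v) = ||b||_1
          A = np.r_[np.c_[S, -S], np.c_[-S, S]]     # both signs of S b - e
          b = np.r_[lam + e, lam - e]
          res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
          u, v = res.x[:p], res.x[p:]
          return u - v

      S = np.cov(np.random.default_rng(3).standard_normal((200, 5)), rowvar=False)
      print(np.round(clime_column(S, 0, lam=0.1), 3))   # sparse column estimate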

  15. High Intensity Laser Power Beaming Architecture for Space and Terrestrial Missions

    Science.gov (United States)

    Nayfeh, Taysir; Fast, Brian; Raible, Daniel; Dinca, Dragos; Tollis, Nick; Jalics, Andrew

    2011-01-01

    High Intensity Laser Power Beaming (HILPB) has been developed as a technique to achieve Wireless Power Transmission (WPT) for both space and terrestrial applications. In this paper, the system architecture and hardware results for a terrestrial application of HILPB are presented. These results demonstrate continuous conversion of high-intensity optical energy at near-IR wavelengths directly to electrical energy, at output power levels as high as 6.24 W from a single-cell receiver with a 0.8 cm2 aperture. These results are scalable, and may be realized by implementing receiver arraying and utilizing higher-power source lasers. This type of system would enable long-range optical refueling of electric platforms, such as MUAVs, airships, and robotic exploration missions, and would provide power to spacecraft platforms which may utilize it to drive electric means of propulsion.

  16. Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.

    Science.gov (United States)

    Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen

    2015-09-01

    With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal so as to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
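
    The retrieval mechanics such methods share can be sketched in a few lines. The projection below is random (LSH-style) and merely stands in for NDH's learned discriminant projection, so only the encode-then-Hamming-rank pipeline is faithful to the abstract.

      import numpy as np

      rng = np.random.default_rng(4)
      data = rng.standard_normal((10_000, 64))     # database features
      W = rng.standard_normal((64, 32))            # stand-in for a learned projection

      def encode(X, W):
          return (X @ W > 0).astype(np.uint8)      # 32-bit binary codes

      codes = encode(data, W)
      qcode = encode(rng.standard_normal((1, 64)), W)[0]

      hamming = (codes != qcode).sum(axis=1)       # cheap bitwise-style distance
      top10 = np.argsort(hamming)[:10]             # approximate nearest neighbors
      print(top10, hamming[top10])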

  17. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable conveyor operation. The dynamic research on and applications of large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  18. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution ...... the present state of this technology, it appears well suited to large-scale maritime archaeological mapping....

  19. Deterministic patterned growth of high-mobility large-crystal graphene: a path towards wafer scale integration

    Science.gov (United States)

    Miseikis, Vaidotas; Bianco, Federica; David, Jérémy; Gemmi, Mauro; Pellegrini, Vittorio; Romagnoli, Marco; Coletti, Camilla

    2017-06-01

    We demonstrate rapid deterministic (seeded) growth of large single-crystals of graphene by chemical vapour deposition (CVD) utilising pre-patterned copper substrates with chromium nucleation sites. Arrays of graphene single-crystals as large as several hundred microns are grown with a periodicity of up to 1 mm. The graphene is transferred to target substrates using aligned and contamination-free semi-dry transfer. The high quality of the synthesised graphene is confirmed by Raman spectroscopy and transport measurements, demonstrating room-temperature carrier mobility of 21 000 cm2 V-1 s-1 when transferred on top of hexagonal boron nitride. By tailoring the nucleation of large single-crystals according to the desired device geometry, it will be possible to produce complex device architectures based on single-crystal graphene, thus paving the way to the adoption of CVD graphene in wafer-scale fabrication.

  20. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
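
    The event-driven trick mentioned above can be illustrated with a single synaptic trace (the full BCPNN state, which couples several traces per synapse, is considerably richer): instead of decaying the trace at every time-step, store its value and last-update time and decay analytically only when a spike arrives.

      import math

      class EventTrace:
          def __init__(self, tau=20.0):
              self.tau, self.value, self.t_last = tau, 0.0, 0.0

          def on_spike(self, t, increment=1.0):
              # Closed-form exponential decay over the silent interval.
              self.value *= math.exp(-(t - self.t_last) / self.tau)
              self.value += increment
              self.t_last = t
              return self.value

      trace = EventTrace()
      for t in [5.0, 12.0, 40.0]:                  # sparse spike times (ms)
          print(f"t={t:5.1f} ms -> trace {trace.on_spike(t):.3f}")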

  1. Empirical Analysis of High Efficient Remote Cloud Data Center Backup Using HBase and Cassandra

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2015-01-01

    Full Text Available HBase, a master-slave framework, and Cassandra, a peer-to-peer (P2P) framework, are the two most commonly used large-scale distributed NoSQL databases, especially applicable to cloud computing with high flexibility and scalability and ease of big data processing. Regarding storage structure, each framework adopts a distinct backup strategy to reduce the risk of data loss. This paper aims to realize highly efficient remote cloud data center backup using HBase and Cassandra; to verify the efficiency of the backup, Thrift Java was applied in the cloud data center for a stress test performing strict data read/write operations and remote database backup over large amounts of data. Finally, as an effectiveness-cost evaluation of the remote data center backup, the cost-performance ratio was evaluated for several benchmark databases and the proposed ones. As a result, the proposed HBase approach outperforms the other databases.

  2. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
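
    A minimal Bloom filter, the primitive underlying the indexing framework, is sketched below; the paper's contribution of aggregating per-frame features into segment-level filters goes well beyond this.

      import hashlib

      class BloomFilter:
          def __init__(self, m=1 << 16, k=4):
              self.m, self.k, self.bits = m, k, bytearray(m // 8)

          def _positions(self, item):
              for i in range(self.k):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.m

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item):
              return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

      bf = BloomFilter()
      bf.add("frame17:visualword42")          # hypothetical frame feature
      print("frame17:visualword42" in bf)     # True
      print("frame99:visualword7" in bf)      # False (with high probability)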

  3. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  4. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large-scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie in a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually. This could be improved by specifically testing the high hydraulic conductivity regions. The Piezomac and single-hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  5. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  6. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    Science.gov (United States)

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing the Amazon Web Services (AWS). Our system won the first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance conference (GCTA) committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluated the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it took only 18.4 min to finish the analysis, and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, consisting of a 5× whole-genome dataset with 500 samples; on average, GT-WGS finished one 5× WGS analysis task in 2.4 min at a cost of $3.60. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool that addresses this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  7. KINETIC ALFVÉN WAVE GENERATION BY LARGE-SCALE PHASE MIXING

    International Nuclear Information System (INIS)

    Vásconez, C. L.; Pucci, F.; Valentini, F.; Servidio, S.; Malara, F.; Matthaeus, W. H.

    2015-01-01

    One view of the solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length d_p may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to d_p and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov–Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of polarization of magnetic fluctuations and wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton-distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton-distribution function highly departs from thermal equilibrium are located inside the shear layers, where the KAWs are excited, this suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.

  8. KINETIC ALFVÉN WAVE GENERATION BY LARGE-SCALE PHASE MIXING

    Energy Technology Data Exchange (ETDEWEB)

    Vásconez, C. L.; Pucci, F.; Valentini, F.; Servidio, S.; Malara, F. [Dipartimento di Fisica, Università della Calabria, I-87036, Rende (CS) (Italy); Matthaeus, W. H. [Department of Physics and Astronomy, University of Delaware, DE 19716 (United States)

    2015-12-10

    One view of the solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length d{sub p} may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to d{sub p} and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov–Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of polarization of magnetic fluctuations and wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton-distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton-distribution function highly departs from thermal equilibrium are located inside the shear layers, where the KAWs are excited, this suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.

  9. Evaluating high risks in large-scale projects using an extended VIKOR method under a fuzzy environment

    Directory of Open Access Journals (Sweden)

    S. Ebrahimnejad

    2012-04-01

    Full Text Available The complexity of large-scale projects has led to numerous risks in their life cycle. This paper presents a new risk evaluation approach to rank the high risks in large-scale projects and improve the performance of these projects. It is based on fuzzy set theory, which is an effective tool for handling uncertainty, and on an extended VIKOR method, one of the well-known multiple criteria decision-making (MCDM) methods. The proposed decision-making approach integrates knowledge and experience acquired from professional experts, since they perform the risk identification as well as the subjective judgments of the performance rating for high risks in terms of conflicting criteria, including probability, impact, quickness of reaction toward risk, event measure quantity and event capability criteria. The most notable difference of the proposed VIKOR method from its traditional version is the use of fuzzy decision-matrix data to calculate the ranking index without the need to ask the experts. Finally, the proposed approach is illustrated with a real-case study of an Iranian power plant project, and the associated results are compared with two well-known decision-making methods under a fuzzy environment.

  10. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package 'ATLAS' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)
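
    The package's three classes of operations map directly onto modern dense equivalents, sketched below with NumPy; the 1978 routines were of course aimed at the large matrices arising in plasma-fluid simulation rather than toy dense examples.

      import numpy as np

      rng = np.random.default_rng(5)
      A = rng.standard_normal((100, 100))
      b = rng.standard_normal(100)

      C = A @ A.T                        # basic arithmetic (matrix product)
      x = np.linalg.solve(A, b)          # linear simultaneous equations
      w, V = np.linalg.eig(A)            # eigenvalue problem (standard form shown;
                                         # the generalized form is A x = lambda B x)
      print(np.allclose(A @ x, b))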

  11. Large-scale circulation departures related to wet episodes in northeast Brazil

    Science.gov (United States)

    Sikdar, D. N.; Elsner, J. B.

    1985-01-01

    Large-scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged, and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large-scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high SLPs show large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.

  12. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  13. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  14. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  15. Application of soft x-ray laser interferometry to study large-scale-length, high-density plasmas

    International Nuclear Information System (INIS)

    Wan, A.S.; Barbee, T.W., Jr.; Cauble, R.

    1996-01-01

    We have employed a Mach-Zehnder interferometer, using a Ne-like Y x-ray laser at 155 Angstrom as the probe source, to study large-scale-length, high-density colliding plasmas and exploding foils. The measured density profile of counter-streaming high-density colliding plasmas falls in between the profiles calculated with the radiation hydrodynamic code LASNEX using collisionless and fluid approximations. We have also performed simultaneous measurements of the local gain and electron density of the Y x-ray laser amplifier. Measured gains in the amplifier were found to be between 10 and 20 cm-1, similar to predictions and indicating that refraction is the major cause of signal loss in long line-focus lasers. Images showed that high gain was produced in spots with dimensions of ∼10 μm, which we believe is caused by intensity variations in the optical drive laser. Measured density variations were smooth on the 10-μm scale, so temperature variations were likely the cause of the localized gain regions. We are now using the interferometry technique as a mechanism to validate and benchmark our numerical codes used for the design and analysis of high-energy-density physics experiments. 11 refs., 6 figs

  16. A 1372-element Large Scale Hemispherical Ultrasound Phased Array Transducer for Noninvasive Transcranial Therapy

    International Nuclear Information System (INIS)

    Song, Junho; Hynynen, Kullervo

    2009-01-01

    Noninvasive transcranial therapy using high-intensity focused ultrasound transducers has attracted high interest as a promising new modality for the treatment of brain-related diseases. We describe the development of a 1372-element large-scale hemispherical ultrasound phased-array transducer operating at a resonant frequency of 306 kHz. The hemispherical array has a diameter of 31 cm and a 15.5 cm radius of curvature. It is constructed from piezoelectric (PZT-4) tube elements 10 mm in diameter, 6 mm in length, and 1.4 mm in wall thickness. Each element is quasi-air-backed by attaching a cork-rubber membrane to the back of the element. The acoustic efficiency of the elements is determined to be approximately 50%. The large number of elements delivers high-power ultrasound and offers better beam steering and focusing capability. Comparisons of sound pressure-squared field measurements with theoretical calculations in water show that the array provides good beam steering and tight focusing capability over an effective volume of approximately 100 × 100 × 80 mm³, with a nominal focal spot size of approximately 2.3 mm in diameter at -6 dB. We also present its beam steering and focusing capability through an ex vivo human skull by measuring pressure-squared amplitude after phase corrections. These measurements show the same effective volume and focal spot sizes at -6 dB as those in water without the skull present. These results indicate that the array is suitable for use in noninvasive transcranial ultrasound therapy.
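
    The phase corrections mentioned above can be pictured with the simplest geometric focusing rule: drive each element so that its propagation phase to the target cancels, making all wavefronts arrive in phase at the focus. The sketch below is a minimal, idealized version (homogeneous water path, no skull aberration); the random hemispherical element layout, the focus position, and the sound speed are illustrative assumptions, not the authors' values.

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # m/s in water (assumed)
FREQ = 306e3             # array resonant frequency from the abstract, Hz

def focusing_phases(element_positions, focus, c=SPEED_OF_SOUND, f=FREQ):
    """Per-element drive phases that make all contributions arrive in
    phase at the focus (simple phase conjugation, skull ignored)."""
    k = 2 * np.pi * f / c                                   # wavenumber
    d = np.linalg.norm(element_positions - focus, axis=1)   # element-focus distances
    return (-k * d) % (2 * np.pi)                           # conjugate the travel phase

# Hypothetical hemispherical layout with the 15.5 cm radius of curvature.
rng = np.random.default_rng(0)
n = 1372
theta = np.arccos(rng.uniform(0, 1, n))   # polar angles over a hemisphere
phi = rng.uniform(0, 2 * np.pi, n)
R = 0.155
elems = R * np.column_stack([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             -np.cos(theta)])
phases = focusing_phases(elems, focus=np.array([0.01, 0.0, -0.05]))
```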

  17. THE EFFECT OF LARGE-SCALE MAGNETIC TURBULENCE ON THE ACCELERATION OF ELECTRONS BY PERPENDICULAR COLLISIONLESS SHOCKS

    International Nuclear Information System (INIS)

    Guo Fan; Giacalone, Joe

    2010-01-01

    We study the physics of electron acceleration at collisionless shocks that move through a plasma containing large-scale magnetic fluctuations. We numerically integrate the trajectories of a large number of electrons, which are treated as test particles moving in the time-dependent electric and magnetic fields determined from two-dimensional hybrid simulations (kinetic ions, fluid electrons). The large-scale magnetic fluctuations affect the electrons in a number of ways and lead to efficient and rapid energization at the shock front. Since the electrons mainly follow magnetic lines of force, the large-scale braiding of field lines in space allows the fast-moving electrons to cross the shock front several times, leading to efficient acceleration. Ripples in the shock front occurring at various scales also contribute to the acceleration by mirroring the electrons. Our calculation shows that this process favors electron acceleration at perpendicular shocks. The current study is also helpful in understanding the injection problem for electron acceleration by collisionless shocks. It is also shown that the spatial distribution of energetic electrons is similar to that seen in in situ observations. The process may be important for understanding energetic electrons in planetary bow shocks and interplanetary shocks, and for explaining herringbone structures seen in some type II solar radio bursts.
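
    Test-particle integration of this kind is commonly done with the Boris algorithm, which splits the Lorentz force into two electric half-kicks around a magnetic rotation. The sketch below is a generic non-relativistic single-step version with prescribed uniform fields; the actual study interpolates time-dependent fields from hybrid simulations, so this is only an illustration of the integrator, not the authors' code.

```python
import numpy as np

Q_OVER_M = -1.758820e11  # electron charge-to-mass ratio, C/kg

def boris_push(x, v, E, B, dt, qm=Q_OVER_M):
    """One Boris step for a charged test particle in given E and B fields."""
    v_minus = v + 0.5 * qm * dt * E            # first electric half-kick
    t = 0.5 * qm * dt * B                      # rotation vector
    s = 2 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * qm * dt * E         # second electric half-kick
    return x + v_new * dt, v_new

# Usage with illustrative uniform fields (the real fields vary in space and time):
x, v = np.zeros(3), np.array([1e6, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1e-9])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=1e-6)
```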

  18. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

    Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia... databases. We enable direct indexing of the Earth mover's distance in structures such as the R-tree and the VA-file by providing an accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...
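
    The filter-and-refine idea behind such lower bounds can be illustrated with the classical centroid bound for one-dimensional distributions: the distance between the weighted means never exceeds the EMD, so it can prune candidates before any exact distance is computed. The sketch below uses SciPy's 1D EMD and is only a stand-in for the paper's R-tree 'MinDist' bound; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def centroid_lower_bound(u_vals, v_vals, u_w, v_w):
    # |difference of weighted means| never exceeds the 1D EMD
    # (Kantorovich duality with the 1-Lipschitz function f(x) = x).
    return abs(np.average(u_vals, weights=u_w) - np.average(v_vals, weights=v_w))

def emd_range_query(query, library, eps):
    """Filter-and-refine: compute the exact EMD only for candidates
    that the cheap lower bound cannot prune."""
    q_vals, q_w = query
    hits = []
    for key, (vals, w) in library.items():
        if centroid_lower_bound(q_vals, vals, q_w, w) > eps:
            continue                      # pruned without an exact EMD
        if wasserstein_distance(q_vals, vals, q_w, w) <= eps:
            hits.append(key)
    return hits
```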

  19. Safeguarding of large scale reprocessing and MOX plants

    International Nuclear Information System (INIS)

    Howsley, R.; Burrows, B.; Longevialle, H. de; Kuroi, H.; Izumi, A.

    1997-01-01

    In May 1997, the IAEA Board of Governors approved the final measures of the "93+2" safeguards strengthening programme, thus improving the international non-proliferation regime by enhancing the effectiveness and efficiency of safeguards verification. These enhancements are not, however, a revolution in current practices, but rather an important step in the continuous evolution of the safeguards system. The principles embodied in 93+2 for broader access to information and increased physical access already apply, in a pragmatic way, to large-scale reprocessing and MOX fabrication plants. In these plants, qualitative measures and process monitoring play an important role, in addition to accountancy and material balance evaluations, in attaining the safeguards goals. This paper reflects on the safeguards approaches adopted for these large bulk-handling facilities and draws analogies, conclusions, and lessons for the forthcoming implementation of the 93+2 Programme. (author)

  20. Measurement Axis Searching Model for Terrestrial Laser Scans Registration

    Directory of Open Access Journals (Sweden)

    Shaoxing Hu

    2016-01-01

    Terrestrial lidar scans can now cover rather large areas, but point densities vary strongly because of the line-of-sight measurement principle, particularly in the potential overlaps between scans taken from different viewpoints. Most traditional methods focus on the registration algorithm and ignore the searching model. When traditional methods are used directly to align two point clouds, the local overlaps are often aligned well, but large biases are created in areas distant from the overlaps; this remains a critical unsolved problem. A novel measurement axis searching model (MASM) is therefore proposed in this paper. The method includes four steps: (1) principal axis fitting, (2) measurement axis generation, (3) low- and high-precision search, and (4) result generation. The principal axis gives an orientation to the point cloud; the search scope is limited by the measurement axis. The point cloud orientation can be adjusted gradually until the global optimum is reached using the low- and high-precision search. We perform experiments with simulated point clouds and real terrestrial laser scans. The results on simulated point clouds illustrate the processing steps of our method, and the results on real terrestrial laser scans show the sensitivity of the approach to indoor and outdoor scenes.
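
    The principal-axis step can be pictured as a plain PCA fit: the axis is the direction of largest variance of the cloud, and the coarse-to-fine search then rotates the cloud about axes derived from it. The sketch below shows only that fitting step under this PCA interpretation; the paper's own fitting and search details may differ.

```python
import numpy as np

def principal_axis(points):
    """Principal axis of an (n, 3) point cloud: the eigenvector of the
    covariance matrix with the largest eigenvalue (plain PCA)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, -1]                    # direction of largest variance

# The low- and high-precision search would then rotate the cloud about
# axes derived from this direction, first on a coarse angular grid and
# afterwards on a refined grid around the coarse optimum.
```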

  1. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large-scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment, including institutional partnerships, research marketing, a welcome culture, support for science mobility, and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces through which project management, university administration, researchers, and international partners can work together, exchange information, and improve processes, in order to recruit, support, and retain the brightest heads for a project.

  2. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  3. Large-Scale and Global Hydrology. Chapter 92

    Science.gov (United States)

    Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat

    2016-01-01

    Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental/ocean-basin-scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change, thus it is crucial that we continue to invest in our ability to monitor it.

  4. A Phenotype Classification of Internet Use Disorder in a Large-Scale High-School Study

    Directory of Open Access Journals (Sweden)

    Katajun Lindenberg

    2018-04-01

    Internet Use Disorder (IUD) affects numerous adolescents worldwide, and (Internet) Gaming Disorder, a specific subtype of IUD, has recently been included in DSM-5 and ICD-11. Epidemiological studies have identified prevalence rates up to 5.7% among adolescents in Germany. However, little is known about the risk development during adolescence and its association to education. The aim of this study was to: (a) identify a clinically relevant latent profile in a large-scale high-school sample; (b) estimate prevalence rates of IUD for distinct age groups; and (c) investigate associations to gender and education. N = 5387 adolescents out of 41 schools in Germany aged 11–21 were assessed using the Compulsive Internet Use Scale (CIUS). Latent profile analyses showed five profile groups with differences in CIUS response pattern, age and school type. IUD was found in 6.1% and high-risk Internet use in 13.9% of the total sample. Two peaks were found in prevalence rates, indicating the highest risk of IUD in age groups 15–16 and 19–21. Prevalence did not differ significantly between boys and girls. High-level education schools showed the lowest (4.9%) and vocational secondary schools the highest prevalence rate (7.8%). The differences between school types could not be explained by academic level.

  5. Facile Large-scale synthesis of stable CuO nanoparticles

    Science.gov (United States)

    Nazari, P.; Abdollahi-Nejand, B.; Eskandari, M.; Kohnehpoushi, S.

    2018-04-01

    In this work, a novel approach to synthesizing CuO nanoparticles is introduced. A sequential corrosion-and-detachment process is proposed for the growth and dispersion of CuO nanoparticles at the optimum pH value of eight. The produced CuO nanoparticles were 6 nm (±2 nm) in diameter and spherical in shape, with high crystallinity and uniformity in size. With this method, large-scale production of CuO nanoparticles (120 grams in an experimental batch) from Cu micro-particles was achieved, which may meet the market criteria for large-scale production of CuO nanoparticles.

  6. Thermodynamic analysis of the efficiency of high-temperature steam electrolysis system for hydrogen production

    Science.gov (United States)

    Mingyi, Liu; Bo, Yu; Jingming, Xu; Jing, Chen

    High-temperature steam electrolysis (HTSE), in principle the reverse process of a solid oxide fuel cell (SOFC), is a promising method for highly efficient large-scale hydrogen production. In our study, the overall efficiency of the HTSE system was calculated through electrochemical and thermodynamic analysis. A thermodynamic model of the efficiency of the HTSE system was established, and the quantitative effects of three key parameters, electrical efficiency (η_el), electrolysis efficiency (η_es), and thermal efficiency (η_th), on the overall efficiency (η_overall) of the HTSE system were investigated. Results showed that the contributions of η_el, η_es, and η_th to the overall efficiency were about 70%, 22%, and 8%, respectively. As temperature increased from 500 °C to 1000 °C, the effect of η_el on η_overall decreased gradually and the η_es effect remained almost constant, while the η_th effect increased gradually. The overall efficiency of the high-temperature gas-cooled reactor (HTGR) coupled with the HTSE system under different conditions was also calculated. With the increase of electrical, electrolysis, and thermal efficiency, the overall efficiency was anticipated to increase from 33% to a maximum of 59% at 1000 °C, which is over two times higher than that of conventional alkaline water electrolysis.
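
    As a back-of-the-envelope illustration of how component efficiencies combine, the toy calculation below simply multiplies them; treating η_overall as the plain product η_el · η_es · η_th is an assumption made here for illustration, not the paper's exact thermodynamic model, and the numerical values are invented.

```python
def overall_efficiency(eta_el, eta_es, eta_th):
    # Illustrative assumption: overall efficiency as the product of the
    # electrical, electrolysis, and thermal efficiencies.
    return eta_el * eta_es * eta_th

# Invented example values in plausible ranges for an HTGR-coupled system:
print(overall_efficiency(eta_el=0.50, eta_es=0.95, eta_th=0.97))  # ~0.46
```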

  7. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research so far shows that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization, and outsourcing. The article is part of a planned series.

  8. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
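
    The locality idea is easy to sketch: bin particles by grid cell and reorder the particle arrays so that members of the same cell sit contiguously in memory, letting charge accumulation sweep pages sequentially instead of at random. The snippet below is a modern NumPy illustration of that principle, not the original code; the array names and 2-D layout are assumptions.

```python
import numpy as np

def sort_particles_by_cell(x, y, vx, vy, nx, dx):
    # Flattened cell index of each particle on an nx-wide uniform grid.
    cell = (x // dx).astype(np.int64) + nx * (y // dx).astype(np.int64)
    # A stable sort keeps the relative order within each cell, so cheap,
    # repeated sorts suffice as particles slowly migrate between cells.
    order = np.argsort(cell, kind="stable")
    return x[order], y[order], vx[order], vy[order]
```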

  9. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time

  10. Opportunities and Challenges for Terrestrial Carbon Offsetting and Marketing, with Some Implications for Forestry in the UK

    Directory of Open Access Journals (Sweden)

    Maria Nijnik

    2010-12-01

    Background and Purpose: Climate change and its mitigation have become increasingly high-profile issues since the late 1990s, with the potential of forestry in carbon sequestration a particular focus. The purpose of this paper is to outline the importance of socio-economic considerations in this area. Opportunities for forestry to sequester carbon and the role of terrestrial carbon uptake credits in climate change negotiations are addressed, together with the feasibility of bringing terrestrial carbon offsets into the regulatory emission trading scheme. The paper discusses whether or not significant carbon offsetting and trading will occur on a large scale in the UK or internationally. Material and Methods: The paper reviews the literature on the socio-economic aspects of climate change mitigation via forestry (including the authors' research on this topic) to assess the potential for carbon offsetting and trading, and the likely scale of action. Results and Conclusion: We conclude that the development of appropriate socio-economic framework conditions (e.g. policies, tenure rights, including forest carbon ownership, and markets and incentives for creating and trading terrestrial carbon credits) is important in mitigating climate change through forestry projects, and we make suggestions for future research that would be required to support such developments.

  11. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2014-01-01

    The investment of large scale offshore wind farms is high in which the electrical system has a significant contribution to the total cost. As one of the key components, the cost of the connection cables affects the initial investment a lot. The development of cable manufacturing provides a vast...... and systematical way for the optimal selection of cables in large scale offshore wind farms....

  12. Empirical Analysis of High Efficient Remote Cloud Data Center Backup Using HBase and Cassandra

    OpenAIRE

    Chang, Bao Rong; Tsai, Hsiu-Fen; Chen, Chia-Yen; Guo, Cin-Long

    2015-01-01

    HBase, a master-slave framework, and Cassandra, a peer-to-peer (P2P) framework, are the two most commonly used large-scale distributed NoSQL databases, especially applicable to cloud computing with its high flexibility and scalability and the ease of big data processing. Regarding storage structure, different structures adopt distinct backup strategies to reduce the risks of data loss. This paper aims to realize highly efficient remote cloud data center backup using HBase and Cassandra, and in or...

  13. Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2018-03-01

    Among the members of biometric identifiers, the palmprint and the palmvein have received significant attention due to their stability, uniqueness, and non-intrusiveness. In this paper, we investigate the problem of palmprint/palmvein recognition and propose a Deep Convolutional Neural Network (DCNN) based scheme, namely PalmRCNN (short for palmprint/palmvein recognition using CNNs). The effectiveness and efficiency of PalmRCNN have been verified through extensive experiments conducted on benchmark datasets. In addition, though substantial effort has been devoted to palmvein recognition, it is still quite difficult for researchers to know the potential discriminating capability of the contactless palmvein. One of the root reasons is that a large-scale, publicly available dataset comprising high-quality contactless palmvein images is still lacking. To this end, a user-friendly acquisition device for collecting high-quality contactless palmvein images is first designed and developed in this work. Then, a large-scale palmvein image dataset is established, comprising 12,000 images acquired from 600 different palms in two separate collection sessions. The collected dataset is now publicly available.

  14. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  15. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
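
    The two routes compared in this record can be sketched directly: the traditional route integrates the radial distribution function, while the fluctuation route counts particles in open sub-volumes. The single-species formulas below are standard Kirkwood-Buff relations, but the function interfaces and the trapezoidal quadrature are illustrative choices, not the paper's code.

```python
import numpy as np

def kb_integral_rdf(r, g, r_max):
    """Running Kirkwood-Buff integral via the RDF:
    G(R) = 4*pi * integral_0^R (g(r) - 1) r^2 dr (trapezoidal rule)."""
    m = r <= r_max
    f = (g[m] - 1.0) * r[m] ** 2
    return 4.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r[m]))

def kb_integral_fluctuations(counts, v_sub):
    """Fluctuation route for one species: with particle counts N sampled
    in an open sub-volume V, G = V * (<N^2> - <N>^2 - <N>) / <N>^2.
    Finite size scaling then extrapolates G over a range of sub-volume sizes."""
    n = np.asarray(counts, dtype=float)
    return v_sub * (n.var() - n.mean()) / n.mean() ** 2
```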

  16. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  17. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance.

  18. Generating mock data sets for large-scale Lyman-α forest correlation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Font-Ribera, Andreu [Institut de Ciències de l' Espai (CSIC-IEEC), Campus UAB, Fac. Ciències, torre C5 parell 2, Bellaterra, Catalonia (Spain); McDonald, Patrick [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Miralda-Escudé, Jordi, E-mail: font@ieec.uab.es, E-mail: pvmcdonald@lbl.gov, E-mail: miralda@icc.ub.edu [Institució Catalana de Recerca i Estudis Avançats, Barcelona, Catalonia (Spain)

    2012-01-01

    Massive spectroscopic surveys of high-redshift quasars yield large numbers of correlated Lyα absorption spectra that can be used to measure large-scale structure. Simulations of these surveys are required to accurately interpret the measurements of correlations and correct for systematic errors. An efficient method to generate mock realizations of Lyα forest surveys is presented which generates a field over the lines of sight to the survey sources only, instead of having to generate it over the entire three-dimensional volume of the survey. The method can be calibrated to reproduce the power spectrum and one-point distribution function of the transmitted flux fraction, as well as the redshift evolution of these quantities, and is easily used for modeling any survey systematic effects. We present an example of how these mock surveys are applied to predict the measurement errors in a survey with similar parameters as the BOSS quasar survey in SDSS-III.
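
    The core trick, generating the field only where it is needed, can be contrasted with the simplest grid-based approach. The toy below draws one sightline's Gaussian field from a 1D power spectrum via FFT and maps it to transmitted flux with a lognormal transformation; the function, the parameters a and b, and the normalization are illustrative assumptions, and the paper's method instead draws correlated values directly at the survey lines of sight.

```python
import numpy as np

def mock_flux_skewer(n_pix, dv, power, a=0.3, b=1.5, seed=0):
    """One toy Lyman-alpha skewer: a Gaussian field delta with 1D power
    spectrum power(k), drawn via FFT, then mapped to transmitted flux
    with a lognormal transformation F = exp(-a * exp(b * delta))."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.rfftfreq(n_pix, d=dv)
    amp = np.sqrt(power(k) * n_pix / (2 * dv))   # schematic normalization
    modes = amp * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
    delta = np.fft.irfft(modes, n=n_pix)
    return np.exp(-a * np.exp(b * delta))        # flux in (0, 1]

# Example with an invented power spectrum shape:
flux = mock_flux_skewer(2048, dv=50.0,
                        power=lambda k: 100.0 / (1.0 + (500.0 * k) ** 2))
```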

  19. Generating mock data sets for large-scale Lyman-α forest correlation measurements

    International Nuclear Information System (INIS)

    Font-Ribera, Andreu; McDonald, Patrick; Miralda-Escudé, Jordi

    2012-01-01

    Massive spectroscopic surveys of high-redshift quasars yield large numbers of correlated Lyα absorption spectra that can be used to measure large-scale structure. Simulations of these surveys are required to accurately interpret the measurements of correlations and correct for systematic errors. An efficient method to generate mock realizations of Lyα forest surveys is presented which generates a field over the lines of sight to the survey sources only, instead of having to generate it over the entire three-dimensional volume of the survey. The method can be calibrated to reproduce the power spectrum and one-point distribution function of the transmitted flux fraction, as well as the redshift evolution of these quantities, and is easily used for modeling any survey systematic effects. We present an example of how these mock surveys are applied to predict the measurement errors in a survey with similar parameters as the BOSS quasar survey in SDSS-III

  20. Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-12-01

    With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important factors for participants in spatial crowdsourcing. In this paper, we propose an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR) based spatial anonymous data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiments, we show that our proposed method is faster than the existing method and is very efficient as the scale increases.
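
    MBR-based cloaking itself is simple to illustrate: report the bounding rectangle of several nearby task points instead of an exact position. The sketch below shows only that step; the names and the choice of the k nearest tasks are assumptions, and the paper's method additionally organizes these rectangles in an estimated R-tree on the server side.

```python
import numpy as np

def cloak_mbr(user_xy, task_points, k=8):
    """Replace a user's exact location by the minimum bounding rectangle
    (MBR) of the k task points nearest to it, so the server only ever
    sees a region."""
    pts = np.asarray(task_points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(user_xy, dtype=float), axis=1)
    nearest = pts[np.argsort(d)[:k]]
    return nearest.min(axis=0), nearest.max(axis=0)  # lower-left, upper-right

# Example: cloak a worker among 100 random task locations.
rng = np.random.default_rng(0)
lo, hi = cloak_mbr((0.4, 0.6), rng.random((100, 2)), k=8)
```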

  1. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  2. ability in Large Scale Land Acquisitions in Kenya

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    Kenya's national planning strategy, Vision 2030. Agriculture, natural resource exploitation, and infrastruc- ... sitions due to high levels of poverty and unclear or insecure land tenure rights in Kenya. Inadequate social ... lease to a private company over the expansive Yala Swamp to undertake large-scale irrigation farming.

  3. Development of polymers for large scale roll-to-roll processing of polymer solar cells

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert

    Conjugated polymers' potential to both absorb light and transport current, as well as the perspective of low-cost and large-scale production, has made these kinds of materials attractive in solar cell research.... The research field of polymer solar cells (PSCs) is rapidly progressing along three lines: improvement of efficiency and stability, together with the introduction of large-scale production methods. All three lines are explored in this work. The thesis describes low band gap polymers and why these are needed.... Polymers of this type display broader absorption, resulting in better overlap with the solar spectrum and potentially higher current density. Synthesis, characterization and device performance of three series of polymers illustrate how the absorption spectrum of polymers can be manipulated synthetically

  4. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
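
    The subset idea can be shown with a few lines of SQL driven from Python: store sequences with their annotations, select a focused slice, and emit it as a FASTA-style library for the search program. The schema, table names, and data below are illustrative stand-ins, not seqdb_demo's actual layout.

```python
import sqlite3

# A minimal relational layout in the spirit of seqdb_demo (names invented).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
    CREATE TABLE hit (query TEXT, acc TEXT, evalue REAL);
""")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                [("P1", "E. coli", "MKV..."), ("P2", "H. sapiens", "MAL...")])

# Build a focused sub-library, e.g. only bacterial sequences, so a
# similarity search runs against fewer, more relevant sequences, which
# also sharpens the search statistics.
subset = con.execute(
    "SELECT acc, seq FROM protein WHERE taxon = ?", ("E. coli",)).fetchall()
for acc, seq in subset:
    print(f">{acc}\n{seq}")   # FASTA-style output for the search program
```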

  5. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
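
    The load-balancing idea can be reduced to a one-line rule: give each device a share of the work proportional to its measured throughput, so both finish a time step together. The sketch below shows that static split; it is only an illustration of the principle, not HOSTA's actual scheme, which must also respect the GPU's smaller memory ("store-poor") and the multi-block grid structure.

```python
def split_cells(n_cells, cpu_rate, gpu_rate):
    """Static load balance: partition grid cells in proportion to each
    device's measured throughput (cells per second)."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = int(round(n_cells * gpu_share))
    return n_cells - n_gpu, n_gpu   # (cpu cells, gpu cells)

# e.g. the GPU measured ~1.3x faster than the two CPUs together:
print(split_cells(1_000_000, cpu_rate=1.0, gpu_rate=1.3))  # (434783, 565217)
```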

  6. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  7. Large area flexible polymer solar cells with high efficiency enabled by imprinted Ag grid and modified buffer layer

    International Nuclear Information System (INIS)

    Lu, Shudi; Lin, Jie; Liu, Kong; Yue, Shizhong; Ren, Kuankuan; Tan, Furui; Wang, Zhijie; Jin, Peng; Qu, Shengchun; Wang, Zhanguo

    2017-01-01

    To take full advantage of polymer semiconductors for the realization of large-area flexible photovoltaic devices, we fabricate polymer solar cells on polyethylene terephthalate (PET) substrates with an imprinted Ag grid as the transparent electrode. The key fabrication step is the adoption of a modified PEDOT:PSS (PH1000) solution for spin-coating the buffer layer, which forms a compact contact with the substrate. In comparison with devices using the intrinsic PEDOT:PSS buffer layer, the improved devices present a much higher efficiency of 6.51%, even with a large device area of 2.25 cm². Subsequent characterizations reveal that such devices show impressive performance stability as the bending angle is enlarged to 180° and the bending count reaches 1000 cycles. In addition to providing a general methodology for constructing highly efficient, flexible polymer solar cells, this paper offers insight into the device working mechanism under bending conditions.

  8. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Science.gov (United States)

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  9. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to (non-ductile) fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  10. ASSOCIATION OF ³He-RICH SOLAR ENERGETIC PARTICLES WITH LARGE-SCALE CORONAL WAVES

    Energy Technology Data Exchange (ETDEWEB)

    Bučík, Radoslav [Institut für Astrophysik, Georg-August-Universität Göttingen, D-37077, Göttingen (Germany); Innes, Davina E. [Max-Planck-Institut für Sonnensystemforschung, D-37077, Göttingen (Germany); Mason, Glenn M. [Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 (United States); Wiedenbeck, Mark E., E-mail: bucik@mps.mpg.de [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States)

    2016-12-10

    Small, ³He-rich solar energetic particle (SEP) events have been commonly associated with extreme-ultraviolet (EUV) jets and narrow coronal mass ejections (CMEs) that are believed to be the signatures of magnetic reconnection, involving field lines open to interplanetary space. The elemental and isotopic fractionation in these events are thought to be caused by processes confined to the flare sites. In this study, we identify 32 ³He-rich SEP events observed by the Advanced Composition Explorer, near the Earth, during the solar minimum period 2007–2010, and we examine their solar sources with the high-resolution Solar Terrestrial Relations Observatory (STEREO) EUV images. Leading the Earth, STEREO-A has provided, for the first time, a direct view of ³He-rich flares, which are generally located on the Sun's western hemisphere. Surprisingly, we find that about half of the ³He-rich SEP events in this survey are associated with large-scale EUV coronal waves. An examination of the wave front propagation, the source-flare distribution, and the coronal magnetic field connections suggests that the EUV waves may affect the injection of ³He-rich SEPs into interplanetary space.

  11. Root structural and functional dynamics in terrestrial biosphere models--evaluation and recommendations.

    Science.gov (United States)

    Warren, Jeffrey M; Hanson, Paul J; Iversen, Colleen M; Kumar, Jitendra; Walker, Anthony P; Wullschleger, Stan D

    2015-01-01

    There is wide breadth of root function within ecosystems that should be considered when modeling the terrestrial biosphere. Root structure and function are closely associated with control of plant water and nutrient uptake from the soil, plant carbon (C) assimilation, partitioning and release to the soils, and control of biogeochemical cycles through interactions within the rhizosphere. Root function is extremely dynamic and dependent on internal plant signals, root traits and morphology, and the physical, chemical and biotic soil environment. While plant roots have significant structural and functional plasticity to changing environmental conditions, their dynamics are noticeably absent from the land component of process-based Earth system models used to simulate global biogeochemical cycling. Their dynamic representation in large-scale models should improve model veracity. Here, we describe current root inclusion in models across scales, ranging from mechanistic processes of single roots to parameterized root processes operating at the landscape scale. With this foundation we discuss how existing and future root functional knowledge, new data compilation efforts, and novel modeling platforms can be leveraged to enhance root functionality in large-scale terrestrial biosphere models by improving parameterization within models, and introducing new components such as dynamic root distribution and root functional traits linked to resource extraction.

  12. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  13. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  14. GRIMP: A web- and grid-based tool for high-speed analysis of large-scale genome-wide association using imputed data.

    NARCIS (Netherlands)

    K. Estrada Gil (Karol); A. Abuseiris (Anis); F.G. Grosveld (Frank); A.G. Uitterlinden (André); T.A. Knoch (Tobias); F. Rivadeneira Ramirez (Fernando)

    2009-01-01

    The current fast growth of genome-wide association studies (GWAS), combined with the now common computationally expensive imputation, requires that large user groups have online access to high-performance computing resources capable of analyzing rapidly and efficiently millions of genetic

  15. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  16. A family of conjugate gradient methods for large-scale nonlinear equations.

    Science.gov (United States)

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the methods require little storage and the subproblem can be solved easily. Compared with existing solution methods for this problem, global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
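
    A generic member of this class of methods can be sketched in a few lines: a derivative-free search direction of conjugate gradient type, a backtracking line search, and a hyperplane projection step of Solodov-Svaiter type. The parameter choices, the Fletcher-Reeves-style beta, and the line-search criterion below are standard illustrative picks, not the paper's specific family.

```python
import numpy as np

def cg_projection_solve(F, x0, tol=1e-8, max_iter=500, sigma=1e-4, rho=0.5):
    """Derivative-free CG projection method for a monotone system F(x) = 0.
    Needs only F-evaluations and O(n) storage per iteration."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        a = 1.0                                 # backtracking line search:
        while True:                             # accept z = x + a*d once
            z = x + a * d                       # -F(z)^T d >= sigma*a*||d||^2
            Fz = F(z)
            if -(Fz @ d) >= sigma * a * (d @ d) or a < 1e-12:
                break
            a *= rho
        if np.linalg.norm(Fz) <= tol:           # z already solves the system
            return z
        # hyperplane projection step (Solodov-Svaiter type)
        x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        Fx_new = F(x)
        beta = (Fx_new @ Fx_new) / (Fx @ Fx)    # Fletcher-Reeves-style update
        d = -Fx_new + beta * d
        Fx = Fx_new
    return x

# Usage: F(x) = x + sin(x) is monotone with its root at the origin.
print(cg_projection_solve(lambda v: v + np.sin(v), np.ones(5)))
```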

  17. Development of Large Concrete Object Geometrical Model Based on Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Zaczek-Peplinska Janina

    2015-02-01

    The paper presents periodic control measurements of movements and a survey of a concrete dam on the Dunajec River in Rożnów, Poland. The topographical survey was conducted using the terrestrial laser scanning technique. The goal of the survey was data collection and the creation of a geometrical model. The acquired cross-sections and horizontal sections were used to create a numerical model of the object's behaviour under various loads, depending on the changing water level in the reservoir. Modelling was accomplished using the finite element technique. During the project, an assessment was made of terrestrial laser scanning techniques for this type of study of large hydrotechnical objects such as gravity water dams. The developed model can be used for deformation analysis and displacement prognosis.

  18. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  19. Terrestrial pesticide exposure of amphibians: an underestimated cause of global decline?

    Science.gov (United States)

    Brühl, Carsten A; Schmidt, Thomas; Pieper, Silvia; Alscher, Annika

    2013-01-01

    Amphibians, a class of animals in global decline, are present in agricultural landscapes characterized by agrochemical inputs. The effects of pesticides on terrestrial life stages of amphibians, such as juvenile and adult frogs, toads, and newts, are little understood, and a specific risk assessment for pesticide exposure, mandatory for other vertebrate groups, is currently not conducted. We studied the effects of seven pesticide products on juvenile European common frogs (Rana temporaria) in an agricultural overspray scenario. Mortality ranged from 100% after one hour to 40% after seven days at the recommended label rate of currently registered products. The demonstrated toxicity is alarming, and a large-scale negative effect of terrestrial pesticide exposure on amphibian populations seems likely. Terrestrial pesticide exposure might be underestimated as a driver of amphibian decline, calling for more attention in conservation efforts; the risk assessment procedures in place do not protect this vanishing animal group.

  20. Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree

    International Nuclear Information System (INIS)

    Gligorov, V V; Williams, M

    2013-01-01

    High-level triggering is a vital component of many modern particle physics experiments. This paper describes a modification to the standard boosted decision tree (BDT) classifier, the so-called bonsai BDT, that has the following important properties: it is more efficient than traditional cut-based approaches; it is robust against detector instabilities; and it is very fast. Thus, it is fit for purpose for the online running conditions faced by any large-scale data acquisition system.
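
    The speed of the bonsai BDT comes from discretizing the input variables onto a coarse grid before training, so that the classifier response over the finite grid can be precomputed into a lookup table and evaluated in constant time online. The sketch below shows the discretization step with an ordinary gradient-boosted classifier standing in for the experiment's BDT; the variables, binning, and toy labels are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def bonsai_features(X, bin_edges):
    """Discretize each input variable onto a coarse grid. Since the trained
    classifier then only ever sees a finite set of inputs, its response can
    be precomputed into a lookup table, the essence of the 'bonsai' idea."""
    return np.column_stack([np.digitize(X[:, j], edges)
                            for j, edges in enumerate(bin_edges)])

# Hypothetical two-variable trigger quantities with invented labels.
rng = np.random.default_rng(1)
X = rng.exponential(size=(2000, 2))
y = (X.sum(axis=1) + rng.normal(scale=0.3, size=2000) > 2.0).astype(int)
edges = [np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)) for j in range(2)]

clf = GradientBoostingClassifier().fit(bonsai_features(X, edges), y)
```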