WorldWideScience

Sample records for large-scale geologic processes

  1. Impact of Large-scale Geological Architectures On Recharge

    Science.gov (United States)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessing groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data are often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on the parameter uncertainty caused by geological heterogeneity in the absence of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of the differences in recharge zones, and subsequently in well protection zones, emphasizes the importance of assessing large-scale geological architecture in regional-scale hydrological modeling in a non-deterministic way. Geostatistical modeling carried out in a transition-probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.
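
    The transition-probability idea mentioned above can be illustrated with a minimal sketch: a one-dimensional Markov chain whose transition matrix gives the probability of switching between facies from one cell to the next, from which multiple equally plausible realizations are drawn. Facies names, matrix values and cell counts below are invented for illustration and are not the authors' model:

```python
import numpy as np

# Hypothetical facies and a 1-D transition-probability matrix (per grid cell).
# Row i, column j = probability that facies j follows facies i in the next cell.
facies = ["sand", "silt", "clay"]
T = np.array([
    [0.80, 0.15, 0.05],
    [0.20, 0.70, 0.10],
    [0.05, 0.15, 0.80],
])

def simulate_realization(n_cells, start=0, seed=None):
    """Generate one facies realization as a 1-D Markov chain."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_cells - 1):
        states.append(rng.choice(len(facies), p=T[states[-1]]))
    return [facies[s] for s in states]

# Multiple equally plausible realizations, e.g. for a flow-model ensemble.
realizations = [simulate_realization(50, seed=s) for s in range(10)]
print(realizations[0][:10])
```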

  2. Process Principles for Large-Scale Nanomanufacturing.

    Science.gov (United States)

    Behrens, Sven H; Breedveld, Victor; Mujica, Maritza; Filler, Michael A

    2017-06-07

    Nanomanufacturing-the fabrication of macroscopic products from well-defined nanoscale building blocks-in a truly scalable and versatile manner is still far from our current reality. Here, we describe the barriers to large-scale nanomanufacturing and identify routes to overcome them. We argue for nanomanufacturing systems consisting of an iterative sequence of synthesis/assembly and separation/sorting unit operations, analogous to those used in chemicals manufacturing. In addition to performance and economic considerations, phenomena unique to the nanoscale must guide the design of each unit operation and the overall process flow. We identify and discuss four key nanomanufacturing process design needs: (a) appropriately selected process break points, (b) synthesis techniques appropriate for large-scale manufacturing, (c) new structure- and property-based separations, and (d) advances in stabilization and packaging.

  3. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.
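
    As a hedged illustration of the vertically-integrated idea (not the authors' exact formulation), a vertical-equilibrium model replaces the 3D two-phase equations by equations for depth-averaged quantities, assuming hydrostatic fluid pressures between the formation bottom and top:

```latex
\hat{S}_c(x,y,t) \;=\;
\frac{\displaystyle\int_{\zeta_B}^{\zeta_T}\phi(x,y,z)\,s_c(x,y,z,t)\,\mathrm{d}z}
     {\displaystyle\int_{\zeta_B}^{\zeta_T}\phi(x,y,z)\,\mathrm{d}z},
\qquad
p_\alpha(x,y,z,t) \;=\; P_\alpha(x,y,t) \;-\; \rho_\alpha\, g\,\bigl(z - \zeta_B\bigr),
```

    where the hat denotes the depth-averaged CO2 saturation, phi the porosity, P_alpha a reference pressure for phase alpha at the formation bottom zeta_B, and the fine-scale saturation profile s_c(z), including the capillary fringe, is reconstructed from the coarse variables.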

  4. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    of square of films’ thickness. Production of thin elastomer films with microstructures on one or both surfaces is therefore the crucial step in the manufacturing. The manufacturing process is still not perfect and further optimization is required. Smart processing techniques are required at Danfoss Polypower...... is sputtered on the microstructured surface of the film. Two such films are laminated to make a single DEAP laminate with two microstructured surfaces. The lamination process introduces two problems: 1) it may entrap air bubbles and dust at the interface which will cause the films to break down at the operating......

  5. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  6. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs.
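
    For orientation only (the report's own formulations are not reproduced here), local-scale unsaturated flow of the kind reviewed is commonly written as the Richards equation with a pressure-dependent conductivity, e.g. the exponential (Gardner-type) model often used in stochastic analyses of heterogeneous media:

```latex
\frac{\partial \theta(h)}{\partial t} \;=\; \nabla\!\cdot\!\bigl[\,K(h)\,\nabla (h + z)\,\bigr],
\qquad
K(h) \;=\; K_s\, e^{\alpha h} \quad (h \le 0),
```

    where h is the pressure head, theta the volumetric water content, K_s the saturated conductivity and alpha a sorptive-length parameter that may vary spatially.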

  7. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
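
    Giraph itself is programmed in Java against its vertex-centric computation API; the sketch below is only a language-agnostic illustration (in Python) of the superstep-based "think like a vertex" model the book teaches, using single-source shortest paths as the classic example. All function and variable names are illustrative, not Giraph's.

```python
import math

def sssp_superstep(graph, values, messages, source):
    """One Pregel-style superstep of single-source shortest paths.

    graph:    {vertex: [(neighbor, edge_weight), ...]}
    values:   {vertex: current best distance}
    messages: {vertex: [candidate distances received this superstep]}
    Returns (new_values, outgoing_messages, halted).
    """
    new_values, out = dict(values), {}
    active = False
    for v in graph:
        candidates = list(messages.get(v, []))
        if v == source and values[v] == math.inf:
            candidates.append(0.0)
        best = min(candidates, default=math.inf)
        if best < new_values[v]:                    # value improved: stay active
            new_values[v] = best
            for nbr, w in graph[v]:                 # send messages along out-edges
                out.setdefault(nbr, []).append(best + w)
            active = True
    return new_values, out, not active

graph = {"a": [("b", 1.0)], "b": [("c", 2.0)], "c": []}
values = {v: math.inf for v in graph}
msgs, halted = {}, False
while not halted:                                    # run supersteps until quiescence
    values, msgs, halted = sssp_superstep(graph, values, msgs, "a")
print(values)                                        # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```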

  8. UAV Data Processing for Large Scale Topographical Mapping

    Science.gov (United States)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large-scale topographical mapping in developing countries is a prominent challenge for the geospatial industry today. On the one hand demand is increasing significantly, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr. 4/2011 on Geospatial Information in Indonesia, large-scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Large-scale topographical mapping usually relies on conventional aerial survey campaigns to provide high-resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of so-called Unmanned Aerial Vehicles (UAVs) offer an alternative, semi-photogrammetric means of aerial data acquisition suitable for a relatively small Area of Interest (AOI); in Indonesia such an area can serve as a mapping unit, since mapping is usually organized at the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage site of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment concentrates in the first place on the object features of the temple. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main product of the topographic mapping activity. Incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy at a high resolution of 5 cm Ground Sampling Distance (GSD). Finally, this result will serve as a benchmark for alternative geospatial data acquisition in the future in which it can support
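
    For reference, the standard photogrammetric relation between Ground Sampling Distance and the flight parameters discussed above (generic geometry, not figures taken from this campaign) is:

```latex
\mathrm{GSD} \;=\; \frac{p \cdot H}{f},
```

    where p is the physical pixel pitch of the camera sensor, H the flying height above ground and f the focal length; for instance, a 4.4 µm pixel behind a 24 mm lens flown at roughly 270 m above ground gives approximately the 5 cm GSD mentioned above (values chosen purely for illustration).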

  9. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    , among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which..... The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions in the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. Relative to the non-gray implementation, the gray...
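
    For context, the weighted-sum-of-gray-gases model (WSGGM) referred to above approximates the total emissivity of the H2O/CO2 mixture as a sum over a few gray gases plus one clear gas; a gray implementation uses this total emissivity directly, whereas a non-gray implementation solves the radiative transfer equation separately for each gray gas:

```latex
\varepsilon(T, pL) \;=\; \sum_{i=0}^{N} a_i(T)\,\bigl[\,1 - e^{-\kappa_i\, p\, L}\,\bigr],
\qquad
\sum_{i=0}^{N} a_i(T) \;=\; 1,
```

    where a_i are temperature-dependent weighting factors, kappa_i the gray-gas absorption coefficients, p the partial pressure of the participating species and L the path length.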

  10. Possible implications of large scale radiation processing of food

    Science.gov (United States)

    Zagórski, Z. P.

    Large-scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. Another factor taken into account is the saturation of the market with the new product. In successful projects the share of irradiation cost is low, and the demand for the better product is covered. The limited availability of radiation sources makes it difficult to achieve even modest market saturation with food subjected to correct radiation treatment. The implementation of food preservation requires a careful selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The conditions for successful processing are best fulfilled in the group of dry foods, expensive spices in particular.

  11. LUCI: A facility at DUSEL for large-scale experimental study of geologic carbon sequestration

    Energy Technology Data Exchange (ETDEWEB)

    Peters, C. A.; Dobson, P.F.; Oldenburg, C.M.; Wang, J. S. Y.; Onstott, T.C.; Scherer, G.W.; Freifeld, B.M.; Ramakrishnan, T.S.; Stabinski, E.L.; Liang, K.; Verma, S.

    2010-10-01

    LUCI, the Laboratory for Underground CO2 Investigations, is an experimental facility being planned for the DUSEL underground laboratory in South Dakota, USA. It is designed to study vertical flow of CO2 in porous media over length scales representative of leakage scenarios in geologic carbon sequestration. The plan for LUCI is a set of three vertical column pressure vessels, each of which is ~500 m long and ~1 m in diameter. The vessels will be filled with brine and sand or sedimentary rock. Each vessel will have an inner column to simulate a well for deployment of down-hole logging tools. The experiments are configured to simulate CO2 leakage by releasing CO2 into the bottoms of the columns. The scale of the LUCI facility will permit measurements to study CO2 flow over pressure and temperature variations that span supercritical to subcritical gas conditions. It will enable observation or inference of a variety of relevant processes such as buoyancy-driven flow in porous media, Joule-Thomson cooling, thermal exchange, viscous fingering, residual trapping, and CO2 dissolution. Experiments are also planned for reactive flow of CO2 and acidified brines in caprock sediments and well cements, and for CO2-enhanced methanogenesis in organic-rich shales. A comprehensive suite of geophysical logging instruments will be deployed to monitor experimental conditions as well as provide data to quantify vertical resolution of sensor technologies. The experimental observations from LUCI will generate fundamental new understanding of the processes governing CO2 trapping and vertical migration, and will provide valuable data to calibrate and validate large-scale model simulations.

  12. High-throughput solution processing of large-scale graphene

    Science.gov (United States)

    Tung, Vincent C.; Allen, Matthew J.; Yang, Yang; Kaner, Richard B.

    2009-01-01

    The electronic properties of graphene, such as high charge carrier concentrations and mobilities, make it a promising candidate for next-generation nanoelectronic devices. In particular, electrons and holes can undergo ballistic transport on the sub-micrometre scale in graphene and do not suffer from the scale limitations of current MOSFET technologies. However, it is still difficult to produce single-layer samples of graphene and bulk processing has not yet been achieved, despite strenuous efforts to develop a scalable production method. Here, we report a versatile solution-based process for the large-scale production of single-layer chemically converted graphene over the entire area of a silicon/SiO2 wafer. By dispersing graphite oxide paper in pure hydrazine we were able to remove oxygen functionalities and restore the planar geometry of the single sheets. The chemically converted graphene sheets that were produced have the largest area reported to date (up to 20 × 40 µm), making them far easier to process. Field-effect devices have been fabricated by conventional photolithography, displaying currents that are three orders of magnitude higher than previously reported for chemically produced graphene. The size of these sheets enables a wide range of characterization techniques, including optical microscopy, scanning electron microscopy and atomic force microscopy, to be performed on the same specimen.

  13. Visualization of large scale geologically related data in virtual 3D scenes with OpenGL

    Science.gov (United States)

    Seng, Dewen; Liang, Xi; Wang, Hongxia; Yue, Guoying

    2007-11-01

    This paper demonstrates a method for three-dimensional (3D) reconstruction and visualization of large-scale multidimensional surficial, geological and mine planning data with the programmable visualization environment OpenGL. A simulation system developed by the authors is presented for importing, filtering and visualizing multidimensional geologically related data. The approach for the visual simulation of a complicated mining engineering environment implemented in the system is described in detail. Aspects such as the presentation of multidimensional data with spatial dependence, navigation in the surficial and geological frame of reference and in time, and interaction techniques are presented. The system supports real 3D landscape representations. Furthermore, the system provides many visualization methods for rendering multidimensional data within virtual 3D scenes and combines them with several navigation techniques. Real data derived from an iron mine in Wuhan City, China, demonstrate the effectiveness and efficiency of the system. A case study with the results and benefits achieved by using the real 3D representations and navigations of the system is given.
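
    As a minimal, hypothetical illustration of the kind of data preparation such an OpenGL-based system performs before rendering (not the authors' code), a gridded DEM can be converted into vertex and triangle-index arrays ready for upload to the graphics pipeline:

```python
import numpy as np

def dem_to_mesh(z, dx=1.0, dy=1.0):
    """Convert a 2-D elevation grid into vertex and triangle-index arrays
    suitable for uploading to an OpenGL vertex/index buffer."""
    ny, nx = z.shape
    xs, ys = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy)
    vertices = np.column_stack([xs.ravel(), ys.ravel(), z.ravel()]).astype(np.float32)

    idx = np.arange(ny * nx).reshape(ny, nx)
    # Two triangles per grid cell.
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    triangles = np.concatenate([np.column_stack([a, b, c]),
                                np.column_stack([b, d, c])]).astype(np.uint32)
    return vertices, triangles

z = np.random.rand(4, 5)            # stand-in for real mine/terrain elevations
v, t = dem_to_mesh(z, dx=25.0, dy=25.0)
print(v.shape, t.shape)              # (20, 3) (24, 3)
```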

  14. Framework for Managing the Very Large Scale Integration Design Process

    Directory of Open Access Journals (Sweden)

    Sabah Al-Fedaghi

    2012-01-01

    Full Text Available Problem statement: The VLSI design cycle was described in terms of successive stages and substages; it starts with system specification and ends with packaging. At the next descriptive level, currently known methodologies (e.g., flowchart based, object-oriented based) lack a global conceptual representation suitable for managing the VLSI design process. Technical details were intermixed with tool-dependent and implementation issues such as control flow and data structure. It was important to fill the gap between these two levels of description because VLSI chip manufacturing was a complex management project and providing a conceptual detailed depiction of the design process would assist in managing operations on the great number of generated artifacts. Approach: This study introduces a conceptual framework representing flows and transformations of various descriptions (e.g., circuits, technical sketches) to be used as a tracking apparatus for directing traffic during the VLSI design process. The proposed methodology views a description as an integral element of a process, called a flow system, constructed from six generic operations and designed to handle descriptions. It draws maps of flows of representations (called flowthings) that run through the design flow. These flowthings are created, transformed (processed), transferred, released and received by various functions along the design flow at different levels (a hierarchy). The resultant conceptual framework can be used to support designers with computer-aided tools to organize and manage chains of tasks. Results: The proposed model for managing the VLSI design process was characterized by being conceptual (no technical or implementation details) and can be uniformly applied at different levels of design and to various kinds of artifacts. The methodology is applied to describe the VLSI physical design stage that includes partitioning, floorplanning and placement, routing, compaction and extraction
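
    A minimal sketch of how such a flow system might be tracked in software is given below; the five operations named in the abstract (create, process, release, transfer, receive) are used, a sixth stage "arrive" is added here as an assumption, and the artefact names are invented, so this is only an illustration of the bookkeeping idea, not the authors' framework:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    # Operations named in the abstract, plus ARRIVE (assumed sixth stage).
    CREATE = auto()
    PROCESS = auto()
    RELEASE = auto()
    TRANSFER = auto()
    RECEIVE = auto()
    ARRIVE = auto()

@dataclass
class Flowthing:
    """A design artefact (e.g. a circuit description) tracked through a flow system."""
    name: str
    history: list = field(default_factory=list)

    def move(self, stage: Stage, sphere: str):
        """Record that the artefact underwent `stage` inside the given design sphere."""
        self.history.append((sphere, stage))

netlist = Flowthing("block_A_netlist")
netlist.move(Stage.CREATE, "logic design")
netlist.move(Stage.RELEASE, "logic design")
netlist.move(Stage.TRANSFER, "logic design")
netlist.move(Stage.RECEIVE, "physical design")
netlist.move(Stage.PROCESS, "physical design")   # partitioning, floorplanning, ...
print(netlist.history)
```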

  15. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in downselecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  16. Large-scale ordering of nanoparticles using viscoelastic shear processing

    NARCIS (Netherlands)

    Zhao, Qibin; Finlayson, Chris E.; Snoswell, David R E; Haines, Andrew; Schäfer, Christian; Spahn, Peter; Hellmann, Goetz P.; Petukhov, Andrei V.; Herrmann, Lars; Burdet, Pierre; Midgley, Paul A.; Butler, Simon; Mackley, Malcolm; Guo, Qixin; Baumberg, Jeremy J.

    2016-01-01

    Despite the availability of elaborate varieties of nanoparticles, their assembly into regular superstructures and photonic materials remains challenging. Here we show how flexible films of stacked polymer nanoparticles can be directly assembled in a roll-to-roll process using a bending-induced oscil

  17. Modified Augmented Lagrange Multiplier Methods for Large-Scale Chemical Process Optimization

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Chemical process optimization can be described as large-scale nonlinear constrained minimization. The modified augmented Lagrange multiplier methods (MALMM) for large-scale nonlinear constrained minimization are studied in this paper. The Lagrange function contains the penalty terms on equality and inequality constraints and the methods can be applied to solve a series of bound constrained sub-problems instead of a series of unconstrained sub-problems. The steps of the methods are examined in full detail. Numerical experiments are made for a variety of problems, from small to very large-scale, which show the stability and effectiveness of the methods in large-scale problems.
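
    As a hedged sketch of the augmented-Lagrangian idea only (equality constraints, a generic SciPy solver for the bound-constrained subproblems, and a toy objective; this is not the MALMM implementation from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, bounds, mu=10.0, tol=1e-6, iters=20):
    """Minimize f(x) s.t. c(x) = 0 and simple bounds, via a sequence of
    bound-constrained subproblems on the augmented Lagrangian."""
    x, lam = np.asarray(x0, float), np.zeros(len(c(x0)))
    for _ in range(iters):
        La = lambda x: f(x) + lam @ c(x) + 0.5 * mu * np.sum(c(x) ** 2)
        x = minimize(La, x, method="L-BFGS-B", bounds=bounds).x
        viol = c(x)
        if np.linalg.norm(viol) < tol:
            break
        lam = lam + mu * viol          # first-order multiplier update
        mu *= 2.0                      # tighten the penalty
    return x, lam

# Toy "process" problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1, 0 <= x <= 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_opt, lam_opt = augmented_lagrangian(f, c, [0.0, 0.0], bounds=[(0, 1), (0, 1)])
print(x_opt)   # approximately [0.5, 0.5]
```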

  18. Improved Large-Scale Process Cooling Operation through Energy Optimization

    Directory of Open Access Journals (Sweden)

    Kriti Kapoor

    2013-11-01

    Full Text Available This paper presents a study based on real plant data collected from chiller plants at the University of Texas at Austin. It highlights the advantages of operating the cooling processes based on an optimal strategy. A multi-component model is developed for the entire cooling process network. The model is used to formulate and solve a multi-period optimal chiller loading problem, posed as a mixed-integer nonlinear programming (MINLP problem. The results showed that an average energy savings of 8.57% could be achieved using optimal chiller loading as compared to the historical energy consumption data from the plant. The scope of the optimization problem was expanded by including a chilled water thermal storage in the cooling system. The effect of optimal thermal energy storage operation on the net electric power consumption by the cooling system was studied. The results include a hypothetical scenario where the campus purchases electricity at wholesale market prices and an optimal hour-by-hour operating strategy is computed to use the thermal energy storage tank.
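
    A heavily simplified, linearized sketch of the multi-period loading idea is given below (constant chiller COP, lossless storage, illustrative prices and loads; the study itself solves an MINLP with part-load chiller behaviour, so this only illustrates the load-shifting mechanism enabled by the storage tank):

```python
import numpy as np
from scipy.optimize import linprog

hours = 4
price  = np.array([30.0, 30.0, 80.0, 80.0])   # $/MWh electricity (illustrative)
demand = np.array([ 5.0,  5.0, 10.0, 10.0])   # MWh cooling per hour
cop, q_max, s_max = 5.0, 12.0, 8.0            # chiller COP, chiller and tank capacity

# Decision variables per hour: chiller cooling q_t and tank state of charge s_t.
# Objective: electricity cost = sum_t price_t * q_t / COP.
cost = np.concatenate([price / cop, np.zeros(hours)])

# Energy balance: q_t + s_{t-1} - s_t = demand_t   (s_0 = 0, lossless tank)
A_eq = np.zeros((hours, 2 * hours))
for t in range(hours):
    A_eq[t, t] = 1.0                  # chiller output
    A_eq[t, hours + t] = -1.0         # cooling charged into the tank
    if t > 0:
        A_eq[t, hours + t - 1] = 1.0  # cooling discharged from previous storage
b_eq = demand

bounds = [(0, q_max)] * hours + [(0, s_max)] * hours
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x[:hours])   # chiller runs harder in cheap hours, the tank covers the peak
```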

  19. Large-scale ordering of nanoparticles using viscoelastic shear processing

    Science.gov (United States)

    Zhao, Qibin; Finlayson, Chris E.; Snoswell, David R. E.; Haines, Andrew; Schäfer, Christian; Spahn, Peter; Hellmann, Goetz P.; Petukhov, Andrei V.; Herrmann, Lars; Burdet, Pierre; Midgley, Paul A.; Butler, Simon; Mackley, Malcolm; Guo, Qixin; Baumberg, Jeremy J.

    2016-06-01

    Despite the availability of elaborate varieties of nanoparticles, their assembly into regular superstructures and photonic materials remains challenging. Here we show how flexible films of stacked polymer nanoparticles can be directly assembled in a roll-to-roll process using a bending-induced oscillatory shear technique. For sub-micron spherical nanoparticles, this gives elastomeric photonic crystals termed polymer opals showing extremely strong tunable structural colour. With oscillatory strain amplitudes of 300%, crystallization initiates at the wall and develops quickly across the bulk within only five oscillations. The resulting structure of random hexagonal close-packed layers is improved by shearing bidirectionally, alternating between two in-plane directions. Our theoretical framework indicates how the reduction in shear viscosity with increasing order of each layer accounts for these results, even when diffusion is totally absent. This general principle of shear ordering in viscoelastic media opens the way to manufacturable photonic materials, and forms a generic tool for ordering nanoparticles.

  20. Properties of large-scale melt-processed YBCO samples

    Science.gov (United States)

    Gauss, S.; Elschner, S.; Bestgen, H.

    Magnetic bearings and superconducting permanent magnets are some of the first possible applications of bulk high Tc superconductors. Large samples were prepared by a new melt process starting from reacted YBCO 123 and 211 powders. The addition of PtO2 to the mixture led to reduced 211 inclusion size and better homogeneity. Simultaneously the density of microcracks dividing the a-b basal plane was reduced. For testing the overall magnetic properties of these samples magnetization and levitation force measurements were performed. In comparison to samples without PtO2 addition a strong increase in the magnetization M and the repulsion force from a magnet were observed. The maximum in the field dependence of M increased to more than 1000 G. According to the time dependence of the trapped field after a field cooling experiment an acceptable flux creep at 77 K for a long-term application was achieved.

  1. Efficient Large-Scale 5D Seismic Data Acquisition and Processing using Rank-Minimization

    Science.gov (United States)

    Kumar, R.; Herrmann, F. J.; Sharan, S.; Aravkin, A.; Wason, H.; Lopez, O.; Davis, D.

    2016-12-01

    Seismic data collection is becoming challenging because of increased demands for high-quality, long-offset and wide-azimuth data. Leveraging ideas from compressive sensing (CS), in this work we establish cost-effective acquisition and processing techniques whose cost is no longer dominated by survey area size but by the sparsity of the seismic data volumes. In the first part of the abstract, we establish connections between random time dithering and jittered sampling in space. Specifically, we recover high-quality 5D seismic data volumes from time-jittered marine acquisition where the average inter-shot time is significantly reduced, leading to cheaper surveys due to fewer overlapping shots. The time-jittered acquisition, in conjunction with shot separation by a Singular-Value Decomposition (SVD)-free, factorization-based rank-minimization approach, allows us to recover high-quality 5D seismic data volumes. Results are illustrated for simulations of simultaneous time-jittered continuous recording for a 3D ocean-bottom cable survey, where we outperform existing techniques that use sparsity in transform domains, with an order-of-magnitude computational speedup and 1/20th of the memory. The second part of the abstract focuses on leveraging low-rank structure in seismic data to solve extremely large data recovery (interpolation) problems. We introduce a large-scale SVD-free optimization framework that is robust with respect to outliers and that uses information on the support. We test the efficacy of the proposed interpolation framework on large-scale 5D seismic data, generated from the geologically complex synthetic 3D Compass velocity model, where 80% of the data has been removed. Our findings show that major computational and memory gains are possible compared to curvelet-based reconstruction.
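
    The SVD-free, factorization-based rank minimization described above can be sketched very schematically as alternating least squares over the observed entries of a low-rank matricization (a toy 2-D stand-in for the real 5-D volumes; sizes, rank and regularization below are illustrative, not the authors' settings):

```python
import numpy as np

def als_complete(Y, mask, rank=5, lam=1e-2, iters=50, seed=0):
    """Low-rank completion of Y (observed where mask==True) via L and R factors,
    avoiding any SVD of the full data volume."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                     # update each row of L
            cols = mask[i]
            Rc = R[cols]
            L[i] = np.linalg.solve(Rc.T @ Rc + reg, Rc.T @ Y[i, cols])
        for j in range(n):                     # update each row of R
            rows = mask[:, j]
            Lc = L[rows]
            R[j] = np.linalg.solve(Lc.T @ Lc + reg, Lc.T @ Y[rows, j])
    return L @ R.T

# Synthetic test: a rank-3 "frequency slice" with roughly 80% of entries removed.
rng = np.random.default_rng(1)
truth = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
mask = rng.random(truth.shape) > 0.8           # keep ~20% of the entries
est = als_complete(np.where(mask, truth, 0.0), mask, rank=3)
print(np.linalg.norm(est - truth) / np.linalg.norm(truth))   # relative recovery error
```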

  2. A large-scale phylogeny of Synodontis (Mochokidae, Siluriformes) reveals the influence of geological events on continental diversity during the Cenozoic.

    Science.gov (United States)

    Pinton, Aurélie; Agnèse, Jean-François; Paugy, Didier; Otero, Olga

    2013-03-01

    To explain the spatial variability of fish taxa at a large scale, two alternative proposals are usually invoked. In recent years, the debate has centred on the relative roles of present and historical processes in shaping biodiversity patterns. In Africa, the processes that determine the large-scale distribution of fishes, and the role of historical contingencies, have been under-investigated, given that most phylogenetic studies focus on the history of the Great Lakes. Here, we explore phylogeographic events in the evolutionary history of Synodontis (Mochokidae, Siluriformes) across Africa during the Cenozoic, focusing on the putative role of historical processes. We discuss how known geological events together with hydrographical changes contributed to shaping Synodontis biogeographical history. Synodontis was chosen on the basis of its high diversity and distribution in Africa: it consists of approximately 120 species that are widely distributed in all hydrographic basins except the Maghreb and South Africa. We propose the most comprehensive phylogeny of this catfish genus. Our results provide support for the 'hydrogeological' hypothesis, which proposes that palaeohydrological changes linked with the geological context may have been the cause of diversification of freshwater fish deep in the Tertiary. More precisely, the two main geological structures that helped shape the hydrographical network in Africa, namely the Central African Shear zone and the East African rift system, appear as strong drivers of Synodontis diversification and evolution. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Design of Large-Scale Sensory Data Processing System Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bing Tang

    2012-04-01

    Full Text Available In large-scale Wireless Sensor Networks (WSNs), with the limited computing power and storage capacity of sensor nodes, there is an urgent demand for high-performance sensory data processing. This study investigates the interconnection of wireless sensor networks with cloud-based storage and computing infrastructure. It proposes the use of distributed databases to store sensory data and the MapReduce programming model for large-scale parallel processing of sensory data. In our prototype large-scale sensory data processing system, the Hadoop Distributed File System (HDFS) and HBase are used for sensory data storage, and Hadoop MapReduce is used as the execution framework for data processing applications. The design and implementation of this system are described in detail. The simulation of an environment temperature surveillance application is used to verify the feasibility and reasonableness of the system, which also proves that it significantly improves the data processing capability of WSNs.
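
    As a hedged sketch of the MapReduce pattern for sensory data, written in the Hadoop Streaming style (mapper and reducer read and write tab-separated lines on stdin/stdout; the CSV record layout and the script name are assumptions, not taken from the paper):

```python
import sys

def mapper():
    """Emit (sensor_id, temperature) for each raw record, assumed to be
    CSV lines of the form: sensor_id,timestamp,temperature."""
    for line in sys.stdin:
        try:
            sensor_id, _ts, temp = line.strip().split(",")
            print(f"{sensor_id}\t{float(temp)}")
        except ValueError:
            continue                         # skip malformed records

def reducer():
    """Average the temperatures for each sensor (input is sorted by key)."""
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total / count}")
            current, total, count = key, 0.0, 0
        total += float(value)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

    Such a script (here hypothetically named sensor_avg.py) would be launched through Hadoop Streaming, e.g. hadoop jar hadoop-streaming.jar -mapper "python sensor_avg.py map" -reducer "python sensor_avg.py reduce" -input ... -output ..., with HDFS or HBase exports providing the input records.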

  4. A KPI-based process monitoring and fault detection framework for large-scale processes.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-02-09

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
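
    A minimal sketch of the static, least-squares-based idea: regress the KPI on the process variables using fault-free data and flag a fault when the squared prediction residual exceeds a control limit. The simulated data, threshold choice and test statistic are generic illustrations, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fault-free training data: process variables X and KPI y (simulated).
n, p = 500, 6
X = rng.standard_normal((n, p))
theta_true = rng.standard_normal(p)
y = X @ theta_true + 0.1 * rng.standard_normal(n)

theta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares KPI model
resid = y - X @ theta
threshold = np.quantile(resid ** 2, 0.99)            # 99% control limit

def detect(x_new, y_new):
    """Return True if the KPI prediction residual indicates a fault."""
    return (y_new - x_new @ theta) ** 2 > threshold

x_f = rng.standard_normal(p)
print(detect(x_f, x_f @ theta_true))          # normal sample -> usually False
print(detect(x_f, x_f @ theta_true + 2.0))    # KPI-relevant fault -> True
```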

  5. Large-Scale Digital Geologic Map Databases and Reports of the North Coal District in Afghanistan

    Science.gov (United States)

    Hare, Trent M.; Davis, Philip A.; Nigh, Devon; Skinner, James A.; SanFilipo, John R.; Bolm, Karen S.; Fortezzo, Corey M.; Galuszka, Donna; Stettner, William R.; Sultani, Shafiqullah; Nader, Billal

    2008-01-01

    This report describes the Afghanistan coal resource maps and associated databases that have been digitally captured and maps that have been thus far converted to GIS databases. Several maps by V/O Technoexport, USSR (VOTU) and Bundesanstalt für Bodenforschung (BGR), Hannover, Germany, are captured here. Most of the historical coal exploration is concentrated in north-central Afghanistan, a region referred to as the 'North Coal District', and almost all of the coal-related maps found in Afghanistan Geological Survey (AGS) archives to date cover various locations within that district as shown in the index map. Most of the maps included herein were originally scanned during U.S. Geological Survey (USGS) site visits to Kabul in November 2004 and February 2006. The scanning was performed using equipment purchased by U.S. Agency for International Development (USAID) and U.S. Trade and Development Agency (USTDA) and installed at the AGS by USGS. Many of these maps and associated reports exist as single unpublished copies in the AGS archives, so these efforts served not only to provide a basis for digital capturing, but also as a means for preserving these rare geologic maps and reports. The data included herein represent most of the coal-related reports and maps that are available in the AGS archives. This report excludes the limited cases when a significant portion of a report's text could not be located, but it does not exclude reports with missing plates. The vector files are released using the Environmental Systems Research Institute (ESRI) Personal Geodatabase, ESRI shapefile vector format, and the open Geography Markup Language (GML) format. Scanned images are available in JPEG and, when rectified, GeoTIFF format. The authors wish to acknowledge the contributions made by the staff of the AGS Records and Coal Departments whose valuable assistance made it possible to locate and catalogue the data provided herein. We especially acknowledge the efforts of particular

  6. Geological storage of captured carbon dioxide as a large-scale carbon mitigation option

    Science.gov (United States)

    Celia, Michael A.

    2017-05-01

    Carbon capture and storage (CCS) involves the capture of CO2 emissions from power plants and other large stationary sources and the subsequent injection of the captured CO2 into deep geological formations. This is the only technology currently available that allows continued use of fossil fuels while simultaneously reducing emissions of CO2 to the atmosphere. Although the subsurface injection and subsequent migration of large amounts of CO2 involve a number of challenges, many decades of research in the earth sciences, focused on fluid movement in porous rocks, provide a strong foundation on which to analyze the system. These analyses indicate that the environmental risks associated with large CO2 injections appear to be manageable.

  7. Evaluating Experience-Based Geologic Field Instruction: Lessons Learned from A Large-Scale Eye-Tracking Experiment

    Science.gov (United States)

    Tarduno, J. A.; Walders, K.; Bono, R. K.; Pelz, J.; Jacobs, R.

    2015-12-01

    A course centered on experience-based learning in field geology has been offered ten times at the University of Rochester. The centerpiece of the course is a 10-day field excursion to California featuring a broad cross-section of the geology of the state, from the San Andreas Fault to Death Valley. Here we describe results from a large-scale eye-tracking experiment aimed at understanding how experts and novices acquire visual geologic information. One ultimate goal of the project is to determine whether expert gaze patterns can be quantified to improve the instruction of beginning geology students. Another goal is to determine if aspects of the field experience can be transferred to the classroom/laboratory. Accordingly, ultra-high resolution segmented panoramic images have been collected at key sites visited during the field excursion. We have found that strict controls are needed in the field to obtain meaningful data; this often involves behavior atypical of geologists (e.g. limiting the field of view prior to data collection and placing time limits on scene viewing). Nevertheless some general conclusions can be made from a select data set. After an initial quick search, experts tend to exhibit scanning behavior that appears to support hypothesis testing. Novice fixations appear to define a scattered search pattern and/or one distracted by geologic noise in a scene. Noise sources include modern erosion features and vegetation. One way to quantify noise is through the use of saliency maps. With the caveat that our expert data set is small, our preliminary analysis suggests that experts tend to exhibit top-down behavior (indicating hypothesis driven responses) whereas novices show bottom-up gaze patterns, influenced by more salient features in a scene. We will present examples and discuss how these observations might be used to improve instruction.

  8. Nongray-gas Effects in Modeling of Large-scale Oxy-fuel Combustion Processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    , among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a large-scale utility boiler is numerically investigated...... cases. The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions in the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. Relative to the non-gray implementation...

  9. Large scale production and downstream processing of a recombinant porcine parvovirus vaccine

    NARCIS (Netherlands)

    Maranga, L.; Rueda, P.; Antonis, A.F.G.; Vela, C.; Langeveld, J.P.M.; Casal, J.I.; Carrondo, M.J.T.

    2002-01-01

    Porcine parvovirus (PPV) virus-like particles (VLPs) constitute a potential vaccine for prevention of parvovirus-induced reproductive failure in gilts. Here we report the development of a large scale (25 l) production process for PPV-VLPs with baculovirus-infected insect cells. A low multiplicity of

  10. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) is studied, and the strategy utilizing those techniques is also presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.
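
    A small sketch of why analytical derivatives help a gradient-based SQP solver (generic SciPy SLSQP with dense arrays; the paper's sparse-matrix machinery is not reproduced here, and the toy objective and constraint are invented). Supplying exact gradients and constraint Jacobians spares the solver the finite-difference evaluations it would otherwise perform:

```python
import numpy as np
from scipy.optimize import minimize

# Toy flowsheet-style objective: quadratic "operating cost" with a mass-balance constraint.
Q = np.diag(np.arange(1.0, 6.0))

def f(x):              # objective
    return 0.5 * x @ Q @ x

def grad_f(x):         # analytical gradient (cheap and exact)
    return Q @ x

cons = {"type": "eq",
        "fun": lambda x: np.array([x.sum() - 10.0]),
        "jac": lambda x: np.ones((1, x.size))}        # analytical constraint Jacobian

x0 = np.full(5, 2.0)
res = minimize(f, x0, jac=grad_f, constraints=[cons], method="SLSQP")
print(res.x, res.nfev)   # fewer function evaluations than with numerical derivatives
```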

  11. Large scale production and downstream processing of a recombinant porcine parvovirus vaccine

    NARCIS (Netherlands)

    Maranga, L.; Rueda, P.; Antonis, A.F.G.; Vela, C.; Langeveld, J.P.M.; Casal, J.I.; Carrondo, M.J.T.

    2002-01-01

    Porcine parvovirus (PPV) virus-like particles (VLPs) constitute a potential vaccine for prevention of parvovirus-induced reproductive failure in gilts. Here we report the development of a large scale (25 l) production process for PPV-VLPs with baculovirus-infected insect cells. A low multiplicity of

  12. Multi-stage evolution process of large scale landslides at the Putanpunas stream, Taiwan

    Science.gov (United States)

    Lin, Ming-Lang; Lee, Kuo-Chen; Lo, Chia-Ming; Weng, Meng-Chia; Lee, Shun-Min

    2016-04-01

    This study used multi-temporal terrain and remote sensing images to investigate the geomorphological evolution of the Putanpunas stream caused by large-scale landslides over the last decade. We conducted an analysis of the landslide evolution process within the study area, which included a multi-temporal terrain analysis, remote sensing interpretation, surface displacement analysis, and mechanism investigation. By integrating the results from these analyses, we provided explanations for the topographic and geomorphologic action processes of the deep-seated landslides as well as the development of the potential collapsing mechanisms within the study area. Then, the discrete element method was used to simulate the process of landslide movement and deposition. The results show that the evolution process of large-scale landslides in the Putanpunas stream can be divided into four stages, namely downcutting of the stream gully and decompression of the river gully in the early stage, creep and deformation of the rock slope, sliding surface development of the deformed bands in the rock strata, and movement of the sliding mass. The results of terrain analysis and interpretation show topographical changes in the alluvial fan downstream and the deposits in the midstream and downstream segments of the Putanpunas Stream between 2005 and 2009. In 2009, torrential rainfall induced large-scale landslides that greatly altered the terrain of the Putanpunas Stream and the alluvial fan. There still exists 7.2 × 10⁷ m³ of unstable colluvium accumulated at the slope surface and stream gully within the upstream and midstream areas. In 2012, further large-scale landslides turned the colluvial layer into debris flows that cut across the Ryukyu Terraces downstream to the downstream segment of the Laonong Stream to the southwest. This greatly changed later debris flows and alluvial fan deposits. Key Words: large-scale landslides, multi-temporal terrain, remote sensing, discrete element method

  13. Large-scale slope remodelling by landslides - Geomorphic diversity and geological controls, Kamienne Mts., Central Europe

    Science.gov (United States)

    Migoń, Piotr; Jancewicz, Kacper; Różycka, Milena; Duszyński, Filip; Kasprzak, Marek

    2017-07-01

    The Kamienne Mts. in the Sudetes, Central Europe, bear widespread evidence of landsliding which mainly occurred at the boundary between Carboniferous and Permian clastic sedimentary rocks and overlying Permian volcanic and sub-volcanic rocks. 47 individual landslides have been delimited using a combination of LiDAR-derived DEM analysis and field mapping. They have been characterized through a range of geomorphometric parameters and cluster analysis reveals four major groups in terms of surface expression and consequently, likely origin and history. Spatial analysis confirms distinct association of landslides with the steepest terrain and north to east aspect of slopes, but distance from lithological contact emerges as the critical parameter. > 80% of the total landslide area is located within 200 m of a contact, on either side of it. Joint measurements within head scarps at selected landslides indicate that displacements took place along steeply dipping joints in the volcanic cap, but the existence of low angle detachment surfaces in the underlying sedimentary formations is inferred. The spatial distribution of slope deformations coupled with apparently different (although yet unspecified) ages of individual landslides suggests that remodelling of the mountain range by landslides is an ongoing process. Geomorphic history of the area excludes glacial or fluvial erosion and resultant slope de-buttressing as the cause of instability. Rather, landslides are considered as mechanisms by which slopes which have become too high and steep due to long-term differential erosion restore their strength equilibrium.

  14. Large-scale continuous process to vitrify nuclear defense waste: operating experience with nonradioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Cosper, M B; Randall, C T; Traverso, G M

    1982-01-01

    The developmental program underway at SRL has demonstrated the vitrification process proposed for the sludge processing facility of the DWPF on a large scale. DWPF design criteria for production rate, equipment lifetime, and operability have all been met. The expected authorization and construction of the DWPF will result in the safe and permanent immobilization of a major quantity of existing high level waste. 11 figures, 4 tables.

  15. Large scale geological characterisation

    Energy Technology Data Exchange (ETDEWEB)

    Guillemot, D.; Yven, B.; Lefranc, M.; Trouiller, A.; Geraud, Y.; Esteban, L. [Andra - Agence Nationale pour la Gestion des Dechets Radioactifs, 92 - Chatenay Malabry (France); Marache, A.; Riss, J.; Denis, A. [Bordeaux-1 Univ., CDGA (GHYMAC), Talence (France); Yven, B.; Trouiller, A. [Andra - Agence Nationale pour la Gestion des Dechets Radioactifs, Service Milieu Geologique, 92 - Chatenay Malabry (France); Lefranc, M.; Beaudoin, B.; Chiles, J.P. [Ecole des Mines de Paris, Centre de Geosciences, 77 - Fontainebleau (France); Ravenne, C. [Institut Francais du Petrole, 92 - Rueil Malmaison (France); Bouchez, J.L.; Esteban, L.; Siqueira, R. [Toulouse Univ., 31 (France); Toulouse and GdR CNRS FORPRO, 31 (France); Esteban, L. [Strasbourg Univ., EOST, 67 (France); Now at Geological Survey of Canada-Pacific, Sidney, B.C. (Canada)

    2007-07-01

    This session gathers 4 articles dealing with: the seismic imaging of the variability in the Callovo-Oxfordian argillite (D. Guillemot, B. Yven); the identification of homogeneous zones from well logging measurements by statistical tools (A. Marache, J. Riss, A. Denis, B. Yven, A. Trouiller); the geostatistical characterization of Callovo-Oxfordian clay variability from high resolution log data (M. Lefranc, B. Beaudoin, J.P. Chiles, D. Guillemot, C. Ravenne, A. Trouiller); and the Magnetic properties, magnetic mineralogy and fabric of the argillites of the Meuse/Haute-Marne URL (J.L. Bouchez, L. Esteban, Y. Geraud, A. Trouiller, R. Siqueira)

  16. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    A large-scale ship plane segmentation intelligent workshop is a new concept, and there is little related research in China or abroad. The mode of production needs to be transformed from the existing Industry 2.0 (or partial Industry 3.0) level, i.e. from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation a great number of issues must be resolved in terms of both management and technology, such as the evolution of the workshop structure, the development of intelligent equipment and changes in the business model, along with the reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of a large-scale ship plane segmentation intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation towards a plane segmentation intelligent workshop.

  17. STEADY-STATE HIERARCHICAL INTELLIGENT CONTROL OF LARGE-SCALE INDUSTRIAL PROCESSES

    Institute of Scientific and Technical Information of China (English)

    WAN Baiwu

    2004-01-01

    This paper considers the fourth stage of development of hierarchical control of industrial processes, the intelligent control and optimization stage, and reviews what the author and his Group have been investigating for the past decade in the on-line steady-state hierarchical intelligent control of large-scale industrial processes (LSIP). The paper first gives a definition of intelligent control of large-scale systems, and then reviews the use of neural networks for identification and optimization, the use of expert systems to solve some kinds of hierarchical multi-objective optimization problems through an intelligent decision unit (ID), the use of fuzzy logic control, and the use of iterative learning control. Several implementation examples are introduced. The paper also reviews other main achievements of the Group. Finally, it gives a perspective on future development.

  18. Evaluating and Grading Students in Large-Scale Image Processing Courses.

    Science.gov (United States)

    Artner, Nicole M; Janusch, Ines; Kropatsch, Walter G

    2015-01-01

    In undergraduate practical courses, it is common to work with groups of 100 or more students. These large-scale courses bring their own challenges. For example, course problems are too small and lack "the big picture"; grading becomes burdensome and repetitive for the teaching staff; and it is difficult to detect cheating. Based on their experience with a traditional large-scale practical course in image processing, the authors developed a novel course approach to teaching "Introduction to Digital Image Processing" (or EDBV, from the German course title Einführung in die Digitale Bild-Verarbeitung) for all undergraduate students of media informatics and visual computing and medical informatics at the TU Wien.

  19. Large-scale calculations of the beta-decay rates and r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Borzov, I.N.; Goriely, S. [Inst. d`Astronomie et d`Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Pearson, J.M. [Inst. d`Astronomie et d`Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium)]|[Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)

    1998-06-01

    An approximation to a self-consistent model of the ground state and β-decay properties of neutron-rich nuclei is outlined. The structure of the β-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the β-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)

  20. Development of polymers for large scale roll-to-roll processing of polymer solar cells

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert

    ...Polymers of this type display broader absorption, resulting in better overlap with the solar spectrum and potentially higher current density. Synthesis, characterization and device performance of three series of polymers, illustrating how the absorption spectrum of polymers can be manipulated synthetically and how this affects the PSC parameters, are presented. It is generally found that it is possible to synthetically control the absorption spectrum of conjugated polymer systems. One way to alter the spectrum is by incorporating alternating donor-acceptor motifs, resulting in an additional optical......Development of polymers for large-scale roll-to-roll processing of polymer solar cells: conjugated polymers' potential to both absorb light and transport current, as well as the prospect of low-cost, large-scale production, has made these kinds of materials attractive in solar cell research......

  1. Controls on boundary layer ventilation: Boundary layer processes and large-scale dynamics

    Science.gov (United States)

    Sinclair, V. A.; Gray, S. L.; Belcher, S. E.

    2010-06-01

    Midlatitude cyclones are important contributors to boundary layer ventilation. However, it is uncertain how efficient such systems are at transporting pollutants out of the boundary layer, and variations between cyclones are unexplained. In this study 15 idealized baroclinic life cycles, with a passive tracer included, are simulated to identify the relative importance of two transport processes: horizontal divergence and convergence within the boundary layer and large-scale advection by the warm conveyor belt. Results show that the amount of ventilation is insensitive to surface drag over a realistic range of values. This indicates that although boundary layer processes are necessary for ventilation they do not control the magnitude of ventilation. A diagnostic for the mass flux out of the boundary layer has been developed to identify the synoptic-scale variables controlling the strength of ascent in the warm conveyor belt. A very high level of correlation (R2 values exceeding 0.98) is found between the diagnostic and the actual mass flux computed from the simulations. This demonstrates that the large-scale dynamics control the amount of ventilation, and the efficiency of midlatitude cyclones to ventilate the boundary layer can be estimated using the new mass flux diagnostic. We conclude that meteorological analyses, such as ERA-40, are sufficient to quantify boundary layer ventilation by the large-scale dynamics.

  2. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    -up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process to be studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400......-650°C and at elevated pressure. In this paper, hydrodynamic modeling equations for dense fluidized bed and freeboard are applied for the prediction of the performance of a large-scale regeneration reactor. These equations can partly explain the differences in modeling results observed with a simpler...

  3. Big data Mining Using Very-Large-Scale Data Processing Platforms

    Directory of Open Access Journals (Sweden)

    Ms. K. Deepthi

    2016-02-01

    Full Text Available Big Data consists of large-volume, complex, growing data sets from multiple, heterogeneous sources. With the tremendous development of networking, data storage, and data collection capacity, Big Data are now rapidly expanding in all science and engineering domains, including the physical, biological and biomedical sciences. The MapReduce programming model provides the parallel processing capability needed to analyze such large-scale data. MapReduce allows easy development of scalable parallel applications to process big data on large clusters of commodity machines. Google's MapReduce and its open-source equivalent Hadoop are powerful tools for building such applications.

  4. The Measurement and Control of Diameter in Large-Scale Part Processing

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on laser-scanning measurement technology, a method of on-line dynamic non-contact measurement and feedback control of the processing dimension, i.e. the double-edge laser-scanned large-diameter on-line dynamic measurement and control system, is presented, which can be used to measure diameter during large-scale machine part processing. In this paper, the working principle, overall structure, and microcomputer real-time control and data processing system are discussed in detail, the method of ...

  5. Titan's global geologic processes

    Science.gov (United States)

    Malaska, Michael; Lopes, Rosaly M. C.; Schoenfeld, Ashley; Birch, Samuel; Hayes, Alexander; Williams, David A.; Solomonidou, Anezina; Janssen, Michael A.; Le Gall, Alice; Soderblom, Jason M.; Neish, Catherine; Turtle, Elizabeth P.; Cassini RADAR Team

    2016-10-01

    We have mapped the Cassini SAR imaged areas of Saturn's moon Titan in order to determine the geological properties that modify the surface [1]. We used the SAR dataset for mapping, but incorporated data from radiometry, VIMS, ISS, and SARTopo for terrain unit determination. This work extends our analyses of the mid-latitude/equatorial Afekan Crater region [2] and of the southern and northern polar regions [3]. We placed Titan terrains into six broad terrain classes: craters, mountain/hummocky, labyrinth, plains, dunes, and lakes. We also extended the fluvial mapping done by Burr et al. [4], and defined areas as potential cryovolcanic features [5]. We found that hummocky/mountainous and labyrinth areas are the oldest units on Titan, and that lakes and dunes are among the youngest. Plains units cover the largest surface area, followed by the dunes unit. Radiometry data suggest that most of Titan's surface is covered in high-emissivity materials, consistent with organic materials, with only minor exposures of low-emissivity materials consistent with water ice, primarily in the mountain and hummocky areas and in crater rims and ejecta [6, 7]. From examination of terrain orientation, we find that landscape evolution in the mid-latitude and equatorial regions is driven by aeolian processes, while polar landscapes are shaped by fluvial, lacustrine, and possibly dissolution or volatilization processes involving cycling organic materials [3, 8]. Although important in deciphering Titan's terrain evolution, impact processes play a very minor role in the modification of Titan's landscape [9]. We find no evidence for large-scale aqueous cryovolcanic deposits. References: [1] Lopes, R.M.C. et al. (2010) Icarus, 205, 540–558. [2] Malaska, M.J. et al. (2016) Icarus, 270, 130–161. [3] Birch et al., in revision. [4] Burr et al. (2013) GSA Bulletin, 125, 299–321. [5] Lopes et al. JGR: Planets, 118, 1–20. [6] Janssen et al. (2009) Icarus, 200, 222–239. [7

  6. Large-scale processes in the upper layers of the Indian Ocean inferred from temperature climatology

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.; PrasannaKumar, S.; Navelkar, G.S.

    [Extracted figure captions] Figure 5. The distribution of (a) amplitude (°C) and (b) phase (deg.) of the seasonal cycle of temperature at 100 m for the semi-annual periodicity. Note that the phase values range from 0 to 180 degrees. Figure 6. The distribution of (a) amplitude (dynes cm⁻²) ...

  7. On scale and magnitude of pressure build-up induced by large-scale geologic storage of CO2

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Q.; Birkholzer, J. T.

    2011-05-01

    The scale and magnitude of pressure perturbation and brine migration induced by geologic carbon sequestration is discussed assuming a full-scale deployment scenario in which enough CO2 is captured and stored to make relevant contributions to global climate change mitigation. In this scenario, the volumetric rates and cumulative volumes of CO2 injection would be comparable to or higher than those related to existing deep-subsurface injection and extraction activities, such as oil production. Large-scale pressure build-up in response to the injection may limit the dynamic storage capacity of suitable formations, because over-pressurization may fracture the caprock, may drive CO2/brine leakage through localized pathways, and may cause induced seismicity. On the other hand, laterally extensive sedimentary basins may be less affected by such limitations because (i) local pressure effects are moderated by pressure propagation and brine displacement into regions far away from the CO2 storage domain; and (ii) diffuse and/or localized brine migration into overlying and underlying formations allows for pressure bleed-off in the vertical direction. A quick analytical estimate of the extent of pressure build-up induced by industrial-scale CO2 storage projects is presented. Also discussed are pressure perturbation and attenuation effects simulated for two representative sedimentary basins in the USA: the laterally extensive Illinois Basin and the partially compartmentalized southern San Joaquin Basin in California. These studies show that the limiting effect of pressure build-up on dynamic storage capacity is not as significant as suggested by Ehlig-Economides and Economides, who considered closed systems without any attenuation effects.
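
    The analytical estimate referenced above is not reproduced in the record. As a rough, hypothetical illustration of how far an injection-induced pressure perturbation can reach, the sketch below evaluates the classical single-phase radial-flow (Theis-type) solution for pressure buildup around a well; all parameter values are illustrative assumptions, and the actual analysis in the cited study may use a different formulation.

    ```python
    import numpy as np
    from scipy.special import exp1  # exponential integral E1

    def pressure_buildup(r, t, Q, mu, k, b, phi, ct):
        """Theis-type pressure buildup (Pa) at radius r (m) and time t (s)
        for constant-rate injection Q (m^3/s) into a confined aquifer of
        thickness b (m), permeability k (m^2), porosity phi, fluid viscosity
        mu (Pa*s), and total compressibility ct (1/Pa)."""
        u = phi * mu * ct * r**2 / (4.0 * k * t)
        return Q * mu / (4.0 * np.pi * k * b) * exp1(u)

    # Illustrative (assumed) values: roughly 1 Mt CO2/yr expressed as a
    # brine-equivalent volumetric rate, injected into a 100 m thick, 100 mD sandstone.
    Q, mu, k, b = 0.04, 5e-4, 1e-13, 100.0
    phi, ct = 0.2, 1e-9
    for r_km in (1, 10, 50, 100):
        dp = pressure_buildup(r_km * 1e3, 10 * 365.25 * 86400, Q, mu, k, b, phi, ct)
        print(f"r = {r_km:>3d} km : delta-p ~ {dp / 1e5:.2f} bar after 10 years")
    ```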

  8. The large-scale process of microbial carbonate precipitation for nickel remediation from an industrial soil.

    Science.gov (United States)

    Zhu, Xuejiao; Li, Weila; Zhan, Lu; Huang, Minsheng; Zhang, Qiuzhuo; Achal, Varenyam

    2016-12-01

    Microbial carbonate precipitation is known as an efficient process for the remediation of heavy metals from contaminated soils. In the present study, a urease-positive bacterial isolate, identified as Bacillus cereus NS4 through 16S rDNA sequencing, was utilized on a large scale to remove nickel from industrial soil contaminated by the battery industry. The soil was highly contaminated, with an initial total nickel concentration of approximately 900 mg kg⁻¹. The soluble-exchangeable fraction was reduced to 38 mg kg⁻¹ after treatment. The primary objective of metal stabilization was achieved by reducing bioavailability through immobilizing the nickel in the urease-driven carbonate precipitation. The nickel removal in the soils was attributed to the transformation of nickel from mobile species into stable biominerals identified as calcite, vaterite, aragonite and nickelous carbonate when analyzed under XRD. It was shown that during precipitation of calcite, Ni²⁺, with an ionic radius close to that of Ca²⁺, was incorporated into the CaCO₃ crystal. The biominerals were also characterized using SEM-EDS to observe the crystal shape and Raman-FTIR spectroscopy to identify the bonding responsible for Ni immobilization during bioremediation. The electronic structure and chemical-state information of the detected elements during the microbial carbonate precipitation (MICP) bioremediation process was studied by XPS. This is the first study in which microbial carbonate precipitation was used for the large-scale remediation of metal-contaminated industrial soil.

  9. Plasma separation process facility for large-scale stable isotope production

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, T.S.; Collins, E.D.; Tracy, J.G. [Oak Ridge National Lab., TN (United States)

    1997-12-01

    A facility for large-scale separation of stable isotopes using the plasma separation process (PSP) is under development at the Oak Ridge National Laboratory. The PSP is capable of separating isotopes at a large throughput rate with medium purity product and at relatively low cost. The PSP has a number of convenient features that make it an attractive technology for general isotope separation purposes. Several isotopes for medical and industrial applications, including ¹⁰²Pd, ⁹⁸Mo, ²⁰³Tl, ¹⁸⁴W, and others, are expected to be processed in this facility. The large throughput and low processing cost of the PSP will likely lead to new applications for stable isotopes. A description of this facility and its typical throughput capability is presented here.

  10. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    Science.gov (United States)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.

  11. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many existing methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  12. Field Geology/Processes

    Science.gov (United States)

    Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert

    1996-01-01

    The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content, surface vision, mobility, and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples into contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyperspectral imaging, multispectral microscopy, artificial intelligence in support of imaging, X-ray diffraction, X-ray fluorescence, and rock chipping.

  13. Statistical mechanics of neocortical interactions: Large-scale EEG influences on molecular processes.

    Science.gov (United States)

    Ingber, Lester

    2016-04-21

    Calculations further support the premise that large-scale synchronous firings of neurons may affect molecular processes. The context is scalp electroencephalography (EEG) during short-term memory (STM) tasks. The mechanism considered is Π = p + qA (SI units) coupling, where p is the momentum of free Ca²⁺ waves, q the charge of Ca²⁺ in units of the electron charge, and A the magnetic vector potential of current I from neuronal minicolumnar firings considered as wires, giving rise to EEG. Data have been processed using multiple graphs to identify sections of data to which spline-Laplacian transformations are applied, in order to fit the statistical mechanics of neocortical interactions (SMNI) model to EEG data, sensitive to synaptic interactions subject to modification by Ca²⁺ waves.

  14. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    Science.gov (United States)

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
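
    The record above describes Matlab-based GPU acceleration. As a loose, hypothetical analogue in Python (not the authors' code), the sketch below times the same dense matrix computation on the CPU with NumPy and on the GPU with CuPy, which is the kind of CPU-versus-GPU benchmark the study reports; it assumes a CUDA-capable GPU and the cupy package are available.

    ```python
    import time
    import numpy as np
    import cupy as cp  # NumPy-compatible GPU array library (assumed installed)

    def bench_cpu(a, b, repeats=10):
        start = time.perf_counter()
        for _ in range(repeats):
            np.matmul(a, b)
        return (time.perf_counter() - start) / repeats

    def bench_gpu(a, b, repeats=10):
        a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
        cp.cuda.Stream.null.synchronize()      # finish host-to-device copies
        start = time.perf_counter()
        for _ in range(repeats):
            cp.matmul(a_gpu, b_gpu)
        cp.cuda.Stream.null.synchronize()      # wait for GPU kernels to finish
        return (time.perf_counter() - start) / repeats

    if __name__ == "__main__":
        a = np.random.rand(4096, 4096).astype(np.float32)
        b = np.random.rand(4096, 4096).astype(np.float32)
        t_cpu, t_gpu = bench_cpu(a, b), bench_gpu(a, b)
        print(f"CPU: {t_cpu:.3f} s  GPU: {t_gpu:.3f} s  speedup: {t_cpu / t_gpu:.1f}x")
    ```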

  15. Toward the Development and Deployment of Large-Scale Carbon Dioxide Capture and Conversion Processes

    DEFF Research Database (Denmark)

    Yuan, Zhihong; Eden, Mario R.; Gani, Rafiqul

    2016-01-01

    In light of the depletion of fossil fuels and the increased daily requirements for liquid fuels and chemicals, CO2 should indeed be regarded as a valuable C1 feedstock for sustainable manufacturing of liquid fuels and chemicals. Development and deployment of CO2 capture and chemical conversion processes are among the grand challenges faced by today's scientists and engineers. Very few of the reported CO2 capture and conversion technologies have been employed for industrial installations on a large scale, where high efficiency, cost/energy effectiveness, and environmental friendliness are three key factors. The CO2 capture technologies from stationary sources and ambient air based on solvents, solid sorbents, and membranes are discussed first. Transforming CO2 to liquid fuels and chemicals, which are presently produced from petroleum, through thermochemical, electrochemical ...

  16. Integrated Technologies for Large-Scale Trapped-Ion Quantum Information Processing

    Science.gov (United States)

    Sorace-Agaskar, C.; Bramhavar, S.; Kharas, D.; Mehta, K. K.; Loh, W.; Panock, R.; Bruzewicz, C. D.; McConnell, R.; Ram, R. J.; Sage, J. M.; Chiaverini, J.

    2016-05-01

    Atomic ions trapped and controlled using electromagnetic fields hold great promise for practical quantum information processing due to their inherent coherence properties and controllability. However, to realize this promise, the ability to maintain and manipulate large-scale systems is required. We present progress toward the development of, and proof-of-principle demonstrations and characterization of, several technologies that can be integrated with ion-trap arrays on-chip to enable such scaling to practically useful sizes. Of particular use are integrated photonic elements for routing and focusing light throughout a chip without the need for free-space optics. The integration of CMOS electronics and photo-detectors for on-chip control and readout, and methods for monolithic fabrication and wafer-scale integration to incorporate these capabilities into tile-able 2D ion-trap array cells, are also explored.

  17. Validation of the process control system of an automated large scale manufacturing plant.

    Science.gov (United States)

    Neuhaus, H; Kremers, H; Karrer, T; Traut, R H

    1998-02-01

    The validation procedure for the process control system of a plant for the large-scale production of human albumin from plasma fractions is described. A validation master plan is developed, defining the system and elements to be validated, the interfaces with other systems together with the validation limits, a general validation concept, and supporting documentation. Based on this master plan, the validation protocols are developed. For the validation, the system is subdivided into a field level, which is the equipment part, and an automation level. The automation level is further subdivided into sections according to the different software modules. Based on a risk categorization of the modules, the qualification activities are defined. The test scripts for the different qualification levels (installation, operational and performance qualification) are developed according to a previously performed risk analysis.

  18. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo

    2014-04-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench-scale (high δT) and large-scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module, aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m²·h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large-scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench-scale (high δT) operating conditions.

  19. A Natural Language Processing Tool for Large-Scale Data Extraction from Echocardiography Reports

    Science.gov (United States)

    Jonnalagadda, Siddhartha R.

    2016-01-01

    Large volumes of data are continuously generated from clinical notes and diagnostic studies catalogued in electronic health records (EHRs). Echocardiography is one of the most commonly ordered diagnostic tests in cardiology. This study sought to explore the feasibility and reliability of using natural language processing (NLP) for large-scale and targeted extraction of multiple data elements from echocardiography reports. An NLP tool, EchoInfer, was developed to automatically extract data pertaining to cardiovascular structure and function from heterogeneously formatted echocardiographic data sources. EchoInfer was applied to echocardiography reports (2004 to 2013) available from 3 different on-going clinical research projects. EchoInfer analyzed 15,116 echocardiography reports from 1684 patients, and extracted 59 quantitative and 21 qualitative data elements per report. EchoInfer achieved a precision of 94.06%, a recall of 92.21%, and an F1-score of 93.12% across all 80 data elements in 50 reports. Physician review of 400 reports demonstrated that EchoInfer achieved a recall of 92–99.9% and a precision of >97% in four data elements, including three quantitative and one qualitative data element. Failure of EchoInfer to correctly identify or reject reported parameters was primarily related to non-standardized reporting of echocardiography data. EchoInfer provides a powerful and reliable NLP-based approach for the large-scale, targeted extraction of information from heterogeneous data sources. The use of EchoInfer may have implications for the clinical management and research analysis of patients undergoing echocardiographic evaluation. PMID:27124000
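
    As a quick consistency check (added here, not part of the original abstract), the reported overall F1-score follows from the stated precision P = 94.06% and recall R = 92.21% as their harmonic mean:

    \[
    F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.9406 \times 0.9221}{0.9406 + 0.9221} \approx 0.9312,
    \]

    in agreement with the 93.12% reported across all 80 data elements.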

  20. Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm.

    Science.gov (United States)

    Alluri, Vinoo; Toiviainen, Petri; Jääskeläinen, Iiro P; Glerean, Enrico; Sams, Mikko; Brattico, Elvira

    2012-02-15

    We investigated the neural underpinnings of timbral, tonal, and rhythmic features of a naturalistic musical stimulus. Participants were scanned with functional Magnetic Resonance Imaging (fMRI) while listening to a stimulus with a rich musical structure, a modern tango. We correlated temporal evolutions of timbral, tonal, and rhythmic features of the stimulus, extracted using acoustic feature extraction procedures, with the fMRI time series. Results corroborate those obtained with controlled stimuli in previous studies and highlight additional areas recruited during musical feature processing. While timbral feature processing was associated with activations in cognitive areas of the cerebellum, and sensory and default mode network cerebrocortical areas, musical pulse and tonality processing recruited cortical and subcortical cognitive, motor and emotion-related circuits. In sum, by combining neuroimaging, acoustic feature extraction and behavioral methods, we revealed the large-scale cognitive, motor and limbic brain circuitry dedicated to acoustic feature processing during listening to a naturalistic stimulus. In addition to these novel findings, our study has practical relevance as it provides a powerful means to localize neural processing of individual acoustical features, be it those of music, speech, or soundscapes, in ecological settings.

  1. Large scale behaviour of the spatial Lambda-Fleming-Viot process

    CERN Document Server

    Berestycki, N; Veber, A

    2011-01-01

    We consider the spatial Lambda-Fleming-Viot process model for frequencies of genetic types in a population living in R^d, in the special case in which there are just two types of individual, labelled 0 and 1. At time zero, everyone in the half-space consisting of points whose first coordinate is non-positive is type 1, whereas everyone in the complementary half-space is of type 0. We are concerned with patterns of frequencies of the two types at large space and time scales. We consider two cases, one in which the dynamics of the process are driven by purely `local' events and one incorporating large-scale extinction recolonisation events. We choose the frequency of these events in such a way that, under a suitable rescaling of space and time, the ancestry of a single individual in the population converges to a symmetric stable process of index alpha in (1,2] (with alpha=2 corresponding to Brownian motion). We consider the behaviour of the process of allele frequencies under the same space and time rescaling. ...

  2. Criticality in large-scale brain FMRI dynamics unveiled by a novel point process analysis.

    Science.gov (United States)

    Tagliazucchi, Enzo; Balenzuela, Pablo; Fraiman, Daniel; Chialvo, Dante R

    2012-01-01

    Functional magnetic resonance imaging (fMRI) techniques have contributed significantly to our understanding of brain function. Current methods are based on the analysis of gradual and continuous changes in the brain blood oxygenation level dependent (BOLD) signal. Departing from that approach, recent work has shown that equivalent results can be obtained by inspecting only the relatively large-amplitude BOLD signal peaks, suggesting that relevant information can be condensed into discrete events. This idea is explored further here to demonstrate how brain dynamics at resting state can be captured just by the timing and location of such events, i.e., in terms of a spatiotemporal point process. The method allows, for the first time, a theoretical framework to be defined in terms of an order and a control parameter derived from fMRI data, where the dynamical regime can be interpreted as one corresponding to a system close to the critical point of a second-order phase transition. The analysis demonstrates that the resting brain spends most of the time near the critical point of such a transition and exhibits avalanches of activity ruled by the same dynamical and statistical properties described previously for neuronal events at smaller scales. Given the demonstrated functional relevance of the resting-state brain dynamics, its representation as a discrete process might facilitate large-scale analysis of brain function both in health and disease.
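
    A minimal sketch of the peak-based reduction described above (an illustration of the general idea, not the authors' pipeline): each voxel's BOLD time series is z-scored and the times of supra-threshold peaks are kept, turning the continuous signal into a spatiotemporal point process.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def bold_point_process(bold, threshold=1.0):
        """Reduce a (voxels x timepoints) BOLD array to a list of
        (voxel_index, peak_time) events exceeding `threshold` standard deviations."""
        z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
        events = []
        for voxel, series in enumerate(z):
            peaks, _ = find_peaks(series, height=threshold)  # local maxima above threshold
            events.extend((voxel, int(t)) for t in peaks)
        return events

    # Toy example with synthetic data (10 voxels, 200 timepoints)
    rng = np.random.default_rng(0)
    bold = rng.normal(size=(10, 200))
    print(len(bold_point_process(bold)), "events extracted")
    ```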

  3. Biologically inspired large scale chemical sensor arrays and embedded data processing

    Science.gov (United States)

    Marco, S.; Gutiérrez-Gálvez, A.; Lansner, A.; Martinez, D.; Rospars, J. P.; Beccherelli, R.; Perera, A.; Pearce, T.; Vershure, P.; Persaud, K.

    2013-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy and efficient combinatorial coding with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. The EU-funded project NEUROCHEM (Bio-ICT-FET-216916) has developed novel computing paradigms and biologically motivated artefacts for chemical sensing, taking inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built featuring a large-scale sensor array (65K elements) in conducting polymer technology mimicking the olfactory receptor neuron layer, and abstracted biomimetic algorithms have been implemented in an embedded system that interfaces the chemical sensors. The embedded system integrates computational models of the main anatomic building blocks in the olfactory pathway: the olfactory bulb and olfactory cortex in vertebrates (alternatively, the antennal lobe and mushroom bodies in insects). For implementation in the embedded processor, an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions. Finally, the algorithmic models are tested with an odour robot with navigation capabilities in mixed chemical plumes.

  4. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    Science.gov (United States)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, depending on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfer of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and erosional efficiency, ranging from no erosion to very efficient erosion. We discuss how fast, when and how the topography of the continents evolves and how it can be compared to actual passive-margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, transfers of mass at the surface affect the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  5. Groundwater in the Earth's critical zone: Relevance to large-scale patterns and processes

    Science.gov (United States)

    Fan, Ying

    2015-05-01

    Although we have an intuitive understanding of the behavior and functions of groundwater in the Earth's critical zone at the scales of a column (atmosphere-plant-soil-bedrock), along a toposequence (ridge to valley), and across a small catchment (up to third-order streams), this paper attempts to assess the relevance of groundwater to understanding large-scale patterns and processes such as those represented in global climate and Earth system models. Through observation syntheses and conceptual models, evidence is presented that groundwater influence is globally prevalent, that it forms an environmental gradient not fully captured by the climate, and that it can profoundly shape critical zone evolution at continental to global scales. Four examples are used to illustrate these ideas: (1) groundwater as a water source for plants in rainless periods, (2) water table depth as a driver of plant rooting depth, (3) the accessibility of groundwater as an ecological niche separator, and (4) groundwater as the lower boundary of land drainage and a global driver of wetlands. The implications for understanding past and future global environmental change are briefly discussed, as well as critical discipline, scale, and data gaps that must be bridged in order for us to translate what we learn in the field at column, hillslope and catchment scales to what we must predict at regional, continental, and global scales.

  6. Formal Reduction of Interfaces to Large-scale Process Control Systems

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A formal methodology is proposed to reduce the amount of information displayed to remote human operators at interfaces to large-scale process control plants of a certain type. The reduction proceeds in two stages. In the first stage, minimal reduced subsets of components, which give full information about the state of the whole system, are generated by determining functional dependencies between components. This is achieved by using a temporal logic proof obligation to check whether the state of all components can be inferred from the state of components in a subset in specified situations that the human operator needs to detect, with respect to a finite state machine model of the system and other human operator behavior. Generation of reduced subsets is automated with the help of a temporal logic model checker. The second stage determines the interconnections between components to be displayed in the reduced system so that the natural overall graphical structure of the system is maintained. A formal definition of an aesthetic for the required subgraph of a graph representation of the full system, containing the reduced subset of components, is given for this purpose. The methodology is demonstrated by a case study.

  7. A Large-Scale Circuit Mechanism for Hierarchical Dynamical Processing in the Primate Cortex.

    Science.gov (United States)

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-10-21

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for "temporal receptive windows" that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or electroencephalography/magnetoencephalography) by taking into account inter-areal heterogeneity.

  8. An architecture for distributed real-time large-scale information processing for intelligence analysis

    Science.gov (United States)

    Santos, Eugene, Jr.; Santos, Eunice E.; Santos, Eugene S.

    2004-04-01

    Given a massive and dynamic space of information (nuggets) and a query to be answered, how can the correct (answer) nuggets be retrieved in an effective and efficient manner? We present a large-scale distributed real-time architecture based on anytime intelligent foraging, gathering, and matching (I-FGM) on massive and dynamic information spaces. Simply put, we envision that when given a search query, large numbers of computational processes are alerted or activated in parallel to begin identifying and retrieving the appropriate information nuggets. In particular, our approach aims to provide an anytime capability which functions as follows: given finite computational resources, I-FGM will proceed to explore the information space and, over time, continuously identify and update promising candidate nuggets; thus, good candidates will be available at any time on request. Given the computational costs of evaluating the relevance of a candidate nugget, the anytime nature of I-FGM will provide increasing confidence in nugget selections over time by providing admissible partial evaluations. When a new promising candidate is identified, the current set of selected nuggets is re-evaluated and updated appropriately. Essentially, I-FGM will guide its finite computational resources in locating the target information nuggets quickly and iteratively over time. In addition, the goal of I-FGM is to naturally handle new nuggets as they appear. A central element of our framework is to provide a formal computational model of this massive data-intensive problem.

  9. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wagoner, J; Myers, S

    2008-04-09

    We have constructed a regional 3D geologic model of the southern Great Basin, in support of a seismic wave propagation investigation of the 1993 Nonproliferation Experiment (NPE) at the Nevada Test Site (NTS). The model is centered on the NPE and spans longitude -119.5° to -112.6° and latitude 34.5° to 39.8°; the depth ranges from the topographic surface to 150 km below sea level. The model includes the southern half of Nevada, as well as parts of eastern California, western Utah, and a portion of northwestern Arizona. The upper crust is constrained by both geologic and geophysical studies, while the lower crust and upper mantle are constrained by geophysical studies. The mapped upper crustal geologic units are Quaternary basin fill, Tertiary deposits, pre-Tertiary deposits, intrusive rocks of all ages, and calderas. The lower crust and upper mantle are parameterized with 5 layers, including the Moho. Detailed geologic data, including surface maps, borehole data, and geophysical surveys, were used to define the geology at the NTS. Digital geologic outcrop data were available for both Nevada and Arizona, whereas geologic maps for California and Utah were scanned and hand-digitized. Published gravity data (2 km spacing) were used to determine the thickness of the Cenozoic deposits and thus estimate the depth of the basins. The free surface is based on a 10 m lateral resolution DEM at the NTS and a 90 m lateral resolution DEM elsewhere. Variations in crustal thickness are based on receiver function analysis and a framework compilation of reflection/refraction studies. We used Earthvision (Dynamic Graphics, Inc.) to integrate the geologic and geophysical information into a model of x,y,z,p nodes, where p is a unique integer index value representing the geologic unit. For seismic studies, the geologic units are mapped to specific seismic velocities. The gross geophysical structure of the crust and upper mantle is taken from regional surface
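
    The record describes mapping each node's geologic-unit index p to seismic velocities. A minimal, hypothetical sketch of that lookup step is shown below; the unit names and velocity values are invented for illustration and are not taken from the cited model.

    ```python
    import numpy as np

    # Hypothetical unit indices and P-wave velocities (km/s); the actual model
    # uses units and values defined in the cited study, not these numbers.
    VELOCITY_BY_UNIT = {
        1: 2.0,   # Quaternary basin fill
        2: 3.5,   # Tertiary deposits
        3: 5.5,   # pre-Tertiary deposits
        4: 6.0,   # intrusive rocks
        5: 4.5,   # caldera fill
    }

    def units_to_velocity(unit_index_grid):
        """Map a 3D array of integer geologic-unit indices to P-wave velocities."""
        vp = np.zeros(unit_index_grid.shape, dtype=float)
        for unit, velocity in VELOCITY_BY_UNIT.items():
            vp[unit_index_grid == unit] = velocity
        return vp

    # Toy 3D grid of unit indices (nx, ny, nz)
    grid = np.random.default_rng(1).integers(1, 6, size=(4, 4, 4))
    print(units_to_velocity(grid)[0, 0])
    ```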

  10. Conditionally Averaged Large-Scale Motions in the Neutral Atmospheric Boundary Layer: Insights for Aeolian Processes

    Science.gov (United States)

    Jacob, Chinthaka; Anderson, William

    2016-06-01

    Aeolian erosion of flat, arid landscapes is induced (and sustained) by the aerodynamic surface stress imposed by flow in the atmospheric surface layer. Conceptual models typically indicate that sediment mass flux, Q (via saltation or drift), scales with imposed aerodynamic stress raised to some exponent, n, where n > 1 . This scaling demonstrates the importance of turbulent fluctuations in driving aeolian processes. In order to illustrate the importance of surface-stress intermittency in aeolian processes, and to elucidate the role of turbulence, conditional averaging predicated on aerodynamic surface stress has been used within large-eddy simulation of atmospheric boundary-layer flow over an arid, flat landscape. The conditional-sampling thresholds are defined based on probability distribution functions of surface stress. The simulations have been performed for a computational domain with ≈ 25 H streamwise extent, where H is the prescribed depth of the neutrally-stratified boundary layer. Thus, the full hierarchy of spatial scales are captured, from surface-layer turbulence to large- and very-large-scale outer-layer coherent motions. Spectrograms are used to support this argument, and also to illustrate how turbulent energy is distributed across wavelengths with elevation. Conditional averaging provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Results indicate that surface-stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. Fluid in the interfacial shear layers between these adjacent quasi-uniform momentum regions exhibits high streamwise and vertical vorticity.
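
    A minimal numpy sketch of the conditional-averaging idea described above (illustrative only; the study applies it to large-eddy-simulation fields): time instants at which the surface stress exceeds a threshold taken from its probability distribution are selected, and the velocity field is ensemble-averaged over those instants.

    ```python
    import numpy as np

    def conditional_average(field, surface_stress, percentile=95.0):
        """Ensemble-average `field` (shape: nz x nt) over the time instants at
        which `surface_stress` (shape: nt) exceeds the given percentile of its
        probability distribution, mimicking stress-conditioned sampling."""
        threshold = np.percentile(surface_stress, percentile)
        events = surface_stress > threshold
        return field[:, events].mean(axis=1), events.sum()

    # Toy example: synthetic streamwise velocity profiles and an intermittent stress signal
    rng = np.random.default_rng(2)
    nt, nz = 5000, 64
    tau = rng.gamma(shape=2.0, scale=0.05, size=nt)
    u = rng.normal(size=(nz, nt)) + np.linspace(0, 5, nz)[:, None]
    u_cond, n_events = conditional_average(u, tau)
    print(f"averaged over {n_events} high-stress events; u_cond[0] = {u_cond[0]:.2f}")
    ```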

  11. Conditionally Averaged Large-Scale Motions in the Neutral Atmospheric Boundary Layer: Insights for Aeolian Processes

    Science.gov (United States)

    Jacob, Chinthaka; Anderson, William

    2017-01-01

    Aeolian erosion of flat, arid landscapes is induced (and sustained) by the aerodynamic surface stress imposed by flow in the atmospheric surface layer. Conceptual models typically indicate that sediment mass flux, Q (via saltation or drift), scales with imposed aerodynamic stress raised to some exponent, n, where n > 1. This scaling demonstrates the importance of turbulent fluctuations in driving aeolian processes. In order to illustrate the importance of surface-stress intermittency in aeolian processes, and to elucidate the role of turbulence, conditional averaging predicated on aerodynamic surface stress has been used within large-eddy simulation of atmospheric boundary-layer flow over an arid, flat landscape. The conditional-sampling thresholds are defined based on probability distribution functions of surface stress. The simulations have been performed for a computational domain with ≈ 25 H streamwise extent, where H is the prescribed depth of the neutrally-stratified boundary layer. Thus, the full hierarchy of spatial scales are captured, from surface-layer turbulence to large- and very-large-scale outer-layer coherent motions. Spectrograms are used to support this argument, and also to illustrate how turbulent energy is distributed across wavelengths with elevation. Conditional averaging provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Results indicate that surface-stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. Fluid in the interfacial shear layers between these adjacent quasi-uniform momentum regions exhibits high streamwise and vertical vorticity.

  12. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    Energy Technology Data Exchange (ETDEWEB)

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  13. Fine-Scale Relief in the Amazon Drives Large Scale Ecohydrological Processes

    Science.gov (United States)

    Nobre, A. D.; Cuartas, A.; Hodnett, M.; Saleska, S. R.

    2014-12-01

    Access to soil water by roots is a key ecophysiological factor for plant productivity in natural systems. Periodically during dry seasons or critically during episodic climate droughts, shortage of water supply can reduce or severely impair plant life. At the other extreme persistent soil waterlogging will limit root respiration and restrict local establishment to adapted species, usually leading to stunted and less productive communities. Soil-water availability is therefore a very important climate variable controlling plant physiology and ecosystem dynamics. Terra-firme, the non-seasonally floodable terrain that covers 82% of the landscape in Amazonia,[1] supports the most massive part of the rainforest ecosystem. The availability of soil water data for terra-firme is scant and very coarse. This lack of data has hampered observational and modeling studies aiming to develop a large-scale integrative ecohydrological picture of Amazonia and its vulnerability to climate change. We have mapped the Amazon basin with a new terrain model developed in our group (HAND, Height Above the Nearest drainage[2]), delineating soil water environments using topographical data from the SRTM digital elevation model (250 m horizontal interpolated resolution). The preliminary results show that more than 50% of Terra-firme has the water table very close to the surface (up to 2 m deep), while the remainder of the upland landscape has variable degree of dependence on non-saturated soil (vadose layer). The mapping also shows extremely heterogeneous patterns of fine-scale relief across the basin, which implies complex ecohydrological regional forcing on the forest physiology. Ecoclimate studies should therefore take into account fine-scale relief and its implications for soil-water availability to plant processes. [1] Melack, J. M., & Hess, L. L. (2011). Remote sensing of the distribution and extent of wetlands in the Amazon basin. In W. J. Junk & M. Piedade (Eds.), Amazonian floodplain
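
    As a rough, hypothetical sketch of the terrain-normalization idea behind HAND (Height Above the Nearest Drainage): each cell's elevation is referenced to the elevation of its nearest drainage cell. The real HAND algorithm follows flow paths derived from the DEM; the simplified version below uses Euclidean-nearest drainage cells only, so it is an approximation for illustration rather than the published method.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def hand_approx(dem, drainage_mask):
        """Approximate Height Above the Nearest Drainage for a DEM (2D array, m)
        given a boolean mask of drainage (channel) cells. Uses Euclidean-nearest
        drainage cells rather than flow-path tracing, so results are indicative only."""
        # Indices of the nearest drainage cell for every grid cell
        _, (rows, cols) = distance_transform_edt(~drainage_mask, return_indices=True)
        nearest_drainage_elevation = dem[rows, cols]
        return dem - nearest_drainage_elevation

    # Toy example: a tilted plane with a channel along the left edge
    dem = np.add.outer(np.zeros(50), np.linspace(0.0, 25.0, 50))
    drainage = np.zeros_like(dem, dtype=bool)
    drainage[:, 0] = True
    print(hand_approx(dem, drainage).max())  # ~25 m at the far side of the slope
    ```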

  14. Large-Scale Elongated Gas Blowouts, Offshore Virginia/North Carolina: Process and Product

    Science.gov (United States)

    Hill, J. C.; Driscoll, N. W.; Weissel, J. K.; Goff, J. A.

    2002-12-01

    A shipboard program conducted in May 2000 onboard the R/V Hatteras provided major new insight into the origin of the enigmatic "crack"-like features arranged in en-echelon fashion along a 40 km-long stretch of the outermost shelf off Virginia and North Carolina. High-resolution side-scan backscatter and chirp subbottom reflection data show the features are not simple normal faults, but appear to be large-scale excavations or craters resulting from massive expulsion of gas through the seafloor. Visualization of the dip- and strike-line sonar mosaics in three dimensions, along with co-registered seismic data, has improved the spatial resolution of the features and reveals a strong correlation between trapped gas and internally deformed, deltaic strata that drape the shelf edge. The geometry of the blowouts and their location along the outer shelf suggests a composite formation of the pockmark features, combining gas accumulation, down-slope creep of the deltaic strata and fluid expulsion. Shallow gas accumulation is seen clearly in the chirp profiles as bright, high amplitude reflections, obscuring any deeper reflectors. The gas is trapped beneath a thin veneer (few tens of meters) of stratified sediment draped across the outermost shelf/upper slope, which is interpreted as a shelf-edge delta, probably deposited since the last glacial maximum (LGM). The chirp data show clear evidence of internal deformation of the shelf-edge delta, including thickening and thinning of chaotic or transparent layers, segmentation and rotation of superjacent sections, and homoclinal contacts that emphasize contrasts between transparent and reflective intervals. The observed stratigraphic disturbance, most likely a result of downslope creep processes, is interpreted to create permeable pathways for upslope/updip gas migration and eventual expulsion. In summary, the new data 1) show that gas expulsion occurred after the deposition of a seaward-dipping wedge of sediments, which we suggest

  15. On Continuous Magnetically Enhanced Centrifugation in Large Scale Downstream Processing of Bioproducts

    OpenAIRE

    Lindner, Johannes

    2014-01-01

    This thesis targets the technical use of Magnetically Enhanced Centrifugation (MEC). The aim is to understand the mechanisms of particle transport out of the magnetic field through simulation of the phenomena, and to realize MEC at a large scale. Industrial-scale machines for batch-wise and continuous discharge were tested. The use of synthetic magnetic particles with a functionalized surface allows the separation of non-magnetic matter.

  16. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    Science.gov (United States)

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, the design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
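
    A minimal toy sketch of the small-scale response-surface-with-reliability step described above (illustrative only; the study uses multivariate spline interpolation, self-organizing-map clustering, and Bayesian correction with large-scale batches, none of which are reproduced here): a quadratic surface is fitted to a small DoE and bootstrap resampling gives a rough reliability band for a predicted response.

    ```python
    import numpy as np

    def fit_quadratic_surface(x1, x2, y):
        """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def bootstrap_prediction(x1, x2, y, point, n_boot=1000, seed=0):
        """Bootstrap-resample the DoE runs and return mean and std of the
        predicted response at `point` = (x1*, x2*) as a crude reliability measure."""
        rng = np.random.default_rng(seed)
        p1, p2 = point
        basis = np.array([1.0, p1, p2, p1**2, p2**2, p1 * p2])
        preds = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), size=len(y))
            preds.append(basis @ fit_quadratic_surface(x1[idx], x2[idx], y[idx]))
        preds = np.array(preds)
        return preds.mean(), preds.std()

    # Toy 3x3 design: Froude number (x1) and blending time (x2) vs. a response y
    x1, x2 = np.meshgrid([0.5, 1.0, 1.5], [2.0, 5.0, 8.0])
    x1, x2 = x1.ravel(), x2.ravel()
    y = 80 - 5 * (x1 - 1.0) ** 2 - 0.3 * (x2 - 5.0) ** 2 \
        + np.random.default_rng(1).normal(0, 0.5, 9)
    print(bootstrap_prediction(x1, x2, y, point=(1.0, 5.0)))
    ```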

  17. Curbing variations in packaging process through Six Sigma way in a large-scale food-processing industry

    Science.gov (United States)

    Desai, Darshak A.; Kotadiya, Parth; Makwana, Nikheel; Patel, Sonalinkumar

    2015-08-01

    Indian industries need overall operational excellence for sustainable profitability and growth in the present age of global competitiveness. Among different quality and productivity improvement techniques, Six Sigma has emerged as one of the most effective breakthrough improvement strategies. Though Indian industries are exploring this improvement methodology to their advantage and reaping the benefits, not much has been presented and published regarding the experience of Six Sigma in the food-processing industries. This paper is an effort to exemplify the application of a Six Sigma quality improvement drive to one of the large-scale food-processing sectors in India. The paper discusses the phase-wise implementation of define, measure, analyze, improve, and control (DMAIC) on one of the chronic problems, variations in the weight of milk powder pouches. The paper wraps up with the improvements achieved and the projected bottom-line gain to the unit from application of the Six Sigma methodology.

  18. Characterization of Pliocene and Miocene Formations in the Wilmington Graben, Offshore Los Angeles, for Large-Scale Geologic Storage of CO₂

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Michael [Geomechanics Technologies, Incorporated, Monrovia, CA (United States)

    2014-12-08

    Geomechanics Technologies has completed a detailed characterization study of the Wilmington Graben, offshore Southern California, for large-scale CO₂ storage. This effort has included an evaluation of existing wells in both State and Federal waters, field acquisition of about 175 km (109 mi) of new seismic data, new well drilling, and development of integrated 3D geologic, geomechanical, and fluid flow models for the area. The geologic analysis indicates that more than 796 MMt of storage capacity is available within the Pliocene and Miocene formations in the Graben for midrange geologic estimates (P50). Geomechanical analyses indicate that injection can be conducted without significant risk of surface deformation, induced stresses or fault activation. Numerical analysis of fluid migration indicates that injection into the Pliocene Formation at depths of 1525 m (5000 ft) would lead to undesirable vertical migration of the CO₂ plume. Recent well drilling, however, indicates that deeper sands are present at depths exceeding 2135 m (7000 ft), which could be viable for large-volume storage. For vertical containment, injection would need to be limited to about 250,000 metric tons per year per well, would need to be placed at depths greater than 2135 m (7000 ft), and would need to be placed in new wells located at least 1 mile from any existing offset wells. As a practical matter, this would likely limit storage operations in the Wilmington Graben to about 1 million tons per year or less. A quantitative risk analysis for the Wilmington Graben indicates that such large-scale CO₂ storage in the area would represent higher risk than other similar-size projects in the US and overseas.

  19. Reducing aeration energy consumption in a large-scale membrane bioreactor: Process simulation and engineering application.

    Science.gov (United States)

    Sun, Jianyu; Liang, Peng; Yan, Xiaoxu; Zuo, Kuichang; Xiao, Kang; Xia, Junlin; Qiu, Yong; Wu, Qing; Wu, Shijia; Huang, Xia; Qi, Meng; Wen, Xianghua

    2016-04-15

    Reducing the energy consumption of membrane bioreactors (MBRs) is highly important for their wider application in wastewater treatment engineering. Of particular significance is reducing aeration in aerobic tanks to reduce the overall energy consumption. This study proposed an in situ ammonia-N-based feedback control strategy for aeration in aerobic tanks; this was tested via model simulation and through a large-scale (50,000 m³/d) engineering application. A full-scale MBR model was developed based on the activated sludge model (ASM) and was calibrated to the actual MBR. The aeration control strategy took the form of a two-step cascaded proportional-integral (PI) feedback algorithm. Algorithmic parameters were optimized via model simulation. The strategy achieved real-time adjustment of aeration amounts based on feedback from effluent quality (i.e., ammonia-N). The effectiveness of the strategy was evaluated through both the model platform and the full-scale engineering application. In the former, the aeration flow rate was reduced by 15-20%. In the engineering application, the aeration flow rate was reduced by 20%, and overall specific energy consumption was correspondingly reduced by 4% to 0.45 kWh/m³ of effluent, using the present practice of regulating the angle of guide vanes of fixed-frequency blowers. Potential energy savings are expected to be higher for MBRs with variable-frequency blowers. This study indicated that the ammonia-N-based aeration control strategy holds promise for application in full-scale MBRs.
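
    A minimal sketch of an ammonia-based PI feedback loop of the kind described above (a generic discrete-time PI controller; the gains, limits, and setpoint are illustrative assumptions, not the cascaded parameters tuned in the study):

    ```python
    class AmmoniaAerationPI:
        """Discrete-time PI controller: adjusts the air flow set-point so that
        measured effluent ammonia-N tracks a target concentration."""

        def __init__(self, kp, ki, dt, air_min, air_max, nh4_setpoint):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.air_min, self.air_max = air_min, air_max
            self.nh4_setpoint = nh4_setpoint
            self.integral = 0.0

        def update(self, nh4_measured):
            # Positive error (ammonia above target) should increase aeration.
            error = nh4_measured - self.nh4_setpoint
            self.integral += error * self.dt
            air = self.kp * error + self.ki * self.integral
            # Clamp to blower limits and apply simple anti-windup.
            if air > self.air_max:
                air, self.integral = self.air_max, self.integral - error * self.dt
            elif air < self.air_min:
                air, self.integral = self.air_min, self.integral - error * self.dt
            return air

    # Example use: controller sampled every 5 minutes, targeting 1.5 mg NH4-N/L
    controller = AmmoniaAerationPI(kp=800.0, ki=5.0, dt=300.0, air_min=1000.0,
                                   air_max=8000.0, nh4_setpoint=1.5)
    print(controller.update(nh4_measured=2.3))  # air flow set-point (e.g. Nm3/h)
    ```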

  20. Large-scale production of diesel-like biofuels - process design as an inherent part of microorganism development.

    Science.gov (United States)

    Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M

    2013-06-01

    Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process-scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large-scale operation. The process conditions identified are finally translated into microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions, and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse, and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels.

  1. Large-scale fabrication of In2S3 porous films via one-step hydrothermal process.

    Science.gov (United States)

    Chen, Fei; Deng, Dan; Lei, Yinlin

    2013-10-01

    Large-scale indium sulfide (In2S3) porous films were fabricated via a facile one-step and non-template hydrothermal process using L-cysteine as a capping agent. The impact of reaction conditions such as reaction time, temperature, and capping agent on the synthesis of the In2S3 porous films was studied. The morphology, structure, and phase composition of the In2S3 porous films were characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The formation process and the optical properties of the In2S3 porous films were also evaluated.

  2. Consultancy on Large-Scale Submerged Aerobic Cultivation Process Design - Final Technical Report: February 1, 2016 -- June 30, 2016

    Energy Technology Data Exchange (ETDEWEB)

    Crater, Jason [Genomatica, Inc., San Diego, CA (United States); Galleher, Connor [Genomatica, Inc., San Diego, CA (United States); Lievense, Jeff [Genomatica, Inc., San Diego, CA (United States)

    2017-05-12

    NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.

  3. John Cage's Number Pieces as Stochastic Processes: a Large-Scale Analysis

    CERN Document Server

    Popoff, Alexandre

    2013-01-01

    The Number Pieces are a corpus of works by composer John Cage, which rely on a particular time-structure used for determining the temporal location of sounds, named the "time-bracket". The time-bracket system is an inherently stochastic process, which complicates the analysis of the Number Pieces as it leads to a large number of possibilities in terms of sonic content instead of one particular fixed performance. The purpose of this paper is to propose a statistical approach of the Number Pieces by assimilating them to stochastic processes. Two Number Pieces, "Four" and "Five", are studied here in terms of pitch-class set content: the stochastic processes at hand lead to a collection of random variables indexed over time giving the distribution of the possible pitch-class sets. This approach allows for a static and dynamic analysis of the score encompassing all the possible outcomes during the performance of these works.

  4. The power of event-driven analytics in Large Scale Data Processing

    CERN Document Server

    CERN. Geneva; Marques, Paulo

    2011-01-01

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in...

  5. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    Science.gov (United States)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient at quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies have achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for some specific algorithms. We developed a general-purpose scalable framework coupled with sophisticated data decomposition and parallelization strategies to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
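
    As a rough illustration of the tile-based decomposition idea described above, the sketch below bins (x, y, z) points into fixed-size tiles and processes each tile independently with a local process pool. The real framework distributes tiles over Hadoop; the tile size, point format and per-tile task here are assumptions.

```python
# Hypothetical illustration of tile-based spatial decomposition of LiDAR points:
# points are binned into fixed-size tiles, and each tile is processed
# independently (here with a local process pool rather than Hadoop).
from collections import defaultdict
from multiprocessing import Pool
import numpy as np

def tile_index(points, tile_size=100.0):
    """Group (x, y, z) points by the tile that contains them."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[(int(x // tile_size), int(y // tile_size))].append((x, y, z))
    return tiles

def per_tile_task(item):
    """Example per-tile task: mean elevation of the tile."""
    key, pts = item
    return key, float(np.mean([p[2] for p in pts]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform([0, 0, 100], [1000, 1000, 200], size=(100_000, 3))
    tiles = tile_index(pts)
    with Pool() as pool:
        results = dict(pool.map(per_tile_task, tiles.items()))
    print(len(results), "tiles processed")
```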

  6. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    Science.gov (United States)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
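
    The key idea above, replacing a pixelated FFT with closed-form transforms of layout primitives, can be illustrated with the simplest case: axis-aligned rectangles, whose 2-D Fourier transform is a product of sinc functions. The geometry and frequency grid below are illustrative assumptions, not the paper's GDSII pipeline.

```python
# Minimal sketch of the analytical-FT idea, assuming the mask decomposes into
# axis-aligned rectangles (the simplest shapes with a closed-form transform).
import numpy as np

def rect_ft(u, v, cx, cy, w, h):
    """Closed-form 2-D Fourier transform of a w x h rectangle centred at (cx, cy).

    np.sinc(x) = sin(pi*x)/(pi*x), so the transform is
    w*h*sinc(w*u)*sinc(h*v)*exp(-2*pi*i*(u*cx + v*cy)).
    """
    return (w * h * np.sinc(w * u) * np.sinc(h * v)
            * np.exp(-2j * np.pi * (u * cx + v * cy)))

# A toy "layout": centre (um), width and height of each rectangle.
rects = [(0.0, 0.0, 2.0, 0.5), (3.0, 1.0, 0.5, 4.0)]

# Frequency grid in the pupil (cycles per micron); the range is an assumption.
u, v = np.meshgrid(np.linspace(-2, 2, 512), np.linspace(-2, 2, 512))

F = sum(rect_ft(u, v, *r) for r in rects)   # superposition of closed forms
intensity = np.abs(F) ** 2                  # squared modulus used for the heating profile
```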

  7. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing along with the increased availability of cheap storage have led to the need to elaborate and transform large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done down the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of limited resources. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  8. A Review on Large Scale Graph Processing Using Big Data Based Parallel Programming Models

    Directory of Open Access Journals (Sweden)

    Anuraj Mohan

    2017-02-01

    Full Text Available Processing big graphs has become an increasingly essential activity in various fields like engineering, business intelligence and computer science. Social networks and search engines usually generate large graphs which demand sophisticated techniques for social network analysis and web structure mining. Latest trends in graph processing tend towards using Big Data platforms for parallel graph analytics. MapReduce has emerged as a Big Data based programming model for the processing of massively large datasets. Apache Giraph, an open source implementation of Google Pregel, which is based on the Bulk Synchronous Parallel (BSP) model, is used for graph analytics in social networks like Facebook. The proposed work investigates the algorithmic effects of the MapReduce and BSP models on graph problems. The triangle counting problem in graphs is considered as a benchmark, and evaluations are made on the basis of time of computation on the same cluster, scalability in relation to graph and cluster size, resource utilization and the structure of the graph.
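
    For readers unfamiliar with the benchmark, a single-machine version of triangle counting is sketched below; the paper itself evaluates distributed MapReduce and BSP (Giraph) implementations, so this only shows the underlying counting logic on a toy edge list.

```python
# Single-machine illustration of the triangle-counting benchmark: a triangle is
# closed whenever the two endpoints of an edge share a common neighbour.
from collections import defaultdict

def count_triangles(edges):
    """Count triangles in an undirected graph given as (u, v) edge pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    triangles = 0
    for u, v in edges:
        if u != v:
            triangles += len(adj[u] & adj[v])  # common neighbours close a triangle
    # each triangle is counted once per edge, i.e. three times in total
    return triangles // 3

print(count_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))  # -> 1
```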

  9. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

    Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries, or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that combine seamlessly generic graph algorithms (e.g., PageRank, Shortest Paths, etc.) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as Graphlab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework, can be executed in Spartex. We present various scenarios where our framework simplifies significantly the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as the state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.
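
    To give a flavour of the vertex-centric style that Spartex builds on, here is a toy synchronous PageRank written as repeated supersteps of message passing. This is a generic illustration, not the Spartex API, which the abstract does not detail; the damping factor and iteration count are conventional defaults.

```python
# Toy synchronous vertex-centric PageRank (Pregel-like supersteps) in plain Python.
def pagerank(out_edges, damping=0.85, iterations=20):
    vertices = set(out_edges) | {v for vs in out_edges.values() for v in vs}
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(iterations):                  # one superstep per iteration
        incoming = {v: 0.0 for v in vertices}
        for u, targets in out_edges.items():     # each vertex sends messages along edges
            if targets:
                share = rank[u] / len(targets)
                for v in targets:
                    incoming[v] += share
        rank = {v: (1 - damping) / len(vertices) + damping * incoming[v]
                for v in vertices}
    return rank

print(pagerank({"a": ["b"], "b": ["c"], "c": ["a"]}))
```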

  10. Large-Scale Reactive Atomistic Simulation of Shock-induced Initiation Processes in Energetic Materials

    Science.gov (United States)

    Thompson, Aidan

    2013-06-01

    Initiation in energetic materials is fundamentally dependent on the interaction between a host of complex chemical and mechanical processes, occurring on scales ranging from intramolecular vibrations through molecular crystal plasticity up to hydrodynamic phenomena at the mesoscale. A variety of methods (e.g. quantum electronic structure methods (QM), non-reactive classical molecular dynamics (MD), mesoscopic continuum mechanics) exist to study processes occurring on each of these scales in isolation, but cannot describe how these processes interact with each other. In contrast, the ReaxFF reactive force field, implemented in the LAMMPS parallel MD code, allows us to routinely perform multimillion-atom reactive MD simulations of shock-induced initiation in a variety of energetic materials. This is done either by explicitly driving a shock-wave through the structure (NEMD) or by imposing thermodynamic constraints on the collective dynamics of the simulation cell e.g. using the Multiscale Shock Technique (MSST). These MD simulations allow us to directly observe how energy is transferred from the shockwave into other processes, including intramolecular vibrational modes, plastic deformation of the crystal, and hydrodynamic jetting at interfaces. These processes in turn cause thermal excitation of chemical bonds leading to initial chemical reactions, and ultimately to exothermic formation of product species. Results will be presented on the application of this approach to several important energetic materials, including pentaerythritol tetranitrate (PETN) and ammonium nitrate/fuel oil (ANFO). In both cases, we validate the ReaxFF parameterizations against QM and experimental data. For PETN, we observe initiation occurring via different chemical pathways, depending on the shock direction. For PETN containing spherical voids, we observe enhanced sensitivity due to jetting, void collapse, and hotspot formation, with sensitivity increasing with void size. For ANFO, we

  11. A Simple and Efficient Process for Large Scale Glycerol Oligomerization by Microwave Irradiation

    Directory of Open Access Journals (Sweden)

    Rémi Nguyen

    2017-04-01

    Full Text Available Herein, an optimized method for 100 g scale synthesis of glycerol oligomers using a multimode microwave source and low-priced K2CO3 as catalyst is reported. This method allows the complete conversion of glycerol to its oligomers in only 30 min, yielding molecular weights up to 1000 g mol−1. Furthermore, a simple iterative purification process, involving the precipitation of the crude product in acetone and methanol, affords a final product composed of larger oligomers with a narrow dispersity (D < 1.5).

  12. The next step in real time data processing for large scale physics experiments

    CERN Document Server

    Paramesvaran, Sudarshan

    2016-01-01

    Run 2 of the LHC represents one of the most challenging scientific environments for real time data analysis and processing. The steady increase in instantaneous luminosity will result in the CMS detector producing around 150 TB/s of data, only a small fraction of which is useful for interesting physics studies. During 2015 the CMS collaboration will be completing a total upgrade of its Level 1 Trigger to deal with these conditions. In this talk the major components of this complex system will be described. This will include a discussion of custom-designed electronic processing boards, built to the uTCA specification, with AMC cards based on Xilinx 7 FPGAs and a network of high-speed optical links. In addition, novel algorithms will be described which deliver excellent performance in FPGAs and are combined with highly stable software frameworks to ensure a minimal risk of downtime. This upgrade is planned to take data from 2016. However, a system of parallel running has been developed that will ...

  13. Parallel Processing for Large-scale Fault Tree in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xinyan Wang

    2013-05-01

    Full Text Available Wireless sensor networks (WSNs) span many technologies, including sensors, embedded systems and wireless communication. WSNs differ from traditional networks in size, communication distance and energy constraints, which calls for new topologies, protocols, quality of service (QoS) mechanisms, and so on. To solve the problem of self-organization in the topology, this paper proposes a novel strategy based on the communication delay between sensors. First, the gateway selects some boundary nodes to connect; next, the boundary nodes choose inner nodes, and the remaining nodes connect in the same manner. The result is a net-tree topology with multi-path routing. Analyses of the topology show that the net-tree has a strong self-organizing ability and is extensible. However, such a system is usually very large and complex, so it is hard to detect nodes that have failed. To address this challenge, the paper adopts fault tree analysis, a commonly used method for analyzing the reliability of a network or system. Based on the fault tree analysis, a parallel computing algorithm is presented for these faults in the net-tree. Two models for parallel processing are proposed, with a focus on the parallel processing algorithm based on cut sets, and the speedup ratio is studied. Compared with the serial algorithm, the experimental results show that the efficiency is greatly improved.
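
    The cut-set computation that the paper parallelizes can be illustrated, in serial form, by the standard approximation of the top-event probability from minimal cut sets under an assumption of independent component failures. The cut sets and failure probabilities below are made-up examples, not values from the paper.

```python
# Serial illustration of cut-set based fault-tree evaluation: approximate the
# top-event probability from minimal cut sets, assuming independent failures.
from functools import reduce

def cut_set_prob(cut_set, p_fail):
    """Probability that every basic event in one cut set occurs."""
    return reduce(lambda acc, e: acc * p_fail[e], cut_set, 1.0)

def top_event_prob(cut_sets, p_fail):
    """Upper-bound approximation: 1 - prod(1 - P(cut set))."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cut_set_prob(cs, p_fail)
    return 1.0 - q

p_fail = {"n1": 0.01, "n2": 0.02, "n3": 0.05}    # hypothetical node failure probabilities
cut_sets = [{"n1", "n2"}, {"n3"}]                # hypothetical minimal cut sets
print(top_event_prob(cut_sets, p_fail))          # ~0.0502
```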

  14. Formation and fate of marine snow : small-scale processes with large-scale implications

    DEFF Research Database (Denmark)

    Kiørboe, Thomas

    2001-01-01

    Marine snow aggregates are believed to be the main vehicles for vertical material transport in the ocean. However, aggregates are also sites of elevated heterotrophic activity, which may rather cause enhanced retention of aggregated material in the upper ocean. Small-scale biological-physical interactions govern the formation and fate of marine snow. Aggregates may form by physical coagulation: fluid motion causes collisions between small primary particles (e.g. phytoplankton) that may then stick together to form aggregates with enhanced sinking velocities. Bacteria may subsequently solubilise and remineralise aggregated particles. Because the solubilization rate exceeds the remineralization rate, organic solutes leak out of sinking aggregates, spread by diffusion and advection, and form a chemical trail in the wake of the sinking aggregate that may guide small zooplankters to the aggregate. Also, suspended bacteria may enjoy the elevated concentration of organic solutes in the plume. I explore these small-scale formation and degradation processes by means of models, experiments and field observations. The larger scale implications for the structure and functioning of pelagic food chains of export vs. retention of material will be discussed.

  15. On the self-organizing process of large scale shear flows

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Andrew P. L. [Department of Applied Maths, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Kim, Eun-jin [School of Mathematics and Statistics, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Liu, Han-Li [High Altitude Observatory, National Centre for Atmospheric Research, P. O. BOX 3000, Boulder, Colorado 80303-3000 (United States)

    2013-09-15

    Self organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite correlated temporal forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.

  16. Thinking big: towards ideal strains and processes for large-scale aerobic biofuels production

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, James D. [National Bioenergy Center, National Renewable Energy Laboratory, 15013 Denver West Parkway Golden CO 80401 USA; Beckham, Gregg T. [National Bioenergy Center, National Renewable Energy Laboratory, 15013 Denver West Parkway Golden CO 80401 USA

    2016-12-22

    Global concerns about anthropogenic climate change, energy security and independence, and environmental consequences of continued fossil fuel exploitation are driving significant public and private sector interest and financing to hasten development and deployment of processes to produce renewable fuels, as well as bio-based chemicals and materials, towards scales commensurate with current fossil fuel-based production. Over the past two decades, anaerobic microbial production of ethanol from first-generation hexose sugars derived primarily from sugarcane and starch has reached significant market share worldwide, with fermentation bioreactor sizes often exceeding the million litre scale. More recently, industrial-scale lignocellulosic ethanol plants are emerging that produce ethanol from pentose and hexose sugars using genetically engineered microbes and bioreactor scales similar to first-generation biorefineries.

  17. Lightning Processes And Dynamics Of Large Scale Optical Emissions In Long Delayed Sprites

    Science.gov (United States)

    Li, J.; Cummer, S. A.; Lyons, W. A.; Nelson, T. E.; Hu, W.

    2006-12-01

    Simultaneous measurements of high altitude optical emissions and the magnetic field produced by sprite-associated lightning discharges enable a close examination of the link between low altitude lightning processes and high altitude sprite processes. In this work, we report results of the coordinated analysis of high speed (1000--10000 frames per second) sprite video and wideband (0.1 Hz to 30 kHz) magnetic field measurements made simultaneously at the Yucca Ridge Field Station and Duke University during the June through August 2005 campaign period. We investigate the relationship between lightning charge transfer characteristics and long delayed (>30 ms) sprites after the lightning return stroke. These long delayed sprites initiated after a total vertical charge moment change from a few thousand C km to more than ten thousand C km. Continuing currents provide about 50% to 90% of this total charge transfer, depending on the sprite delay time and the amplitude of the continuing current. Our data also show that intense continuing currents greater than a few kA play an important role in sprites whose primary optical emissions last unusually long (>30 ms). On one observation night (4 July 2005), a large mesoscale convective system produced many sprites that were part of complex transient luminous event (TLE) sequences that included optical emission elements that appear well after any return stroke and initiate at apparently relatively low altitudes (~ 50 km). These low initiation altitude sprite events are typically associated with intense continuing currents and total charge moment changes of 4000 C km or more. With the estimated lightning source current moment waveform, we also employ a 2-D FDTD model to numerically simulate the electric field at different altitudes and compare it with the breakdown field. This reveals the initiation altitude of those long delayed sprites and the effect of the electric field dependence of the electron mobility.

  18. Decadal climate variability in the Mediterranean region: roles of large-scale forcings and regional processes

    Energy Technology Data Exchange (ETDEWEB)

    Mariotti, Annarita [University of Maryland, Earth System Science Interdisciplinary Center, College Park, MD (United States); Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Rome (Italy); Dell' Aquila, Alessandro [Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Rome (Italy)

    2012-03-15

    We analyze decadal climate variability in the Mediterranean region using observational datasets over the period 1850-2009 and a regional climate model simulation for the period 1960-2000, focusing in particular on the winter (DJF) and summer (JJA) seasons. Our results show that decadal variability associated with the winter and summer manifestations of the North Atlantic Oscillation (NAO and SNAO respectively) and the Atlantic Multidecadal Oscillation (AMO) significantly contribute to decadal climate anomalies over the Mediterranean region during these seasons. Over 30% of decadal variance in DJF and JJA precipitation in parts of the Mediterranean region can be explained by NAO and SNAO variability respectively. During JJA, the AMO explains over 30% of regional surface air temperature anomalies and Mediterranean Sea surface temperature anomalies, with significant influence also in the transition seasons. In DJF, only Mediterranean SST still significantly correlates with the AMO while regional surface air temperature does not. Also, there is no significant NAO influence on decadal Mediterranean surface air temperature anomalies during this season. A simulation with the PROTHEUS regional ocean-atmosphere coupled model is utilized to investigate processes determining regional decadal changes during the 1960-2000 period, specifically the wetter and cooler 1971-1985 conditions versus the drier and warmer 1986-2000 conditions. The simulation successfully captures the essence of observed decadal changes. Model set-up suggests that AMO variability is transmitted to the Mediterranean/European region and the Mediterranean Sea via atmospheric processes. Regional feedbacks involving cloud cover and soil moisture changes also appear to contribute to observed changes. If confirmed, the linkage between Mediterranean temperatures and the AMO may imply a certain degree of regional decadal climate predictability. The AMO and other decadal influences outlined here should be

  19. Formation and fate of marine snow: small-scale processes with large- scale implications

    Directory of Open Access Journals (Sweden)

    Thomas Kiørboe

    2001-12-01

    Full Text Available Marine snow aggregates are believed to be the main vehicles for vertical material transport in the ocean. However, aggregates are also sites of elevated heterotrophic activity, which may rather cause enhanced retention of aggregated material in the upper ocean. Small-scale biological-physical interactions govern the formation and fate of marine snow. Aggregates may form by physical coagulation: fluid motion causes collisions between small primary particles (e.g. phytoplankton) that may then stick together to form aggregates with enhanced sinking velocities. Bacteria may subsequently solubilise and remineralise aggregated particles. Because the solubilization rate exceeds the remineralization rate, organic solutes leak out of sinking aggregates. The leaking solutes spread by diffusion and advection and form a chemical trail in the wake of the sinking aggregate that may guide small zooplankters to the aggregate. Also, suspended bacteria may enjoy the elevated concentration of organic solutes in the plume. I explore these small-scale formation and degradation processes by means of models, experiments and field observations. The larger scale implications for the structure and functioning of pelagic food chains of export vs. retention of material will be discussed.

  20. Large-scale analysis of expression signatures reveals hidden links among diverse cellular processes

    Directory of Open Access Journals (Sweden)

    Ge Steven X

    2011-05-01

    Full Text Available Abstract Background Cells must respond to various perturbations using their limited available gene repertoires. In order to study how cells coordinate various responses, we conducted a comprehensive comparison of 1,186 gene expression signatures (gene lists) associated with various genetic and chemical perturbations. Results We identified 7,419 statistically significant overlaps between various published gene lists. Most (80%) of the overlaps can be represented by a highly connected network, a "molecular signature map," that highlights the correlation of various expression signatures. By dissecting this network, we identified sub-networks that define clusters of gene sets related to common biological processes (cell cycle, immune response, etc.). Examination of these sub-networks has confirmed relationships among various pathways and also generated new hypotheses. For example, our result suggests that glutamine deficiency might suppress cellular growth by inhibiting the MYC pathway. Interestingly, we also observed 1,369 significant overlaps between a set of genes upregulated by factor X and a set of genes downregulated by factor Y, suggesting a repressive interaction between X and Y factors. Conclusions Our results suggest that molecular-level responses to diverse chemical and genetic perturbations are heavily interconnected in a modular fashion. Also, shared molecular pathways can be identified by comparing newly defined gene expression signatures with databases of previously published gene expression signatures.

  1. Geological evidence against large-scale pre-Holocene offsets along the Karakoram Fault: Implications for the limited extrusion of the Tibetan plateau

    Science.gov (United States)

    Searle, M. P.

    1996-02-01

    Two end-member models proposed to accommodate the convergence between India and Asia north of the Himalaya are (1) homogeneous crustal thickening of the Tibetan plateau and (2) continental escape, or extrusion, of Tibet and southeast Asia, away from the indenting Indian plate. Foremost among the arguments supporting the latter would be large-scale (˜1000-km) offsets and high present-day slip rates along the major strike-slip faults bounding the postulated extruding crust, notably the Altyn Tagh Fault along the northern margin of Tibet and the Karakoram Fault along the SW margin. Satellite photographic interpretation and field mapping in the Karakoram mountains in Pakistan, the Nubra-Siachen area of north Ladakh, and the Pamirs in Xinjiang show that although the Karakoram Fault is extremely active today, geological offsets along the right-lateral fault are probably less than 120 km. The 21±0.5 Ma Baltoro monzogranite-leucogranite batholith has been rotated clockwise about a vertical axis 35°-40° into NW-SE alignment, parallel with the Karakoram Fault, east of the Siachen glacier, with a maximum offset of 90 km across the fault. The Bangong-Shyok suture zone similarly has a dextral offset of 85 km. The course of the Indus River, which was antecedent to the rise of the Ladakh, Karakoram, and Himalayan ranges, has been offset dextrally by 120 km south of Pangong Lake. If present-day slip rates (approximately 32 mm/yr) (Avouac and Tapponnier, 1993) are correct, only 4 Ma are required to obtain a 120-km offset. There is no geological evidence for any larger-scale pre-Holocene offsets, and it is suggested that the Karakoram Fault cannot have accommodated major eastward lateral motion of Tibetan crust. The fault has also exerted little or no influence on surface topographic uplift, cutting obliquely across the highest peaks of the Karakoram. Dextral motion along the central part of the Karakoram Fault has been transferred in the north to the Rangkul, Murghab, and
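
    The timing argument above can be checked with one line of arithmetic: at the cited slip rate of roughly 32 mm/yr, a 120-km offset accumulates in about 3.75 Myr, consistent with the "only 4 Ma" quoted in the abstract.

```python
# One-line check of the slip-rate arithmetic quoted above (values from the text).
offset_mm = 120 * 1e6            # 120 km expressed in mm
slip_rate = 32.0                 # mm per year (Avouac and Tapponnier, 1993)
print(offset_mm / slip_rate / 1e6, "Myr")   # ~3.75 Myr, i.e. roughly 4 Ma
```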

  2. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  3. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-01

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  4. Processes of Geology

    Science.gov (United States)

    2003-01-01

    Released 16 July 2003. This THEMIS visible image captures a complex process of deposition, burial and exhumation. The crater ejecta in the top of the image is in the form of flow lobes, indicating that the crater was formed in volatile-rich terrain. While a radial pattern can be seen in the ejecta, the pattern is sharper in the lower half of the ejecta. This is because the top half of the ejecta is still buried by a thin layer of sediment. It is most likely that at one time the entire area was covered. Wind erosion, and perhaps water erosion, has started to remove this layer, once again exposing what was present underneath. Image information: VIS instrument. Latitude -34.3, Longitude 181.2 East (178.8 West). 19 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  5. Production Cycle for Large Scale Fission Mo-99 Separation by the Processing of Irradiated LEU Uranium Silicide Fuel Element Targets

    Directory of Open Access Journals (Sweden)

    Abdel-Hadi Ali Sameh

    2013-01-01

    Full Text Available Uranium silicide fuels have proven over decades their exceptional suitability for the operation of higher-flux material testing reactors with LEU elements. The application of such fuels as target materials, particularly for large-scale fission Mo-99 producers, offers an efficient and economical solution for the related facilities. Realizing this aim demands a suitable dissolution process for the U3Si2 compound used. Excellent results are achieved by oxidizing dissolution of the fuel meat in hydrofluoric acid at room temperature. The resulting solution is then added directly to an over-stoichiometric amount of potassium hydroxide solution. Uranium and the bulk of the fission products are precipitated together with the transuranium compounds. The filtrate contains the molybdenum and the soluble fission product species and is further treated similarly to the proven full-scale process. The generated off-gas stream is likewise handled as in earlier practice, after passing through a KOH washing solution. The generated alkaline, fluoride-containing waste solution is noncorrosive. Nevertheless, fluoride can be selectively bound as insoluble CaF2 by adding a mixture of solid calcium hydroxide and calcium carbonate to the sand-cement mixture used for waste solidification. The substantial amounts of LEU remnants generated can be recycled and made into new targets. The related technology minimizes the generated fuel waste, protects the environment, and improves processing economy.

  6. Selective Methane Oxidation Catalyzed by Platinum Salts in Oleum at Turnover Frequencies of Large-Scale Industrial Processes.

    Science.gov (United States)

    Zimmermann, Tobias; Soorholtz, Mario; Bilke, Marius; Schüth, Ferdi

    2016-09-28

    Direct catalytic methane functionalization, a "dream reaction", is typically characterized by relatively low catalyst activities. This also holds for the η²-(2,2'-bipyrimidyl)dichloroplatinum(II) [(bpym)PtCl2, 1] catalyst which oxidizes methane to methyl bisulfate in sulfuric acid. Nevertheless, it is arguably still one of the best systems for the partial oxidation of methane reported so far. Detailed studies of the dependence of activity on the SO3 concentration and the interplay with the solubility of different platinum compounds revealed potassium tetrachloroplatinate (K2PtCl4) as an extremely active, selective, and stable catalyst, reaching turnover frequencies (TOFs) of more than 25,000 h⁻¹ in 20% oleum with selectivities above 98%. The TOFs are more than 3 orders of magnitude higher compared to the original report on (bpym)PtCl2 and easily reach or exceed those realized in commercial industrial processes, such as the Cativa process for the carbonylation of methanol. Also, space-time yields are on the order of large-scale commercialized processes.

  7. Using an Energy Performance Based Design-Build Process to Procure a Large Scale Low-Energy Building: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Pless, S.; Torcellini, P.; Shelton, D.

    2011-05-01

    This paper will review a procurement, acquisition, and contract process of a large-scale replicable net zero energy (ZEB) office building. The owners developed and implemented an energy performance based design-build process to procure a 220,000 ft2 office building with contractual requirements to meet demand side energy and LEED goals. We will outline the key procurement steps needed to ensure achievement of our energy efficiency and ZEB goals. The development of a clear and comprehensive Request for Proposals (RFP) that includes specific and measurable energy use intensity goals is critical to ensure energy goals are met in a cost effective manner. The RFP includes a contractual requirement to meet an absolute demand side energy use requirement of 25 kBtu/ft2, with specific calculation methods on what loads are included, how to normalize the energy goal based on increased space efficiency and data center allocation, specific plug loads and schedules, and calculation details on how to account for energy used from the campus hot and chilled water supply. Additional advantages of integrating energy requirements into this procurement process include leveraging the voluntary incentive program, which is a financial incentive based on how well the owner feels the design-build team is meeting the RFP goals.

  8. Criticality in Large-Scale Brain fMRI Dynamics Unveiled by a Novel Point Process Analysis

    Science.gov (United States)

    Tagliazucchi, Enzo; Balenzuela, Pablo; Fraiman, Daniel; Chialvo, Dante R.

    2012-01-01

    Functional magnetic resonance imaging (fMRI) techniques have contributed significantly to our understanding of brain function. Current methods are based on the analysis of gradual and continuous changes in the brain blood-oxygen-level-dependent (BOLD) signal. Departing from that approach, recent work has shown that equivalent results can be obtained by inspecting only the relatively large amplitude BOLD signal peaks, suggesting that relevant information can be condensed in discrete events. This idea is further explored here to demonstrate how brain dynamics at resting state can be captured just by the timing and location of such events, i.e., in terms of a spatiotemporal point process. The method makes it possible, for the first time, to define a theoretical framework in terms of an order and control parameter derived from fMRI data, where the dynamical regime can be interpreted as one corresponding to a system close to the critical point of a second order phase transition. The analysis demonstrates that the resting brain spends most of the time near the critical point of such a transition and exhibits avalanches of activity ruled by the same dynamical and statistical properties described previously for neuronal events at smaller scales. Given the demonstrated functional relevance of the resting state brain dynamics, its representation as a discrete process might facilitate large-scale analysis of brain function both in health and disease. PMID:22347863
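
    The reduction of a continuous BOLD series to a spatiotemporal point process can be sketched very simply: standardize each signal and keep only upward threshold crossings as event times. The threshold and the synthetic series below are illustrative assumptions, not the paper's processing pipeline.

```python
# Sketch of the point-process idea: reduce a continuous (z-scored) signal to the
# sparse set of times at which it crosses a threshold from below.
import numpy as np

def bold_events(signal, threshold=1.0):
    """Return indices where the standardized signal crosses `threshold` upward."""
    z = (signal - signal.mean()) / signal.std()
    above = z > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings

rng = np.random.default_rng(1)
ts = rng.standard_normal(500).cumsum() * 0.1 + rng.standard_normal(500)  # toy series
print(bold_events(ts)[:10])
```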

  9. Assessment of PAH dissipation processes in large-scale outdoor mesocosms simulating vegetated road-side swales.

    Science.gov (United States)

    Leroy, M C; Legras, M; Marcotte, S; Moncond'huy, V; Machour, N; Le Derf, F; Portet-Koltalo, F

    2015-07-01

    Biofilters have been implemented in urban areas due to their ability to improve road runoff quality. However, little is known about the role of soil microorganisms and plants in pollutant remediation in planted swales. Therefore, four large-scale outdoor mesocosms were built and co-contaminated with metals and model polycyclic aromatic hydrocarbons (PAHs) (phenanthrene (Phen), pyrene (Pyr) and benzo[a]pyrene (BaP)), to better understand the complex functioning of swale-like environments. Three macrophyte plant species were tested for enhanced remediation of PAHs: Juncus effusus, Iris pseudacorus, Phalaris arundinacea and a grass mix. The long-term dynamics of PAHs in water outflow and soil were studied. Results showed that only 0.07 to 0.22% of total PAHs were released in water outflow after one year. Two years after contamination, soil sample analyses showed a dissipation of 99.6% for Phen and 99.4% for Pyr regardless of the mesocosm considered, and ranging from 75.5 to 91% for BaP, depending on plant species. Furthermore, the dissipation time-courses may be described by a biphasic process. Experiments showed that the grass mix facilitated BaP long-term biodegradation. Grass also appeared to be the best filter for suspended solids because of its dense rhizosphere, which prevents the transfer of BaP to groundwater.

  10. Hydromechanical coupling in geologic processes

    Science.gov (United States)

    Neuzil, C.E.

    2003-01-01

    Earth's porous crust and the fluids within it are intimately linked through their mechanical effects on each other. This paper presents an overview of such "hydromechanical" coupling and examines current understanding of its role in geologic processes. An outline of the theory of hydromechanics and rheological models for geologic deformation is included to place various analytical approaches in proper context and to provide an introduction to this broad topic for nonspecialists. Effects of hydromechanical coupling are ubiquitous in geology, and can be local and short-lived or regional and very long-lived. Phenomena such as deposition and erosion, tectonism, seismicity, earth tides, and barometric loading produce strains that tend to alter fluid pressure. Resulting pressure perturbations can be dramatic, and many so-called "anomalous" pressures appear to have been created in this manner. The effects of fluid pressure on crustal mechanics are also profound. Geologic media deform and fail largely in response to effective stress, or total stress minus fluid pressure. As a result, fluid pressures control compaction, decompaction, and other types of deformation, as well as jointing, shear failure, and shear slippage, including events that generate earthquakes. By controlling deformation and failure, fluid pressures also regulate states of stress in the upper crust. Advances in the last 80 years, including theories of consolidation, transient groundwater flow, and poroelasticity, have been synthesized into a reasonably complete conceptual framework for understanding and describing hydromechanical coupling. Full coupling in two or three dimensions is described using force balance equations for deformation coupled with a mass conservation equation for fluid flow. Fully coupled analyses allow hypothesis testing and conceptual model development. However, rigorous application of full coupling is often difficult because (1) the rheological behavior of geologic media is complex

  11. Fault diagnosis of nonlinear and large-scale processes using novel modified kernel Fisher discriminant analysis approach

    Science.gov (United States)

    Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng

    2016-04-01

    Timely and accurate fault diagnosis is highly significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work, which further improve the performance of KFDA for fault diagnosis, are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising the separation between overlapping fault samples. Second, kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Finally, an IBBO approach is developed to obtain better solution quality and faster convergence speed. The proposed IBBO-KFDA method is first used on the Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency, and is then applied to diagnose faults of an automatic gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.
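
    As background for the method, a plain two-class kernel Fisher discriminant (without the paper's variable weighting or IBBO tuning) can be written in a few lines of NumPy. The kernel width, regularization term and toy data are assumptions, not the paper's settings.

```python
# Generic two-class kernel Fisher discriminant analysis (KFDA) sketch in NumPy.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_fit(X, y, gamma=0.5, reg=1e-3):
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    means, N = [], np.zeros((n, n))
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]
        means.append(Kc.mean(axis=1))
        centering = np.eye(len(idx)) - np.full((len(idx), len(idx)), 1.0 / len(idx))
        N += Kc @ centering @ Kc.T               # within-class scatter in kernel space
    alpha = np.linalg.solve(N + reg * np.eye(n), means[1] - means[0])
    return alpha

def kfda_project(X_train, alpha, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# toy example: two Gaussian blobs standing in for two fault classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
alpha = kfda_fit(X, y)
scores = kfda_project(X, alpha, X)
print(scores[:5], scores[-5:])  # the two classes separate along this discriminant axis
```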

  12. Computer image processing: Geologic applications

    Science.gov (United States)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
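
    Both corrections named above are simple arithmetic on the image bands, as the hypothetical sketch below shows: subtract each band's "dark object" value to reduce additive atmospheric scattering, then ratio two bands to emphasize spectral (mineralogical) differences. Band choice and the synthetic digital numbers are assumptions.

```python
# Illustration of dark-object subtraction followed by band ratioing.
import numpy as np

def dark_object_subtract(band):
    """Subtract the scene's minimum value, taken as the 'dark object' signal."""
    return band - band.min()

def band_ratio(band_a, band_b, eps=1e-6):
    return dark_object_subtract(band_a) / (dark_object_subtract(band_b) + eps)

rng = np.random.default_rng(0)
red = rng.uniform(40, 200, (100, 100)) + 15.0   # +15 mimics atmospheric path radiance
nir = rng.uniform(30, 180, (100, 100)) + 15.0
ratio = band_ratio(nir, red)
print(ratio.mean())
```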

  13. Task Effects on Linguistic Complexity and Accuracy: A Large-Scale Learner Corpus Analysis Employing Natural Language Processing Techniques

    Science.gov (United States)

    Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar

    2017-01-01

    Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…

  14. Innovation Processes in Large-Scale Public Foodservice-Case Findings from the Implementation of Organic Foods in a Danish County

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Nielsen, Thorkild; Kristensen, Niels Heine

    2005-01-01

    The introduction of organic foods into large-scale foodservice, such as that taking place in hospitals and larger homes for the elderly, has proven to be quite difficult. The very complex planning, procurement and processing procedures used in such facilities are among the reasons for this. Against this background, an evaluation was carried out of the change process related to the implementation of organic foods in large-scale foodservice facilities in Greater Copenhagen county, in order to study the effects of such a change. Based on the findings, a set of guidelines has been developed for the successful implementation of organic foods into large-scale foodservice. The findings and guidelines are, however, also applicable to other types of innovation processes in foodservice.

  15. Continuous, Large-Scale Processing of Seismic Archives for High-Resolution Monitoring of Seismic Activity and Seismogenic Properties

    Science.gov (United States)

    Waldhauser, F.; Schaff, D. P.

    2012-12-01

    Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, form the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. On a global scale, we are currently building
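
    The scaled cross-correlation detector mentioned above can be illustrated with a simple sliding-window Pearson correlation between a template waveform and continuous data; windows whose correlation exceeds a threshold are declared detections. The threshold and the synthetic seismogram below are illustrative, not the network's operational settings.

```python
# Sliding-window normalized cross-correlation detector (illustrative sketch).
import numpy as np

def correlation_detector(data, template, threshold=0.8):
    n = len(template)
    t = (template - template.mean()) / (template.std() * np.sqrt(n))
    detections = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        cc = np.dot(t, (w - w.mean()) / (w.std() * np.sqrt(n) + 1e-12))
        if cc >= threshold:
            detections.append((i, cc))
    return detections

rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 6 * np.pi, 200)) * np.hanning(200)  # toy event waveform
data = rng.standard_normal(5000) * 0.1                               # continuous noise
data[3000:3200] += template                                          # bury a repeat of the event
print(correlation_detector(data, template)[:3])  # detections cluster near sample 3000
```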

  16. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large scale office building to reach ultra high efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of equipment was selected for use.

  17. Large-scale self-assembled zirconium phosphate smectic layers via a simple spray-coating process.

    Science.gov (United States)

    Wong, Minhao; Ishige, Ryohei; White, Kevin L; Li, Peng; Kim, Daehak; Krishnamoorti, Ramanan; Gunther, Robert; Higuchi, Takeshi; Jinnai, Hiroshi; Takahara, Atsushi; Nishimura, Riichi; Sue, Hung-Jue

    2014-04-07

    The large-scale assembly of asymmetric colloidal particles is used in creating high-performance fibres. A similar concept is extended to the manufacturing of thin films of self-assembled two-dimensional crystal-type materials with enhanced and tunable properties. Here we present a spray-coating method to manufacture thin, flexible and transparent epoxy films containing zirconium phosphate nanoplatelets self-assembled into a lamellar arrangement aligned parallel to the substrate. The self-assembled mesophase of zirconium phosphate nanoplatelets is stabilized by epoxy pre-polymer and exhibits rheology favourable towards large-scale manufacturing. The thermally cured film forms a mechanically robust coating and shows excellent gas barrier properties at both low- and high humidity levels as a result of the highly aligned and overlapping arrangement of nanoplatelets. This work shows that the large-scale ordering of high aspect ratio nanoplatelets is easier to achieve than previously thought and may have implications in the technological applications for similar materials.

  18. Karhunen-Loève (PCA) based detection of multiple oscillations in multiple measurement signals from large-scale process plants

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Wickerhauser, M.V.

    2007-01-01

    In the perspective of optimizing the control and operation of large-scale process plants, it is important to detect and locate oscillations in the plants. This paper presents a scheme for detecting and localizing multiple oscillations in multiple measurements from such a large-scale power pla...
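
    Because the record above is truncated, the sketch below is only a generic illustration of the idea its title names: project the multiple measurement signals onto their principal components (the Karhunen-Loève basis) and inspect each component's spectrum for a dominant oscillation. The sampling rate, synthetic signals, and the 10% power-fraction rule are assumptions, not the authors' detection criteria.

      import numpy as np

      def pca_oscillation_scan(signals, fs, power_fraction=0.10):
          """signals: array of shape (n_samples, n_sensors). Returns a list of
          (component index, frequency in Hz) where one spectral peak carries
          more than `power_fraction` of that component's power."""
          x = signals - signals.mean(axis=0)
          u, s, _ = np.linalg.svd(x, full_matrices=False)   # Karhunen-Loeve / PCA basis
          scores = u * s                                     # component time series
          freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / fs)
          found = []
          for k in range(scores.shape[1]):
              spec = np.abs(np.fft.rfft(scores[:, k])) ** 2
              spec[0] = 0.0                                  # ignore the mean
              if spec.max() > power_fraction * spec.sum():
                  found.append((k, float(freqs[spec.argmax()])))
          return found

      if __name__ == "__main__":
          fs = 10.0
          t = np.arange(0, 600, 1.0 / fs)
          rng = np.random.default_rng(1)
          osc = np.sin(2 * np.pi * 0.05 * t)                 # shared 0.05 Hz oscillation
          signals = np.column_stack([a * osc + rng.normal(0, 0.3, t.size)
                                     for a in (1.0, 0.5, 0.0, 0.8)])
          print(pca_oscillation_scan(signals, fs))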

  19. Importance of regional species pools and functional traits in colonization processes: predicting re-colonization after large-scale destruction of ecosystems

    NARCIS (Netherlands)

    Kirmer, A.; Tischew, S.; Ozinga, W.A.; Lampe, von M.; Baasch, A.; Groenendael, van J.M.

    2008-01-01

    Large-scale destruction of ecosystems caused by surface mining provides an opportunity for the study of colonization processes starting with primary succession. Surprisingly, over several decades and without any restoration measures, most of these sites spontaneously developed into valuable biotope

  20. Inducing a health-promoting change process within an organization the Effectiveness of a Large-Scale Intervention on Social Capital, Openness, and Autonomous Motivation Toward Health

    NARCIS (Netherlands)

    Scheppingen, A.R. van; Vroome, E.M.M. de; Have, K.C.J.M. ten; Bos, E.H.; Zwetsloot, G.I.J.M.; Mechelen, W. van

    2014-01-01

    Objective: To examine the effectiveness of an organizational large-scale intervention applied to induce a health-promoting organizational change process. Design and Methods: A quasi-experimental, "as-treated" design was used. Regression analyses on data of employees of a Dutch dairy company (n =324)

  2. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
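
    As a hedged baseline, and not one of the algorithms evaluated in the report, the fragment below implements only a greedy nearest-neighbour frame-to-frame association with a gating distance; it illustrates why closely spaced objects are the hard case, since a single gate and a greedy assignment start swapping identities exactly when targets approach each other. The gate value and Euclidean cost are assumptions.

      import numpy as np

      def greedy_associate(tracks, detections, gate=5.0):
          """Assign each existing track to its nearest unclaimed detection.
          tracks, detections: arrays of shape (n, 2) holding image coordinates."""
          assignments, taken = {}, set()
          if len(tracks) == 0 or len(detections) == 0:
              return assignments
          cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
          for ti in np.argsort(cost.min(axis=1)):            # most confident tracks first
              for di in np.argsort(cost[ti]):
                  if di not in taken and cost[ti, di] <= gate:
                      assignments[int(ti)] = int(di)
                      taken.add(di)
                      break
          return assignments

      if __name__ == "__main__":
          tracks = np.array([[10.0, 10.0], [12.0, 11.0]])               # two closely spaced targets
          detections = np.array([[11.0, 10.5], [12.5, 11.2], [40.0, 3.0]])
          print(greedy_associate(tracks, detections))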

  3. Large scale tracking algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  4. Large Scale Dynamos in Stars

    Science.gov (United States)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  5. Large-Scale Growth of Tubular Aragonite Whiskers through a MgCl2-Assisted Hydrothermal Process

    Directory of Open Access Journals (Sweden)

    Changyin Dong

    2011-08-01

    In this paper, we have developed a facile MgCl2-assisted hydrothermal synthesis route to grow tubular aragonite whiskers on a large scale. The products have been characterized by powder X-ray diffraction (XRD), optical microscopy, and scanning electron microscopy (SEM). The results show the as-grown product is pure tubular aragonite crystalline whiskers with a diameter of 5–10 mm and a length of 100–200 mm, respectively. The concentration of Mg2+ plays an important role in determining the quality and purity of the products. Furthermore, the method can be extended to fabricate CaSO4 fibers. The high quality of the product and the mild conditions used mean that the present route has good prospects for the growth of inorganic crystalline whiskers.

  6. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for the concrete and ceramics, in large scales that have not been seen before in the ceramic area. It is expected to result...

  7. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    Science.gov (United States)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions to provide base data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive, advanced technology, Geographic Information Systems (GIS) open possibilities for automatic processing and analysis of geospatial data. Considering further needs of spatial data sharing and integration, the one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.
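
    As a hedged note on the quality measures listed at the end of the abstract (the authors' exact definitions are not reproduced here), completeness, correctness, and overall quality are commonly derived from confusion-matrix counts of extracted versus reference map objects; the counts below are invented for illustration.

      def map_quality(true_positive, false_positive, false_negative):
          """Completeness (recall), correctness (precision) and overall quality
          as commonly defined for extracted map objects."""
          completeness = true_positive / (true_positive + false_negative)
          correctness = true_positive / (true_positive + false_positive)
          quality = true_positive / (true_positive + false_positive + false_negative)
          return completeness, correctness, quality

      # Invented counts for, e.g., extracted building outlines
      comp, corr, qual = map_quality(true_positive=870, false_positive=60, false_negative=110)
      print(f"completeness={comp:.2f} correctness={corr:.2f} quality={qual:.2f}")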

  8. Inducing a health-promoting change process within an organization: the effectiveness of a large-scale intervention on social capital, openness, and autonomous motivation toward health.

    Science.gov (United States)

    van Scheppingen, Arjella R; de Vroome, Ernest M M; Ten Have, Kristin C J M; Bos, Ellen H; Zwetsloot, Gerard I J M; van Mechelen, W

    2014-11-01

    To examine the effectiveness of an organizational large-scale intervention applied to induce a health-promoting organizational change process. A quasi-experimental, "as-treated" design was used. Regression analyses on data of employees of a Dutch dairy company (n = 324) were used to examine the effects on bonding social capital, openness, and autonomous motivation toward health and on employees' lifestyle, health, vitality, and sustainable employability. Also, the sensitivity of the intervention components was examined. Intervention effects were found for bonding social capital, openness toward health, smoking, healthy eating, and sustainable employability. The effects were primarily attributable to the intervention's dialogue component. The change process initiated by the large-scale intervention contributed to a social climate in the workplace that promoted health and ownership toward health. The study confirms the relevance of collective change processes for health promotion.

  9. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences in deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We also present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
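
    The abstract does not spell out the fault-tolerance mechanics, so the sketch below only illustrates the general pattern behind "high-tolerance computing" on preemptible spot nodes: work items are leased from a queue and automatically re-issued if a worker disappears before acknowledging completion. The queue class, lease timeout, and granule names are invented for the example and are not the HySDS design.

      import time
      from collections import deque

      class LeaseQueue:
          """Minimal at-least-once work queue: an item must be acknowledged within
          `lease_s` seconds of checkout or it is handed out again (e.g. after a
          spot instance is reclaimed mid-job)."""
          def __init__(self, items, lease_s=60.0):
              self.ready = deque(items)
              self.leased = {}                    # item -> lease expiry time
              self.lease_s = lease_s

          def checkout(self):
              now = time.monotonic()
              for item, expiry in list(self.leased.items()):
                  if expiry < now:                # worker presumed lost; requeue first
                      del self.leased[item]
                      self.ready.appendleft(item)
              if not self.ready:
                  return None
              item = self.ready.popleft()
              self.leased[item] = now + self.lease_s
              return item

          def ack(self, item):
              self.leased.pop(item, None)         # completed; never re-issued

      if __name__ == "__main__":
          q = LeaseQueue([f"granule-{i:03d}" for i in range(5)], lease_s=1.0)
          q.ack(q.checkout())                     # a job that completes normally
          q.checkout()                            # a job whose worker is then preempted
          time.sleep(1.1)
          print("re-issued after lost lease:", q.checkout())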

  10. Geological analysis and FT dating of the large-scale right-lateral strike-slip movement of the Red River fault zone

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Tectonically, the large-scale right-lateral strike-slip movement along the Red River fault zone is characterized at its late phase by the southeastward extension and deformation of the Northwestern Yunnan normal fault depression on its northern segment, and by the dextral shear displacement on its central-southern segment. Research on the relations between stratum deformation and fault movement on the typical fault segments, such as Jianchuan, southeast Midu, Yuanjiang River and Yuanyang, since the Miocene Epoch shows that two episodes of dextral faulting dominated by normal shearing have occurred along the Red River fault zone since the Miocene Epoch. Fission track dating (abbreviated to FT dating, the same below) was conducted on apatite samples collected from the above fault segments and related to these movements. Based on the measured single-grain ages and the confined track lengths, we chose the Laslett annealing model to retrieve the thermal history of the samples, and the results show that the fault zone experienced two episodes of obvious shear displacement, one at 5.5 ± 1.5 MaBP and the other at 2.1 ± 0.8 MaBP. The central-southern segment saw two intensive uplifts of mountain mass in the Yuanjiang River-Yuanyang region at 3.6–3.8 MaBP and 1.6–2.3 MaBP, which correspond to the above-mentioned two dextral normal displacement events since the late Miocene Epoch.
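
    For context only (the record above does not reproduce it, and the exact calibration used by the authors is not stated here), apatite fission-track ages of this kind are commonly computed with the standard zeta-calibration age equation:

      $$ t = \frac{1}{\lambda_D}\,\ln\!\left(1 + \lambda_D\,\zeta\,g\,\rho_d\,\frac{\rho_s}{\rho_i}\right) $$

    where $\lambda_D$ is the total decay constant of uranium-238, $\zeta$ the analyst's zeta calibration factor, $g$ the geometry factor (0.5 for the external detector method), $\rho_d$ the dosimeter track density, and $\rho_s/\rho_i$ the ratio of spontaneous to induced track densities.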

  11. Geological analysis and FT dating of the large-scale right-lateral strike-slip movement of the Red River fault zone

    Institute of Scientific and Technical Information of China (English)

    XIANG HongFa; WAN JingLin; HAN ZhuJun; GUO ShunMin; ZHANG WanXia; CHEN LiChun; DONG XingQuan

    2007-01-01

    Tectonically, the large-scale right-lateral strike-slip movement along the Red River fault zone is characterized at its late phase by the southeastward extension and deformation of the Northwestern Yunnan normal fault depression on its northern segment, and by the dextral shear displacement on its central-southern segment. Research on the relations between stratum deformation and fault movement on the typical fault segments, such as Jianchuan, southeast Midu, Yuanjiang River and Yuanyang, since the Miocene Epoch shows that two episodes of dextral faulting dominated by normal shearing have occurred along the Red River fault zone since the Miocene Epoch. Fission track dating (abbreviated to FT dating, the same below) was conducted on apatite samples collected from the above fault segments and related to these movements. Based on the measured single-grain ages and the confined track lengths, we chose the Laslett annealing model to retrieve the thermal history of the samples, and the results show that the fault zone experienced two episodes of obvious shear displacement, one at 5.5 ± 1.5 MaBP and the other at 2.1 ± 0.8 MaBP. The central-southern segment saw two intensive uplifts of mountain mass in the Yuanjiang River-Yuanyang region at 3.6–3.8 MaBP and 1.6–2.3 MaBP, which correspond to the above-mentioned two dextral normal displacement events since the late Miocene Epoch.

  12. The Landscape Evolution Observatory: A large-scale controllable infrastructure to study coupled Earth-surface processes

    Science.gov (United States)

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferre, T. P. A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-09-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  13. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    Science.gov (United States)

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferre, T. P. A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  14. Health benefits of geologic materials and geologic processes

    Science.gov (United States)

    Finkelman, R.B.

    2006-01-01

    The reemerging field of Medical Geology is concerned with the impacts of geologic materials and geologic processes on animal and human health. Most medical geology research has been focused on health problems caused by excess or deficiency of trace elements, exposure to ambient dust, and on other geologically related health problems or health problems for which geoscience tools, techniques, or databases could be applied. Little, if any, attention has been focused on the beneficial health effects of rocks, minerals, and geologic processes. These beneficial effects may have been recognized as long as two million years ago and include emotional, mental, and physical health benefits. Some of the earliest known medicines were derived from rocks and minerals. For thousands of years various clays have been used as an antidote for poisons. "Terra sigillata," still in use today, may have been the first patented medicine. Many trace elements, rocks, and minerals are used today in a wide variety of pharmaceuticals and health care products. There is also a segment of society that believes in the curative and preventative properties of crystals (talismans and amulets). Metals and trace elements are being used in some of today's most sophisticated medical applications. Other recent examples of beneficial effects of geologic materials and processes include epidemiological studies in Japan that have identified a wide range of health problems (such as muscle and joint pain, hemorrhoids, burns, gout, etc.) that may be treated by one or more of nine chemically distinct types of hot springs, and a study in China indicating that residential coal combustion may be mobilizing sufficient iodine to prevent iodine deficiency disease. © 2006 MDPI. All rights reserved.

  15. Health Benefits of Geologic Materials and Geologic Processes

    Directory of Open Access Journals (Sweden)

    Robert B. Finkelman

    2006-12-01

    The reemerging field of Medical Geology is concerned with the impacts of geologic materials and geologic processes on animal and human health. Most medical geology research has been focused on health problems caused by excess or deficiency of trace elements, exposure to ambient dust, and on other geologically related health problems or health problems for which geoscience tools, techniques, or databases could be applied. Little, if any, attention has been focused on the beneficial health effects of rocks, minerals, and geologic processes. These beneficial effects may have been recognized as long as two million years ago and include emotional, mental, and physical health benefits. Some of the earliest known medicines were derived from rocks and minerals. For thousands of years various clays have been used as an antidote for poisons. “Terra sigillata,” still in use today, may have been the first patented medicine. Many trace elements, rocks, and minerals are used today in a wide variety of pharmaceuticals and health care products. There is also a segment of society that believes in the curative and preventative properties of crystals (talismans and amulets). Metals and trace elements are being used in some of today’s most sophisticated medical applications. Other recent examples of beneficial effects of geologic materials and processes include epidemiological studies in Japan that have identified a wide range of health problems (such as muscle and joint pain, hemorrhoids, burns, gout, etc.) that may be treated by one or more of nine chemically distinct types of hot springs, and a study in China indicating that residential coal combustion may be mobilizing sufficient iodine to prevent iodine deficiency disease.

  16. Glacistore: Understanding Late Cenozoic Glaciation and Basin Processes for the Development of Secure Large Scale Offshore CO2 Storage (North Sea).

    Science.gov (United States)

    Stewart, H. A.; Barrio, M.; Akhurst, M.; Aagaard, P.; Alcalde, J.; Bauer, A.; Bradwell, T.; Cavanagh, A.; Faleide, J. I.; Furre, A. K.; Haszeldine, S.; Hjelstuen, B. O.; Holloway, S.; Johansen, H.; Johnson, G.; Kuerschner, W.; Mondol, N. H.; Querendez, E.; Ringrose, P. S.; Sejrup, H. P.; Stewart, M.; Stoddart, D.; Wilkinson, M.; Zalmstra, H.

    2014-12-01

    The sedimentary strata of the North Sea Basin (NSB) record the glacial and interglacial history of environmental change in the Northern Hemisphere, and are a proposed location for the engineered storage of carbon dioxide (CO2) captured from power plant and industrial sources to reduce greenhouse gas emissions. These aspects interact in the geomechanical and fluid flow domain, as ice sheet dynamics change the properties of potential seal and reservoir rocks that are the prospective geological storage strata for much of Europe's captured CO2. The intensification of the global glacial-interglacial cycle at the onset of the Pleistocene (2.5-2.7 Ma) was a critical tipping-point in Earth's recent climate history. The increased severity of glaciations at the Plio-Pleistocene boundary triggered the first development of large-scale continental ice sheets in the Northern Hemisphere. The central part of the NSB preserves a unique history of the depositional record spanning at least the last 3 Ma, which also forms the overburden and seal to the underlying CO2 reservoirs. There is good evidence that these ice sheets created strong feedback loops that subsequently affected the evolution of the Quaternary climate system through complex ocean-atmosphere-cryosphere linkages. Understanding NSB dynamics, including the role of fluids in controlling compaction, cementation, and diagenetic processes in shale-dominated basins, is essential for CO2 storage site characterisation to increase understanding and confidence in secure storage. An increased understanding of the overlying sequence will inform quantitative predictions of the performance of prospective CO2 storage sites in glaciated areas in Europe and worldwide; to include improved resolution of glacial cycles (depositional and chronological framework), characterise pore fluids, flow properties of glacial landforms within the sequence (e.g. tunnel valleys) and the geomechanical effects (quantify compaction, rock stiffness, strength

  17. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  18. Large-scale analysis of high-speed atomic force microscopy data sets using adaptive image processing

    Directory of Open Access Journals (Sweden)

    Blake W. Erickson

    2012-11-01

    Modern high-speed atomic force microscopes generate significant quantities of data in a short amount of time. Each image in the sequence has to be processed quickly and accurately in order to obtain a true representation of the sample and its changes over time. This paper presents an automated, adaptive algorithm for the required processing of AFM images. The algorithm adaptively corrects for both common one-dimensional distortions as well as the most common two-dimensional distortions. This method uses an iterative thresholded processing algorithm for rapid and accurate separation of background and surface topography. This separation prevents artificial bias from topographic features and ensures the best possible coherence between the different images in a sequence. This method is equally applicable to all channels of AFM data, and can process images in seconds.
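
    A minimal sketch of the general approach described above, not the authors' algorithm: each scan line is fitted with a low-order polynomial using only pixels currently classified as background, the fit is subtracted, and the background mask is re-estimated iteratively so that tall features do not bias the baseline. The polynomial order, threshold rule, and iteration count are assumptions.

      import numpy as np

      def flatten_afm(image, order=1, n_iter=5):
          """Line-by-line flattening with an iteratively re-estimated background mask."""
          img = image.astype(float)
          mask = np.ones_like(img, dtype=bool)          # initially: everything is background
          x = np.arange(img.shape[1])
          flat = img.copy()
          for _ in range(n_iter):
              for i, row in enumerate(img):
                  coeffs = np.polyfit(x[mask[i]], row[mask[i]], order)
                  flat[i] = row - np.polyval(coeffs, x)
              # pixels well above the background spread are treated as topography
              mask = flat < flat.mean() + 1.0 * flat.std()
          return flat

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          tilt = np.outer(np.ones(64), np.linspace(0, 5, 64))        # scanner tilt / bow
          feature = np.zeros((64, 64)); feature[20:40, 20:40] = 8.0  # raised topography
          raw = tilt + feature + rng.normal(0, 0.1, (64, 64))
          print("background std after flattening:",
                flatten_afm(raw)[feature == 0].std().round(3))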

  19. The Interplay Between Signal Processing and Networking in Sensor Networks: A Perspective on Large-scale Networks for Military Applications

    Science.gov (United States)

    2006-07-01

    cannons or moving ground/aerial vehicles. Severe constraints on energy, computing, and communications capabilities dominate this problem. As the... Gauss-Markov process, a link metric in the form of mutual information is defined in [46], which leads to the optimal route that maximizes the decay

  20. [Energy Consumption Comparison and Energy Saving Approaches for Different Wastewater Treatment Processes in a Large-scale Reclaimed Water Plant].

    Science.gov (United States)

    Yang, Min; Li, Ya-ming; Wei, Yuan-song; Lü, Jian; Yu, Da-wei; Liu, Ji-bao; Fan, Yao-bo

    2015-06-01

    Energy consumption is the main performance indicator of reclaimed water plant (RWP) operation. Methods of specific energy consumption analysis, unit energy consumption analysis and redundancy analysis were applied to investigate the composition and spatio-temporal distribution of energy consumption in Qinghe RWP with inverted A2/O, A2/O and A2/O-MBR processes. The A2/O-MBR process was analyzed in particular to identify the main nodes and causes of high energy consumption, approaches for energy saving were explored, and the energy consumption before and after upgrading for energy saving was compared. The results showed that aeration was the key factor affecting energy consumption in both the conventional and A2/O-MBR processes, accounting for 42.97% and 50.65% of total energy consumption, respectively. Pulsating aeration allowed an increased membrane flux and remarkably reduced the energy consumption of the A2/O-MBR process while still meeting the effluent standard; e.g., the membrane flux was increased by 20%, and the energy consumption per kiloton of wastewater and per kilogram of COD removed was decreased by 42.39% to 0.53 kW·h·m-3 and by 54.74% to 1.29 kW·h·kg-1, respectively. The decrease of the backflow ratio in the A2/O-MBR process within a certain range would not deteriorate the effluent quality, due to its insignificant correlation with the effluent quality, and may therefore be considered as one of the ways for further energy saving.
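
    To make the two intensity metrics above concrete, the toy calculation below shows how an energy intensity per cubic metre treated and per kilogram of COD removed would be derived from plant totals; all figures in the example are invented and are not data from Qinghe RWP.

      def specific_energy(energy_kwh, volume_m3, cod_in_mg_l, cod_out_mg_l):
          """Energy intensity per m3 treated and per kg of COD removed."""
          per_m3 = energy_kwh / volume_m3
          cod_removed_kg = volume_m3 * (cod_in_mg_l - cod_out_mg_l) / 1000.0  # mg/L == g/m3
          per_kg_cod = energy_kwh / cod_removed_kg
          return per_m3, per_kg_cod

      # Invented daily totals: 30,000 kWh, 60,000 m3 treated, COD reduced from 420 to 40 mg/L
      e_m3, e_cod = specific_energy(30_000, 60_000, 420, 40)
      print(f"{e_m3:.2f} kWh/m3, {e_cod:.2f} kWh/kg COD removed")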

  1. Large-scale experiments for the vulnerability analysis of buildings impacted and intruded by fluviatile torrential hazard processes

    Science.gov (United States)

    Sturm, Michael; Gems, Bernhard; Fuchs, Sven; Mazzorana, Bruno; Papathoma-Köhle, Maria; Aufleger, Markus

    2016-04-01

    In European mountain regions, losses due to torrential hazards are still considerably high despite the ongoing debate on an overall increasing or decreasing trend. Recent events in Austria clearly revealed that, for technical and economic reasons, an overall protection of settlements in the alpine environment against torrential hazards is not feasible. On the side of the hazard process, events with unpredictable intensities may represent overload scenarios for existing protection structures in the torrent catchments. They bear a particular risk of significant losses in the living space. Although the importance of vulnerability is widely recognised, there is still a research gap concerning its assessment. Currently, potential losses at buildings due to torrential hazards and their comparison with reinstatement costs are determined by the use of empirical functions. Hence, relations between process intensities and the extent of losses, gathered from the analysis of historic hazard events and from object-specific restoration values, are used. This approach does not represent a physics-based and integral concept, since relevant and often crucial processes, such as the intrusion of the fluid-sediment mixture into elements at risk, are not considered. Building on these findings, our work is targeted at extending present models of risk research towards an integral, more physics-based vulnerability analysis concept. Fluviatile torrential hazard processes and their impacts on the building envelope are experimentally modelled. Material intrusion processes are thereby explicitly considered. Dynamic impacts are measured quantitatively and in a spatially distributed manner by the use of a large set of force transducers. The experimental tests are carried out with artificial, vertical and skewed plates, also including openings for material intrusion. Further, the impacts on specific buildings within the test site of the work, the fan apex of the Schnannerbach

  2. Modeling the thermal-hydrologic processes in a large-scale underground heater test in partially saturated fractured tuff

    Science.gov (United States)

    Birkholzer, J. T.; Tsang, Y. W.

    2000-02-01

    The Drift Scale Test (DST) is being conducted in an underground facility at Yucca Mountain, Nevada, to probe the coupled thermal, hydrological, mechanical, and chemical processes likely to occur in the fractured rock mass around a potential high-level nuclear waste repository. Thermal-hydrological processes in the DST have been simulated using a three-dimensional numerical model. The model incorporates the realistic test configuration and all available site-specific measurements pertaining to the thermal and hydrological properties of the unsaturated fractured tuff of the test block. The modeled predictions were compared to the extensive set of measured data collected in the first year of this 8-year-long test. The mean error between the predictions and measurement at 12 months of heating for over 1600 temperature sensors is about 2°C. Heat-pipe signature in the temperature data, indicating two-phase regions of liquid-vapor counterflow, is seen in both the measurements and simulated results. The redistribution of moisture content in the rock mass (resulting from vaporization and condensation) was probed by periodic air-injection testing and geophysical measurements. Good agreement also occurred between the model predictions and these measurements. The general agreement between predictions from the numerical simulations and the measurements of the thermal test indicates that our fundamental understanding of the coupled thermal-hydrologic processes at Yucca Mountain is sound. However, effects of spatial heterogeneity from discrete fractures that are observed in the temperature data are not matched by simulations from the numerical model, which treat the densely spaced fractures as a continuum.

  3. Computation of 2-D pinhole image-formation process of large-scale furnaces using the discrete ordinates method

    CERN Document Server

    Li Hong; Lu Ji Dong; Zheng Chu Guan

    2003-01-01

    In most of the discrete ordinate schemes (DOS) reported in the literature, the discrete directions are fixed, and unable to be arbitrarily adjusted; therefore, it is difficult to employ these schemes to calculate the radiative energy image-formation of pulverized-coal furnaces. On the basis of a new DOS, named the discrete ordinate scheme with (an) infinitely small weight(s), which was recently proposed by the authors, a novel algorithm for computing the pinhole image-formation process is developed in this work. The performance of this algorithm is tested, and is found to be also suitable for parallel computation.

  4. Standardization developments for large scale biobanks in smoking related diseases - a model system for blood sample processing and storage.

    Science.gov (United States)

    Malm, Johan; Fehniger, Thomas E; Danmyr, Pia; Végvári, Ákos; Welinder, Charlotte; Lindberg, Henrik; Upton, Paul; Carter, Stephanie; Appelqvist, Roger; Sjödin, Karin; Wieslander, Elisabet; Dahlbäck, Magnus; Rezeli, Melinda; Erlinge, David; Marko-Varga, György

    2013-12-01

    Samples stored in biobanks give researchers and respiratory healthcare institutions access to datasets of analytes valuable for both diagnostic and research practices. The usefulness of these samples in clinical decision-making is highly dependent on their quality and integrity. New procedures that better preserve sample integrity and reduce degradation are being developed to meet the needs of both present and future biobanking. Here we present an automatic sample workflow scheme that is designed to handle high numbers of blood samples. Blood fractions are aliquoted, heat sealed using novel technology, and stored in 384-tube high-density sample arrays. The newly developed 384 biobank rack system is especially suited for preserving identical small aliquots. We provide data on robotic processing of clinical samples at -80°C, following initial processing, analysis and shipping between laboratories throughout Europe. Subsequent to unpacking, re-sorting, and storage at these sites, the samples have been returned for analysis. Biomarker analysis of 13 common tests in the clinical chemistry unit of the hospital provides evidence of qualitative and stable logistics using the 384-sample tube system. This technology development allows rapid access to a given sample in the frozen archive while maintaining individual sample integrity with sample tube confinement and quality management.

  5. Integrative analysis of large scale expression profiles reveals core transcriptional response and coordination between multiple cellular processes in a cyanobacterium

    Directory of Open Access Journals (Sweden)

    Bhattacharyya-Pakrasi Maitrayee

    2010-08-01

    Background: Cyanobacteria are the only known prokaryotes capable of oxygenic photosynthesis. They play significant roles in global biogeochemical cycles and carbon sequestration, and have recently been recognized as potential vehicles for production of renewable biofuels. Synechocystis sp. PCC 6803 has been extensively used as a model organism for cyanobacterial studies. DNA microarray studies in Synechocystis have shown varying degrees of transcriptome reprogramming under altered environmental conditions. However, it is not clear from published work how transcriptome reprogramming affects pre-existing networks of fine-tuned cellular processes. Results: We have integrated 163 transcriptome data sets generated in response to numerous environmental and genetic perturbations in Synechocystis. Our analyses show that a large number of genes, defined as the core transcriptional response (CTR), are commonly regulated under most perturbations. The CTR contains nearly 12% of Synechocystis genes found on its chromosome. The majority of genes in the CTR are involved in photosynthesis, translation, energy metabolism and stress protection. Our results indicate that a large number of differentially regulated genes identified in most reported studies in Synechocystis under different perturbations are associated with the general stress response. We also find that a majority of genes in the CTR are coregulated with 25 regulatory genes. Some of these regulatory genes have been implicated in cellular responses to oxidative stress, suggesting that reactive oxygen species are involved in the regulation of the CTR. A Bayesian network, based on the regulation of various KEGG pathways determined from the expression patterns of their associated genes, has revealed new insights into the coordination between different cellular processes. Conclusion: We provide here the first integrative analysis of transcriptome data sets generated in a cyanobacterium. This

  6. Groundwater in geologic processes, 2nd edition

    Science.gov (United States)

    Ingebritsen, Steven E.; Sanford, Ward E.; Neuzil, Christopher E.

    2006-01-01

    Interest in the role of Groundwater in Geologic Processes has increased steadily over the past few decades. Hydrogeologists and geologists are now actively exploring the role of groundwater and other subsurface fluids in such fundamental geologic processes as crustal heat transfer, ore deposition, hydrocarbon migration, earthquakes, tectonic deformation, diagenesis, and metamorphism. Groundwater in Geologic Processes is the first comprehensive treatment of this body of inquiry. Chapters 1 to 4 develop the basic theories of groundwater motion, hydromechanics, solute transport, and heat transport. Chapter 5 applies these theories to regional groundwater flow systems in a generic sense, and Chapters 6 to 13 focus on particular geologic processes and environments. Relative to the first edition of Groundwater in Geologic Processes, this second edition includes a much more comprehensive treatment of hydromechanics (the coupling of groundwater flow and deformation). It also includes new chapters on "compaction and diagenesis," "metamorphism," and "subsea hydrogeology." Finally, it takes advantage of the substantial body of published research that has appeared since the first edition in 1998. The systematic presentation of theory and application, and the problem sets that conclude each chapter, make this book ideal for undergraduate- and graduate-level geology courses (assuming that the students have some background in calculus and introductory chemistry). It also serves as an invaluable reference for researchers and other professionals in the field.

  7. Towards large-scale production of solution-processed organic tandem modules based on ternary composites: Design of the intermediate layer, device optimization and laser based module processing

    DEFF Research Database (Denmark)

    Li, Ning; Kubis, Peter; Forberich, Karen

    2014-01-01

    We report on a novel approach including: 1. the design of an efficient intermediate layer, which facilitates the use of most high-performance active materials in a tandem structure and the compatibility of the tandem concept with large-scale production; 2. the concept of ternary composites based on...

  8. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    Science.gov (United States)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to either a lack of detailed channel and floodplain geometry or the absence of hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model, which has its own simpler 1D routing scheme (DynRout) that has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) using a FM 2D flexible mesh forced with PCR output and (2) as in (1) but discriminating between 1D channels and 2D floodplains, and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step into a globally applicable approach to fully couple

  9. Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory.

    Science.gov (United States)

    Liu, Qin; Ulloa, Antonio; Horwitz, Barry

    2017-07-07

    Many cognitive and computational models have been proposed to help understand working memory. In this article, we present a simulation study of cortical processing of visual objects during several working memory tasks using an extended version of a previously constructed large-scale neural model [Tagamets, M. A., & Horwitz, B. Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cerebral Cortex, 8, 310-320, 1998]. The original model consisted of arrays of Wilson-Cowan type of neuronal populations representing primary and secondary visual cortices, inferotemporal (IT) cortex, and pFC. We added a module representing entorhinal cortex, which functions as a gating module. We successfully implemented multiple working memory tasks using the same model and produced neuronal patterns in visual cortex, IT cortex, and pFC that match experimental findings. These working memory tasks can include distractor stimuli or can require that multiple items be retained in mind during a delay period (Sternberg's task). Besides electrophysiology data and behavioral data, we also generated fMRI BOLD time series from our simulation. Our results support the involvement of IT cortex in working memory maintenance and suggest the cortical architecture underlying the neural mechanisms mediating particular working memory tasks. Furthermore, we noticed that, during simulations of memorizing a list of objects, the first and last items in the sequence were recalled best, which may implicate the neural mechanism behind this important psychological effect (i.e., the primacy and recency effect).
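
    As a hedged sketch of the building block named above (a single Wilson-Cowan excitatory-inhibitory population pair, not the full multi-region model of the study), the code below integrates the standard rate equations with an explicit Euler step; the coupling weights, time constants, and inputs are illustrative values only.

      import numpy as np

      def sigmoid(x, gain=4.0, thresh=0.5):
          return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

      def wilson_cowan(t_max=1.0, dt=1e-3, w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0,
                       tau_e=0.01, tau_i=0.02, ext_e=0.6, ext_i=0.0):
          """Integrate one excitatory (E) / inhibitory (I) population pair."""
          steps = int(t_max / dt)
          e = np.zeros(steps)
          i = np.zeros(steps)
          for k in range(steps - 1):
              de = (-e[k] + sigmoid(w_ee * e[k] - w_ei * i[k] + ext_e)) / tau_e
              di = (-i[k] + sigmoid(w_ie * e[k] - w_ii * i[k] + ext_i)) / tau_i
              e[k + 1] = e[k] + dt * de
              i[k + 1] = i[k] + dt * di
          return e, i

      if __name__ == "__main__":
          e, i = wilson_cowan()
          print("final E, I activity:", round(float(e[-1]), 3), round(float(i[-1]), 3))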

  10. Energy optimization of thermochemical vacuum processes and equipment in large-scale production; Energetische Optimierung von thermochemischen Vakuumprozessen und Anlagen in der Grossserie

    Energy Technology Data Exchange (ETDEWEB)

    Heuer, Volker; Loeser, Klaus [ALD Vacuum Technologies GmbH, Hanau (Germany)

    2011-09-15

    The energy optimization of thermoprocessing equipment is of great ecological and economical importance. Thermoprocessing equipment consumes up to 40 % of the energy used in industrial applications. Therefore it is necessary to increase the energy efficiency of thermoprocessing equipment in order to meet the EU's targets to reduce greenhouse gas emissions. To exploit the potential for energy savings, it is essential to analyze and optimize processes and plants as well as operating methods of electrically heated vacuum plants used in large scale production. The process can be improved by accelerated heating through the application of ''convective heating''. In addition higher process temperatures can be applied in diffusion-controlled thermochemical processes to accelerate the process significantly. Modular vacuum systems prove to be very energy-efficient because they adapt to the changing production requirements step-by-step. An optimized insulation structure reduces thermal losses considerably. Energy mangement systems installed in the plant-control optimally manage the energy used for start-up and shutdown of the plants while preventing energy peak loads. The use of new CFC-fixtures also contributes to reduce the energy demand. (orig.)

  11. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  12. Leveraging the power of multi-core platforms for large-scale geospatial data processing: Exemplified by generating DEM from massive LiDAR point clouds

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2010-10-01

    In recent years improvements in spatial data acquisition technologies, such as LiDAR, resulted in an explosive increase in the volume of spatial data, presenting unprecedented challenges for computation capacity. At the same time, the kernel of computing platforms, the CPU, also evolved from a single-core to a multi-core architecture. This radical change significantly affected existing data processing algorithms. Exemplified by the problem of generating a DEM from massive airborne LiDAR point clouds, this paper studies how to leverage the power of multi-core platforms for large-scale geospatial data processing and demonstrates how multi-core technologies can improve performance. Pipelining is adopted to exploit the thread-level parallelism of multi-core platforms. First, raw point clouds are partitioned into overlapped blocks. Second, these discrete blocks are interpolated concurrently on parallel pipelines. As the interpolation runs, intermediate results are sorted and finally merged into an integrated DEM. This parallelization demonstrates the great potential of multi-core platforms with high data throughput and low memory footprint. This approach achieves excellent performance speedup with greatly reduced processing time. For example, on a 2.0 GHz Quad-Core Intel Xeon platform, the proposed parallel approach can process approximately one billion LiDAR points (16.4 GB) in about 12 min and produces a 27,500×30,500 raster DEM, using less than 800 MB of main memory.
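
    A hedged sketch of the block-parallel pattern described above: the point cloud is partitioned into overlapped tiles, the tiles are interpolated concurrently, and the resulting blocks are merged into one raster. The tile size, overlap, and the simple inverse-distance-weighting kernel are assumptions for the example, not the authors' implementation.

      import numpy as np
      from multiprocessing import Pool

      CELL = 1.0   # output DEM resolution

      def idw_block(args):
          """Interpolate one tile of the output grid from the points that fall
          in the tile plus an overlap margin (so tile edges stay seamless)."""
          (x0, x1, y0, y1), pts = args
          xs = np.arange(x0, x1, CELL)
          ys = np.arange(y0, y1, CELL)
          gx, gy = np.meshgrid(xs, ys)
          dem = np.empty(gx.shape)
          for idx in np.ndindex(gx.shape):
              d = np.hypot(pts[:, 0] - gx[idx], pts[:, 1] - gy[idx])
              w = 1.0 / np.maximum(d, 1e-6) ** 2
              dem[idx] = np.sum(w * pts[:, 2]) / np.sum(w)
          return (x0, y0), dem

      def make_dem(points, extent=100.0, tile=50.0, overlap=5.0):
          jobs = []
          for x0 in np.arange(0.0, extent, tile):
              for y0 in np.arange(0.0, extent, tile):
                  sel = ((points[:, 0] >= x0 - overlap) & (points[:, 0] < x0 + tile + overlap) &
                         (points[:, 1] >= y0 - overlap) & (points[:, 1] < y0 + tile + overlap))
                  jobs.append(((x0, x0 + tile, y0, y0 + tile), points[sel]))
          dem = np.empty((int(extent / CELL), int(extent / CELL)))
          with Pool() as pool:                       # tiles interpolated concurrently
              for (x0, y0), block in pool.map(idw_block, jobs):
                  r, c = int(y0 / CELL), int(x0 / CELL)
                  dem[r:r + block.shape[0], c:c + block.shape[1]] = block
          return dem

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          pts = np.column_stack([rng.uniform(0, 100, 5000),
                                 rng.uniform(0, 100, 5000),
                                 rng.uniform(200, 210, 5000)])   # x, y, elevation
          print("DEM shape:", make_dem(pts).shape)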

  13. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL

    2016-01-01

    In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer printing. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  14. Sound to language: different cortical processing for first and second languages in elementary school children as revealed by a large-scale study using fNIRS.

    Science.gov (United States)

    Sugiura, Lisa; Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2011-10-01

    A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1-Japanese) and a second language (L2-English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation, language (L1/L2), word frequency (high/low), and hemisphere (left/right), were investigated. L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant), and in the inferior parietal region with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains depending more on right-hemispheric processing while acquiring unfamiliar/low-frequency words. A right-to-left shift in laterality should occur in the inferior parietal region, as lexical knowledge increases irrespective of language.

  15. Sound to Language: Different Cortical Processing for First and Second Languages in Elementary School Children as Revealed by a Large-Scale Study Using fNIRS

    Science.gov (United States)

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2011-01-01

    A large-scale study of 484 elementary school children (6–10 years) performing word repetition tasks in their native language (L1-Japanese) and a second language (L2-English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation, language (L1/L2), word frequency (high/low), and hemisphere (left/right), were investigated. L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant), and in the inferior parietal region with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children’s brains depending more on right-hemispheric processing while acquiring unfamiliar/low-frequency words. A right-to-left shift in laterality should occur in the inferior parietal region, as lexical knowledge increases irrespective of language. PMID:21350046

  16. Large Scale Structure Observations

    CERN Document Server

    Percival, Will J

    2015-01-01

    Galaxy Surveys are enjoying a renaissance thanks to the advent of multi-object spectrographs on ground-based telescopes. The last 15 years have seen the fruits of this experimental advance, including the 2-degree Field Galaxy Redshift Survey (2dFGRS; Colless et al. 2003) and the Sloan Digital Sky Survey (SDSS; York et al. 2000). Most recently, the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013), part of the SDSS-III project (Eisenstein et al. 2011), has provided the largest volume of the low-redshift Universe ever surveyed with a galaxy density useful for high-precision cosmology. This set of lecture notes looks at some of the physical processes that underpin these measurements, the evolution of measurements themselves, and looks ahead to the next 15 years and the advent of surveys such as the enhanced Baryon Oscillation Spectroscopic Survey (eBOSS), the Dark Energy Spectroscopic Instrument (DESI) and the ESA Euclid satellite mission.

  17. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

Provides cutting-edge research in large-scale data analytics from diverse scientific areas. Surveys varied subject areas and reports on individual results of research in the field. Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  18. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant - 8280

    Energy Technology Data Exchange (ETDEWEB)

    Mahoney, Lenna A.; Alzheimer, James M.; Arm, Stuart T.; Guzman-Leong, Consuelo E.; Jagoda, Lynette K.; Stewart, Charles W.; Wells, Beric E.; Yokuda, Satoru T.

    2008-06-03

The Hanford Waste Treatment Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Prior testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was up to 10 times that in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a ¼-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the ¼-scale test stand showed that, contrary to the small-scale impeller-mixed tests, gas holdup in clay without AFA is comparable to that in the chemical waste simulant with AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions.
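    To make the holdup quantity concrete, the sketch below shows the usual way a retained-gas volume fraction is inferred from the rise of the simulant surface level relative to its fully degassed level. The formula and the numbers are a generic illustration, not the report's data-analysis method.

```python
# Illustrative only: gas holdup as the volume fraction implied by level rise.
def gas_holdup(level_gassy_cm, level_degassed_cm):
    """Volume fraction of retained gas, alpha = (H_g - H_0) / H_g."""
    return (level_gassy_cm - level_degassed_cm) / level_gassy_cm

print(f"holdup: {gas_holdup(102.5, 100.0):.3%}")   # ~2.4 % retained gas
```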

  19. Effects of sex and proficiency in second language processing as revealed by a large-scale fNIRS study of school-aged children.

    Science.gov (United States)

    Sugiura, Lisa; Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2015-10-01

Previous neuroimaging studies in adults have revealed that first and second languages (L1/L2) share similar neural substrates, and that proficiency is a major determinant of the neural organization of L2 in the lexical-semantic and syntactic domains. However, little is known about the neural substrates of children in the phonological domain, or about sex differences. Here, we conducted a large-scale study (n = 484) of school-aged children using functional near-infrared spectroscopy and a word repetition task, which requires extensive phonological processing. We investigated cortical activation during word processing, emphasizing sex differences, to clarify similarities and differences between L1 and L2, and proficiency-related differences during early L2 learning. L1 and L2 shared similar neural substrates with decreased activation in L2 compared to L1 in the posterior superior/middle temporal and angular/supramarginal gyri for both sexes. Significant sex differences were found in cortical activation within language areas during high-frequency but not low-frequency word processing. During high-frequency word processing, widely distributed areas including the angular/supramarginal gyri were activated in boys, while more restricted areas, excluding the angular/supramarginal gyri, were activated in girls. Significant sex differences were also found in L2 proficiency-related activation: activation significantly increased with proficiency in boys, whereas no proficiency-related differences were found in girls. Importantly, cortical sex differences emerged with proficiency. Based on previous research, the present results indicate that sex differences are acquired or enlarged during language development through different cognitive strategies between sexes, possibly reflecting their different memory functions. © 2015 Wiley Periodicals, Inc.

  20. Modes of Large-Scale Brain Network Organization during Threat Processing and Posttraumatic Stress Disorder Symptom Reduction during TF-CBT among Adolescent Girls.

    Science.gov (United States)

    Cisler, Josh M; Sigel, Benjamin A; Kramer, Teresa L; Smitherman, Sonet; Vanderzee, Karin; Pemberton, Joy; Kilts, Clinton D

    2016-01-01

    Posttraumatic stress disorder (PTSD) is often chronic and disabling across the lifespan. The gold standard treatment for adolescent PTSD is Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT), though treatment response is variable and mediating neural mechanisms are not well understood. Here, we test whether PTSD symptom reduction during TF-CBT is associated with individual differences in large-scale brain network organization during emotion processing. Twenty adolescent girls, aged 11-16, with PTSD related to assaultive violence completed a 12-session protocol of TF-CBT. Participants completed an emotion processing task, in which neutral and fearful facial expressions were presented either overtly or covertly during 3T fMRI, before and after treatment. Analyses focused on characterizing network properties of modularity, assortativity, and global efficiency within an 824 region-of-interest brain parcellation separately during each of the task blocks using weighted functional connectivity matrices. We similarly analyzed an existing dataset of healthy adolescent girls undergoing an identical emotion processing task to characterize normative network organization. Pre-treatment individual differences in modularity, assortativity, and global efficiency during covert fear vs neutral blocks predicted PTSD symptom reduction. Patients who responded better to treatment had greater network modularity and assortativity but lesser efficiency, a pattern that closely resembled the control participants. At a group level, greater symptom reduction was associated with greater pre-to-post-treatment increases in network assortativity and modularity, but this was more pronounced among participants with less symptom improvement. The results support the hypothesis that modularized and resilient brain organization during emotion processing operate as mechanisms enabling symptom reduction during TF-CBT.
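    The three network properties named above are standard graph measures. The sketch below shows one way to compute them for a weighted functional-connectivity matrix using networkx; the ROI count, the random matrix, and the binarizing threshold are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
n_rois = 50                                   # stand-in for the 824-ROI parcellation
corr = np.abs(rng.normal(0.2, 0.1, size=(n_rois, n_rois)))
corr = (corr + corr.T) / 2.0                  # symmetrize
np.fill_diagonal(corr, 0.0)

G_weighted = nx.from_numpy_array(corr)        # weighted, undirected connectivity graph

# Modularity: quality of a partition found by a standard community detector.
parts = community.greedy_modularity_communities(G_weighted, weight="weight")
modularity = community.modularity(G_weighted, parts, weight="weight")

# Assortativity and global efficiency are computed here on a binarized graph
# (edges above an arbitrary threshold) as a simplification; the study used
# weighted matrices.
G_bin = nx.from_numpy_array((corr > 0.25).astype(int))
assortativity = nx.degree_assortativity_coefficient(G_bin)
efficiency = nx.global_efficiency(G_bin)

print(modularity, assortativity, efficiency)
```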

  1. Development and Validation of a One-Dimensional Co-Electrolysis Model for Use in Large-Scale Process Modeling Analysis

    Energy Technology Data Exchange (ETDEWEB)

    J. E. O' Brien; M. G. McKellar; G. L. Hawkes; C. M. Stoots

    2007-07-01

    A one-dimensional chemical equilibrium model has been developed for analysis of simultaneous high-temperature electrolysis of steam and carbon dioxide (coelectrolysis) for the direct production of syngas, a mixture of hydrogen and carbon monoxide. The model assumes local chemical equilibrium among the four process-gas species via the shift reaction. For adiabatic or specified-heat-transfer conditions, the electrolyzer model allows for the determination of coelectrolysis outlet temperature, composition (anode and cathode sides), mean Nernst potential, operating voltage and electrolyzer power based on specified inlet gas flow rates, heat loss or gain, current density, and cell area-specific resistance. Alternately, for isothermal operation, it allows for determination of outlet composition, mean Nernst potential, operating voltage, electrolyzer power, and the isothermal heat requirement for specified inlet gas flow rates, operating temperature, current density and area-specific resistance. This model has been developed for incorporation into a system-analysis code from which the overall performance of large-scale coelectrolysis plants can be evaluated. The one-dimensional co-electrolysis model has been validated by comparison with results obtained from a 3-D computational fluid dynamics model and by comparison with experimental results.
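    The quantities the model reports are related by simple cell-level arithmetic: the operating voltage is the mean Nernst potential plus the ohmic overpotential set by current density and area-specific resistance, and stack power follows from voltage, current density, and cell area. A minimal sketch with purely illustrative numbers, not data from the report, is given below.

```python
def operating_voltage(v_nernst, current_density, asr):
    """V_op = V_Nernst + j * ASR  (j in A/cm^2, ASR in ohm*cm^2)."""
    return v_nernst + current_density * asr

def electrolyzer_power(v_op, current_density, cell_area_cm2, n_cells):
    """Total DC power in watts for a stack of identical cells."""
    return v_op * current_density * cell_area_cm2 * n_cells

v_op = operating_voltage(v_nernst=0.95, current_density=0.4, asr=1.5)   # ~1.55 V
power = electrolyzer_power(v_op, 0.4, cell_area_cm2=64.0, n_cells=60)
print(f"operating voltage: {v_op:.2f} V, stack power: {power:.0f} W")
```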

  2. Distribution probability of large-scale landslides in central Nepal

    Science.gov (United States)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For the new area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
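    The workflow described, fitting a logistic regression of landslide presence on geomorphological/geological predictors and then validating it on a separate area via the ROC curve, can be sketched as follows. The predictors and the synthetic data are assumptions made only for demonstration and are not the study's parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
X_train = rng.normal(size=(n, 3))        # e.g. slope, relative relief, lithology score
y_train = (X_train @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Validation on a second, independent area
X_new = rng.normal(size=(200, 3))
y_new = (X_new @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=200) > 0).astype(int)
auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
print(f"area under ROC curve in the new area: {auc:.3f}")
```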

  3. Integration of climatic water deficit and fine-scale physiography in process-based modeling of forest landscape resilience to large-scale tree mortality

    Science.gov (United States)

    Yang, J.; Weisberg, P.; Dilts, T.

    2016-12-01

Climate warming can lead to large-scale drought-induced tree mortality events and greatly affect forest landscape resilience. Climatic water deficit (CWD) and its physiographic variations provide a key mechanism in driving landscape dynamics in response to climate change. Although CWD has been successfully applied in niche-based species distribution models, its application in process-based forest landscape models is still scarce. Here we present a framework incorporating the fine-scale influence of terrain on ecohydrology in modeling forest landscape dynamics. We integrated CWD with a forest landscape succession and disturbance model (LANDIS-II) to evaluate how tree species distribution might shift in response to different climate-fire scenarios across an elevation-aspect gradient in a semi-arid montane landscape of northeastern Nevada, USA. Our simulations indicated that drought-intolerant tree species such as quaking aspen could experience greatly reduced distributions in the more arid portions of their existing ranges due to water stress limitations under future climate warming scenarios. However, even at the most xeric portions of its range, aspen is likely to persist in certain environmental settings due to unique and often fine-scale combinations of resource availability, species interactions and disturbance regime. The modeling approach presented here allowed identification of these refugia. In addition, this approach helped quantify how the direction and magnitude of fire influences on species distribution would vary across topoclimatic gradients, and furthers our understanding of the role of environmental conditions, fire, and inter-specific competition in shaping potential responses of landscape resilience to climate change.
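    Climatic water deficit is commonly defined as the accumulated difference between potential and actual evapotranspiration; the sketch below shows that bookkeeping in its simplest monthly form. This standard formulation is assumed here for illustration and is not taken from the LANDIS-II integration described above.

```python
def annual_cwd(pet_monthly, aet_monthly):
    """Sum of monthly (PET - AET), in the same units as the inputs (e.g. mm)."""
    return sum(max(pet - aet, 0.0) for pet, aet in zip(pet_monthly, aet_monthly))

pet = [20, 25, 40, 60, 90, 120, 140, 130, 90, 50, 25, 18]   # illustrative mm/month
aet = [18, 22, 35, 50, 60, 55, 40, 35, 45, 40, 22, 16]
print(f"annual CWD: {annual_cwd(pet, aet):.0f} mm")
```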

  4. Overlap of Spoilage-Associated Microbiota between Meat and the Meat Processing Environment in Small-Scale and Large-Scale Retail Distributions.

    Science.gov (United States)

    Stellato, Giuseppina; La Storia, Antonietta; De Filippis, Francesca; Borriello, Giorgia; Villani, Francesco; Ercolini, Danilo

    2016-07-01

Microbial contamination in food processing plants can play a fundamental role in food quality and safety. The aims of this study were to learn more about the possible influence of the meat processing environment on initial fresh meat contamination and to investigate the differences between small-scale retail distribution (SD) and large-scale retail distribution (LD) facilities. Samples were collected from butcheries (n = 20), including LD (n = 10) and SD (n = 10) facilities, over two sampling campaigns. Samples included fresh beef and pork cuts and swab samples from the knife, the chopping board, and the butcher's hand. The microbiota of both meat samples and environmental swabs were very complex, including more than 800 operational taxonomic units (OTUs) collapsed at the species level. The 16S rRNA sequencing analysis showed that core microbiota were shared by 80% of the samples and included Pseudomonas spp., Streptococcus spp., Brochothrix spp., Psychrobacter spp., and Acinetobacter spp. Hierarchical clustering of the samples based on the microbiota showed a certain separation between meat and environmental samples, with higher levels of Proteobacteria in meat. In particular, levels of Pseudomonas and several Enterobacteriaceae members were significantly higher in meat samples, while Brochothrix, Staphylococcus, lactic acid bacteria, and Psychrobacter prevailed in environmental swab samples. Consistent clustering was also observed when metabolic activities were considered by predictive metagenomic analysis of the samples. An increase in carbohydrate metabolism was predicted for the environmental swabs and was consistently linked to Firmicutes, while increases in pathways related to amino acid and lipid metabolism were predicted for the meat samples and were positively correlated with Proteobacteria. Our results highlighted the importance of the processing environment in contributing to the initial microbial levels of meat and clearly showed that the type of retail

  5. Final Report: Process Models of the Equilibrium Size & State of Organic/Inorganic Aerosols for the Development of Large Scale Atmospheric Models & the Analysis of Field Data

    Energy Technology Data Exchange (ETDEWEB)

    Wexler, Anthony Stein [UC Davis; Clegg, Simon Leslie [UC Davis

    2013-10-26

Our work addressed the following elements of the Call for Proposals: (i) “to improve the theoretical representation of aerosol processes studied in ASP laboratory or field studies”, (ii) “to enhance the incorporation of aerosol process information into modules suitable for large-scale or global atmospheric models”, and (iii) “provide systematic experimental validation of process model predictions ... using data from targeted laboratory and field experiments”. Achievements to the end of 2012 are described in four previous reports, and include: new models of densities and surface tensions of pure (single solute) and mixed aqueous solutions of typical aerosol composition under all atmospheric conditions (0 to 100% RH and T > 150 K); inclusion of these models into the widely used Extended Aerosol Inorganics model (E-AIM, http://www.aim.env.uea.ac.uk/aim/aim.php); the addition of vapor pressure calculators for organic compounds to the E-AIM website; the ability to include user-defined organic compounds and/or lumped surrogates in gas/aerosol partitioning calculations; the development of new equations to represent the properties of soluble aerosols over the entire concentration range (using methods based upon adsorption isotherms, and derived using statistical mechanics), including systems at close to zero RH. These results are described in publications 1-6 at the end of this report, and on the “News” page of the E-AIM website (http://www.aim.env.uea.ac.uk/aim/info/news.html). During 2012 and 2013 we have collaborated in a combined observation and lab-based study of the water uptake of the organic component of atmospheric aerosols (PI Gannet Hallar, of the Desert Research Institute). The aerosol samples were analyzed using several complementary techniques (GC/MS, FT-ICR MS, and ion chromatography) to produce a very complete organic “speciation” including both polar and non-polar compounds. Hygroscopic growth factors of the samples were measured, and

  6. Evidence for geologic processes on comets

    Science.gov (United States)

    Sunshine, Jessica M.; Thomas, Nicolas; El-Maarry, Mohamed Ramy; Farnham, Tony L.

    2016-11-01

    Spacecraft missions have resolved the nuclei of six periodic comets and revealed a set of geologically intriguing and active small bodies. The shapes of these cometary nuclei are dominantly bilobate reflecting their formation from smaller cometesimals. Cometary surfaces include a diverse set of morphologies formed from a variety of mechanisms. Sublimation of ices, driven by the variable insolation over the time since each nucleus was perturbed into the inner Solar System, is a major process on comets and is likely responsible for quasi-circular depressions and ubiquitous layering. Sublimation from near-vertical walls is also seen to lead to undercutting and mass wasting. Fracturing has only been resolved on one comet but likely exists on all comets. There is also evidence for mass redistribution, where material lifted off the nucleus by subliming gases is deposited onto other surfaces. It is surprising that such sedimentary processes are significant in the microgravity environment of comets. There are many enigmatic features on cometary surfaces including tall spires, kilometer-scale flows, and various forms of depressions and pits. Furthermore, even after accounting for the differences in resolution and coverage, significant diversity in landforms among cometary surfaces clearly exists. Yet why certain landforms occur on some comets and not on others remains poorly understood. The exploration and understanding of geologic processes on comets is only beginning. These fascinating bodies will continue to provide a unique laboratory for examining common geologic processes under the uncommon conditions of very high porosity, very low strength, small particle sizes, and near-zero gravity.

  7. A nurse-led interdisciplinary primary care approach to prevent disability among community-dwelling frail older people: a large-scale process evaluation.

    Science.gov (United States)

    Metzelthin, Silke F; Daniëls, Ramon; van Rossum, Erik; Cox, Karen; Habets, Herbert; de Witte, Luc P; Kempen, Gertrudis I J M

    2013-09-01

    The complex healthcare needs of frail older people and their increased risk of disability require an integrated and proactive approach. In the Netherlands, an interdisciplinary primary care approach has recently been developed, involving individualized assessment and interventions (tailor-made care), case management and long-term follow-up. The practice nurse as part of a general practice is case manager and plans, organizes and monitors the care process and facilitates cooperation between professionals. The approach has shown positive indications regarding its feasibility in a small pilot, but its implementation on a large scale had not hitherto been investigated. To examine the extent to which the interdisciplinary care approach is implemented as planned and to gain insight into healthcare professionals' and frail older people's experiences regarding the benefits, burden, stimulating factors and barriers. A process evaluation was conducted using a mixed methods design. Six GP practices in the south of the Netherlands. Practice nurses (n=7), GPs (n=12), occupational therapists (n=6) and physical therapists (n=20) participated in the process evaluation. Furthermore, 194 community-dwelling frail older people (≥ 70 years) were included using the Groningen Frailty Indicator. People who were terminally ill, were confined to bed, had severe cognitive or psychological impairments or were unable to communicate in Dutch were excluded. Quantitative data (logbooks and evaluation forms) were collected from all the participating frail older people and 13 semi-structured interviews with a selection of them were conducted. In addition, data from healthcare professionals were collected through 12 semi-structured interviews and four focus group discussions. Although some parts of the protocol were insufficiently executed, healthcare professionals and frail older people were satisfied with the care approach, as it provided a useful structure for the delivery of geriatric primary

  8. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This allows us to deal also with the problems connected to the use of S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to the Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  9. Large-scale circuit simulation

    Science.gov (United States)

    Wei, Y. P.

    1982-12-01

The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0 with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) the modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in the simulation program PREMOS, which can be used as a design verification tool for MOS circuits.
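    For readers unfamiliar with the relaxation scheme referenced above, the following is a plain Gauss-Seidel iteration for a linear system; the modified variant, the latency criteria, and the timestep control used in PREMOS are not reproduced here.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the freshest values for components already updated this sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))   # approaches the solution of A x = b
```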

  10. Acoustic fluidization - A new geologic process

    Science.gov (United States)

    Melosh, H. J.

    1979-01-01

A number of geologic processes, particularly seismic faulting, impact crater slumping, and long runout landslides, require the failure of geologic materials under differential stresses much smaller than expected on the basis of conventional rock mechanics. This paper proposes that the low strengths apparent in these phenomena are due to a state of 'acoustic fluidization' induced by a transient strong acoustic wave field. The strain rates possible in such a field are evaluated, and it is shown that acoustically fluidized debris behaves as a Newtonian fluid with a viscosity in the range of 100,000 to 10,000,000 P (poise) for plausible conditions. Energy gains and losses in the acoustic field are discussed, and the mechanism is shown to be effective if internal dissipation in the field gives a Q of approximately 100 or greater. Whether such values for Q are realized is not known at present. However, acoustic fluidization provides a qualitatively correct description of the failure of rock debris under low differential stresses in the processes of faulting, crater slumping, and long runout landslides. Acoustic fluidization thus deserves serious consideration as a possible explanation of these phenomena.

  11. Very Large Scale Integration (VLSI).

    Science.gov (United States)

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  12. RE-THINKING ANALYSIS OF LARGE-SCALE ENGINEERING ACCIDENTS AND “ECO-GEOLOGICAL ENGINEERING”%由工程事故反思论及“生态地质工程学”

    Institute of Scientific and Technical Information of China (English)

    尚彦军; 傅冰骏; 蒋毅; 李坤

    2013-01-01

Against a background of global warming, increasingly frequent extreme weather, and severe and worsening natural disasters worldwide, China is rapidly launching many large projects such as hydropower and nuclear power stations while facing numerous problems and challenges, including ecological and environmental impacts and the risk management of safe operation. This is a major topic that urgently requires deep reflection and joint research. Arguments must be examined repeatedly from a national and even global perspective, in an all-round, multi-level, multi-disciplinary way rather than within a single sector or industry, so as to achieve harmonious regulation and sustainable development of the complex giant system of engineering, economy, society, environment, energy and resources. On the basis of a re-thinking analysis of numerous geological engineering accidents and related hazards, the authors consider it necessary, from the perspective of global change and sustainable development, to systematically summarize the lessons of past engineering failures and accidents and to establish the cross-disciplinary field of "Eco-Geological Engineering", thereby advancing theoretical research on, and engineering applications of, complex giant systems. Only in this way can the narrow safety concept confined to site-selection safety and rock and soil mass stability evaluation be transcended, and the long-term goals of engineering construction, ecological and environmental protection, and sustainable economic development be better achieved.

  13. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the design stage. (orig.)

  14. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...

  15. Characterisation of the large-scale production process of oyster mushroom (Pleurotus ostreatus) with the analysis of succession and spatial heterogeneity of lignocellulolytic enzyme activities.

    Science.gov (United States)

    Bánfi, Renáta; Pohner, Zsuzsanna; Kovács, József; Luzics, Szabina; Nagy, Adrienn; Dudás, Melinda; Tanos, Péter; Márialigeti, Károly; Vajna, Balázs

    2015-12-01

The pattern and variation of oyster mushroom (Pleurotus ostreatus) lignocellulolytic enzyme activity were investigated in a large-scale facility from spawning until the end of the second flush. In the first cultivation cycle, laccase production reached its peak during the vegetative growth stage, while manganese peroxidase showed the highest activity during fruiting body induction. Cellulose- and hemicellulose-degrading enzymes had maximal activity at the beginning of the flush and harvest stages. The enzyme activities showed similar tendencies among five different mushroom substrate blocks representing a production house. The spatial variability analysis of enzyme activities identified within-block heterogeneity as the main source of variation. This result was confirmed by the Combined Cluster and Discriminant Analysis (CCDA) method, which showed minimal among-block heterogeneity over the whole investigation period; furthermore, in the first cultivation cycle all blocks were grouped into one cluster.

  16. ELASTIC: A Large Scale Dynamic Tuning Environment

    Directory of Open Access Journals (Sweden)

    Andrea Martínez

    2014-01-01

The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness to improve the performance of large-scale parallel applications.

  17. Digital Line Graph - Large Scale

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Digital line graph (DLG) data are digital representations of cartographic information. DLGs of map features are converted to digital form from maps and related...

  18. Digital Line Graph - Large Scale

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Digital line graph (DLG) data are digital representations of cartographic information. DLGs of map features are converted to digital form from maps and related...

  19. Strings and large scale magnetohydrodynamics

    CERN Document Server

    Olesen, P

    1995-01-01

    From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.

  20. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  1. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  2. Large-scale structure of the Universe

    Energy Technology Data Exchange (ETDEWEB)

Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Ya.B. (Inst. Prikladnoj Matematiki, Moscow, USSR)

    1983-01-01

A review of the theory of the large-scale structure of the Universe is given, including the formation of clusters and superclusters of galaxies as well as large voids. Particular attention is paid to the theory of a neutrino-dominated Universe - the cosmological model in which neutrinos with a rest mass of several tens of eV dominate the mean density. The evolution of small perturbations is discussed, and estimates of microwave background radiation fluctuations are given for different angular scales. The adiabatic theory of structure formation, known as the 'pancake' scenario, and the subsequent fragmentation of the pancakes are described. This scenario is based on an approximate nonlinear theory of gravitational instability. Results of numerical experiments modeling the processes of large-scale structure formation are discussed.

  3. Large-scale structure of the universe

    Energy Technology Data Exchange (ETDEWEB)

Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Y.B.

    1983-01-01

    A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe: a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.

  4. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  5. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  6. 大型换热器SA765-Ⅱ管板的制造%Manufacturing Process of Large-scale Heat Exchanger SA765-Ⅱ Tube Sheet

    Institute of Scientific and Technical Information of China (English)

    施熔刚; 高强; 闫修平; 张丽丹

    2012-01-01

Tube sheet manufacturing is inherently difficult, and the -45 ℃ impact-toughness requirement for large-scale heat exchanger SA765-Ⅱ tube sheets makes it more difficult still. By tightening control of the smelting process, applying several compaction methods during forging, including KD-method drawing-out and pressing central convex and concave bosses, and using appropriate heat-treatment parameters, qualified tube sheets for large heat exchangers were produced.

  7. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet.

    Science.gov (United States)

    Qiao, Liang; Chen, Xin; Zhang, Ye; Zhang, Jingna; Wu, Yi; Li, Ying; Mo, Xuemei; Chen, Wei; Xie, Bing; Qiu, Mingguo

    2017-01-01

This study aimed to propose a pure web-based solution that lets users access large-scale 3D medical volumes anywhere with a good user experience and complete detail. A novel Master-Slave interaction mode was proposed, combining the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen to interactive requests from clients (Slave model) and to guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to enhance the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users rapidly access the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and it can be easily integrated into other telemedical systems.
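    The Master-Slave idea can be illustrated with a small, framework-free dispatcher: the server listens for normalized interaction requests from the lightweight client (the Slave) and maps each onto a rendering command on the full-resolution volume (the Master). The message names and the renderer interface below are hypothetical, introduced only to illustrate the pattern, not the paper's implementation.

```python
import json

class MasterVolumeRenderer:
    """Stand-in for the server-side (Master) renderer; hypothetical interface."""
    def rotate(self, dx, dy): ...
    def zoom(self, factor): ...
    def render_preview(self) -> bytes: return b""        # low-res image during interaction
    def render_full_quality(self) -> bytes: return b""    # final high-quality image

def handle_message(renderer, raw_message: str) -> bytes:
    """Dispatch one normalized interaction request from the Slave client."""
    msg = json.loads(raw_message)
    if msg["type"] == "rotate":
        renderer.rotate(msg["dx"], msg["dy"])
        return renderer.render_preview()
    if msg["type"] == "zoom":
        renderer.zoom(msg["factor"])
        return renderer.render_preview()
    if msg["type"] == "finalize":
        return renderer.render_full_quality()
    raise ValueError(f"unknown request type: {msg['type']}")
```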

  8. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet

    Directory of Open Access Journals (Sweden)

    Liang Qiao

    2017-01-01

This study aimed to propose a pure web-based solution that lets users access large-scale 3D medical volumes anywhere with a good user experience and complete detail. A novel Master-Slave interaction mode was proposed, combining the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen to interactive requests from clients (Slave model) and to guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to enhance the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users rapidly access the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and it can be easily integrated into other telemedical systems.

  9. Large-Scale Galaxy Bias

    CERN Document Server

    Desjacques, Vincent; Schmidt, Fabian

    2016-01-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...

  10. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  11. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  12. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  13. Large Scale Correlation Clustering Optimization

    CERN Document Server

    Bagon, Shai

    2011-01-01

Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional, which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) a theoretical analysis of the functional, and (ii) new optimization algorithms that can cope with large-scale problems (>100K variables) that are infeasible using existing methods. Our theoretical analysis provides a probabilistic generative interpretation for the functional and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well-known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications t...
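    The Correlation Clustering functional can be restated compactly: given signed affinities, a labeling pays for every positive affinity that is cut between clusters and for every negative affinity that is kept within a cluster. The sketch below computes that cost for a toy example; it is a didactic restatement, not the paper's large-scale optimizer.

```python
import numpy as np

def correlation_clustering_cost(W, labels):
    """Cost of a labeling under signed affinities W (symmetric, zero diagonal)."""
    n = len(labels)
    cost = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            same = labels[i] == labels[j]
            if W[i, j] > 0 and not same:      # positive affinity cut apart
                cost += W[i, j]
            elif W[i, j] < 0 and same:        # negative affinity kept together
                cost += -W[i, j]
    return cost

W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, -0.8],
              [-0.5, -0.8, 0.0]])
print(correlation_clustering_cost(W, [0, 0, 1]))   # low cost: respects the signs
print(correlation_clustering_cost(W, [0, 1, 1]))   # higher cost
```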

  14. Colloquium: Large scale simulations on GPU clusters

    Science.gov (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together but the efficiency obtained with cluster of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  15. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters at the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  16. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware fabrication and testing are complete. This paper will discuss: the potential applications of the technology; an overview of the as-built actuator design; problems that were uncovered during development testing; a review of the test data and an evaluation of the design's weaknesses; and areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  17. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  18. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  19. What is a large-scale dynamo?

    Science.gov (United States)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  20. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  1. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    Within the context of the gravitational instability hypothesis, the etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters seems increasingly tenuous. Are there any alternative, testable models that could account for such large-scale streaming of clusters?

  2. Improving Prediction Accuracy of a Rate-Based Model of an MEA-Based Carbon Capture Process for Large-Scale Commercial Deployment

    Directory of Open Access Journals (Sweden)

    Xiaobo Luo

    2017-04-01

    Full Text Available Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emissions from fossil-fired power plants and other energy-intensive processes. However, the increase in energy cost caused by equipping a plant with a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process through multistage model validation. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, covering a thermodynamic model, physical property calculations, and a process model at the pilot plant scale, over a wide range of pressures, temperatures, and CO2 loadings. The correlations for liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination for the thermodynamic model used in this study has higher accuracy than those of three other key publications, and the predictions of the process model agree well with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. A shorter packing height and a lower specific duty were achieved using this accurate model.
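
    Validation exercises of this kind usually quantify agreement with the pilot plant through a relative-deviation metric. The following sketch computes an average absolute relative deviation (AARD) for a handful of invented data points; it only shows the form of such a comparison, not the actual pilot-plant data or results of the paper.

        # Average absolute relative deviation between model predictions and measurements.
        measured  = [85.2, 88.7, 90.1, 86.4]   # e.g. CO2 capture level in %, invented
        predicted = [84.1, 89.5, 91.0, 85.8]   # corresponding model predictions, invented

        aard = sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)
        print(f"AARD = {100 * aard:.2f} %")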

  3. Large-scale scour of the sea floor and the effect of natural armouring processes, land reclamation Maasvlakte 2, port of Rotterdam

    Science.gov (United States)

    Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.

    2007-01-01

    Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced due to the incorporation of armouring processes, suggesting an approximately 30% decrease in terms of erosion area below the -20 m depth contour. © 2007 ASCE.

  4. Proceedings of USC (University of Southern California) Workshop on VLSI (Very Large Scale Integration) & Modern Signal Processing, held at Los Angeles, California on 1-3 November 1982

    Science.gov (United States)

    1983-11-15

  5. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Within the development of our urban landscapes, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper discusses new design (education) methods based on a relational concept of urban sites and design processes. Within this logic, site survey is not simply a pre-design activity, nor is it a question of comprehensive analysis; site survey is an integrated part of the design process. By means of active site... The method was used for setting the design brief in a large-scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...

  6. Simulation of large scale methanol synthesis process based on Aspen Plus

    Institute of Scientific and Technical Information of China (English)

    何一夫

    2013-01-01

    A model of a very large-scale methanol synthesis process is proposed and simulated with Aspen Plus. The simulation yields the crude methanol composition, the reactor outlet composition, the carbon efficiency and the recycle ratio, and shows that the recycle ratio has a strong influence on the methanol mole flow, the overall carbon efficiency, and the power of the recycle-gas and synthesis-gas compressors. The model can be used to simulate and predict the behaviour of large-scale methanol synthesis processes for flowsheet comparison and optimized design.
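
    The reported influence of the recycle ratio can be reproduced qualitatively with an elementary steady-state loop balance. The sketch below lumps all carbon into one species and assumes a fixed per-pass conversion and purge fraction; it is a toy balance, not the Aspen Plus flowsheet of the paper.

        # Steady-state balance around a synthesis loop with recycle and purge.
        def loop_balance(per_pass_conversion, purge_fraction, fresh_feed=1.0):
            x, p = per_pass_conversion, purge_fraction
            reactor_inlet = fresh_feed / (1.0 - (1.0 - p) * (1.0 - x))
            recycle = reactor_inlet - fresh_feed
            product = reactor_inlet * x            # carbon leaving as methanol
            return recycle / fresh_feed, product / fresh_feed

        for purge in (0.20, 0.10, 0.05, 0.02):
            recycle_ratio, carbon_efficiency = loop_balance(0.30, purge)
            print(f"purge {purge:4.2f}: recycle ratio {recycle_ratio:4.2f}, "
                  f"carbon efficiency {carbon_efficiency:5.3f}")

    Lower purge fractions raise the recycle ratio and the overall carbon efficiency, but the recycle-gas compressor then has to move correspondingly more gas.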

  7. Large-scale polyol synthesis of single-crystal bismuth nanowires and the role of NaOH in the synthesis process

    Energy Technology Data Exchange (ETDEWEB)

    Wang Yewu [Department of Physics, Zhejiang University, Hangzhou 310027 (China); Kim, Kwang S [Center for Superfunctional Materials, Department of Chemistry, Pohang University of Science and Technology, San 31, Hyojadong, Namgu, Pohang 790-784 (Korea, Republic of)], E-mail: yewuwang@zju.edu.cn, E-mail: kim@postech.ac.kr

    2008-07-02

    A modified polyol process is introduced for the production of single-crystal bismuth (Bi) nanowires with uniform diameters along each wire in relatively high yield. The appropriate amount of NaOH in the solution reacts with Bi³⁺ to form the water-soluble complexing ion BiO₂⁻. The tiny Bi nanoparticles formed at the initial stage could serve as seeds for the subsequent growth of Bi nanostructures in the refluxing process with the aid of PVP. We find that the amount of NaOH determines the reduction rate of BiO₂⁻, which influences the morphologies of the synthesized Bi nanostructures. High reduction rates result in nanowires and nanoparticles, while low reduction rates result in nanoplates.

  8. Large-scale polyol synthesis of single-crystal bismuth nanowires and the role of NaOH in the synthesis process.

    Science.gov (United States)

    Wang, Yewu; Kim, Kwang S

    2008-07-01

    A modified polyol process is introduced for the production of single-crystal bismuth (Bi) nanowires with uniform diameters along each wire in relatively high yield. The appropriate amount of NaOH in the solution reacts with Bi³⁺ to form the water-soluble complexing ion BiO₂⁻. The tiny Bi nanoparticles formed at the initial stage could serve as seeds for the subsequent growth of Bi nanostructures in the refluxing process with the aid of PVP. We find that the amount of NaOH determines the reduction rate of BiO₂⁻, which influences the morphologies of the synthesized Bi nanostructures. High reduction rates result in nanowires and nanoparticles, while low reduction rates result in nanoplates.

  9. Large-Scale Precise Printing of Ultrathin Sol-Gel Oxide Dielectrics for Directly Patterned Solution-Processed Metal Oxide Transistor Arrays.

    Science.gov (United States)

    Lee, Won-June; Park, Won-Tae; Park, Sungjun; Sung, Sujin; Noh, Yong-Young; Yoon, Myung-Han

    2015-09-09

    Ultrathin and dense metal oxide gate dielectric layers are reported by a simple printing of AlOx and HfOx sol-gel precursors. Large-area printed indium gallium zinc oxide (IGZO) thin-film transistor arrays, which exhibit mobilities >5 cm² V⁻¹ s⁻¹ and gate leakage currents of 10⁻⁹ A cm⁻² at a very low operating voltage of 2 V, are demonstrated by a simple continuous bar-coating process.

  10. Adaptive Hessian-based Non-stationary Gaussian Process Response Surface Method for Probability Density Approximation with Application to Bayesian Solution of Large-scale Inverse Problems

    Science.gov (United States)

    2011-10-01

  11. The synthesis of alternatives for the bioconversion of waste-monoethanolamine from large-scale CO₂-removal processes

    Energy Technology Data Exchange (ETDEWEB)

    Ohtaguchi, Kazuhisa; Yokoyama, Takahisa [Tokyo Inst. of Tech. (Japan). Dept. of Chemical Engineering

    1997-12-31

    The alternatives for bioconversion of monoethanolamine (MEA), which would appear in large quantities in the industrial effluent of the CO₂-removal processes of power companies, have been proposed by investigating the ability of some microorganisms to deaminate MEA. An evaluation of biotechnology, which includes the production from MEA of acetic acid and acetaldehyde with Escherichia coli and of formic and acetic acids with Clostridium formicoaceticum, confirms and extends our earlier remarks on the availability of ecotechnology for solving the above problem. (Author)

  12. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. A variety of measures are suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to recruit, support and retain the brightest heads for a project.

  13. Quantification of the relative role of land-surface processes and large-scale forcing in dynamic downscaling over the Tibetan Plateau

    Science.gov (United States)

    Gao, Yanhong; Xiao, Linhong; Chen, Deliang; Chen, Fei; Xu, Jianwei; Xu, Yu

    2017-03-01

    Dynamical downscaling modeling (DDM) is important for understanding regional climate change and developing local mitigation strategies, and the accuracy of DDM depends on the physical processes involved in the regional climate model as well as the forcing datasets derived from global models. This study investigates the relative roles of the land surface schemes and forcing datasets in DDM over the Tibetan Plateau (TP), a region of complex topography that is vulnerable to climate change. Three Weather Research and Forecasting model dynamical downscaling simulations, configured with two land surface schemes [Noah versus Noah with multiparameterization (Noah-MP)] and two forcing datasets, are performed over the period 1980-2005. The downscaled temperature and precipitation are evaluated against observations and inter-compared with regard to temporal trends, spatial distributions, and climatology. Results show that the temporal trends of temperature and precipitation are determined by the forcing datasets, and the forcing dataset with the smallest trend bias performs best. Relative to the forcing datasets, land surface processes play a more critical role in DDM over the TP due to the strong heating effects on the atmospheric circulation from a vast area at exceptionally high elevations. By changing the vertical profiles of temperature in the atmosphere and the horizontal patterns of moisture advection during the monsoon seasons, the land surface schemes significantly regulate the downscaled temperature and precipitation in terms of climatology and spatial patterns. This study emphasizes that the selection of land surface schemes is of crucial importance for successful DDM over the TP.

  14. Process-based hydrological modeling using SWAT: The effect of permafrost on water resources in the large-scale river catchment Kharaa / Mongolia

    Science.gov (United States)

    Hülsmann, L.; Geyer, T.; Karthe, D.; Priess, J.; Schweitzer, C.

    2012-04-01

    In this study, the Soil Water Assessment Tool (SWAT) was applied to obtain a better understanding of hydrological processes in the semi-arid catchment of the Kharaa River in Northern Mongolia. The transient, physically based model SWAT was set up using spatial datasets on soil, land use, climate, and stream network provided by the project "IWRM-MoMo" to (i) simulate the water balance components of the basin and (ii) identify potential gaps in the input data. We found that the SWAT model satisfactorily reflects the hydrological processes in the catchment and simulates river runoff as a response to strong rainfall events as well as to snow and ice melt. To obtain correct runoff volumes during spring, permafrost has to be considered. Permafrost-influenced soils constrain water flow in the frozen layer, so that percolation out of the active layer is hampered (Woo 2011). This effect is reproduced in SWAT by assigning an impermeable layer in the subsurface to the areas dominated by permafrost. The simulations indicate that in these regions groundwater resources are limited as a consequence of impermeable ground ice. In addition, groundwater recharge rates in the catchment are generally low due to high evaporation rates (80-90 %). Consequently, the base flow contribution is small. Further studies on the estimation of groundwater recharge rates should be carried out, since groundwater is an important resource for water supply. Model results indicate that the non-uniformity of the precipitation distribution was not sufficiently covered by the interpolated input data, so that precipitation and runoff volumes are partially over- or underestimated. Since precipitation defines the overall water availability in river catchments (Baumgartner 1982), additional climate records would considerably improve model outputs. As a consequence of large evapotranspiration losses, discharge as well as groundwater recharge estimates were identified to be highly sensitive to
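
    The permafrost treatment described above — an impermeable layer that blocks percolation out of the active layer — can be caricatured with a single soil-water bucket in which percolation is switched off while the ground is frozen. This is a purely illustrative sketch with invented parameters, not the SWAT source code.

        def simulate(rain, temperature, capacity=80.0, perc_coeff=0.1, storage=50.0):
            """Daily bucket: percolation is blocked when the ground is frozen (< 0 degC)."""
            runoff, percolation = [], []
            for p, t in zip(rain, temperature):
                storage += p
                perc = 0.0 if t < 0.0 else perc_coeff * storage
                storage -= perc
                excess = max(0.0, storage - capacity)   # saturation excess becomes runoff
                storage -= excess
                runoff.append(round(excess, 1))
                percolation.append(round(perc, 1))
            return runoff, percolation

        rain = [0, 5, 20, 10, 0, 15, 30, 0]       # mm/day, invented
        temp = [-5, -3, -1, 2, 4, 6, 8, 10]       # degC, invented
        q, g = simulate(rain, temp)
        print("runoff      :", q)
        print("percolation :", g)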

  15. Opposition-Based Particle Swarm Optimization for Solving Large Scale Optimization Problems on Graphic Process Unit

    Institute of Scientific and Technical Information of China (English)

    汪靖; 吴志健

    2011-01-01

    Based on an analysis of the traditional particle swarm optimization (PSO) algorithm, this paper designs, on the Graphics Processing Unit (GPU), a particle swarm algorithm that uses a generalized opposition-based learning (GOBL) strategy, and applies it to large-scale optimization problems. The generalized opposition-based learning strategy transforms the current solution space, increasing the probability of finding better solutions, while the massive thread parallelism of the GPU accelerates convergence. Comparative numerical experiments show that, for large-scale, high-dimensional optimization problems, the proposed algorithm achieves better accuracy and faster convergence than other intelligent algorithms.
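
    The generalized opposition-based learning step transforms a candidate x in the current dynamic interval [a, b] into k·(a + b) − x with a random k in [0, 1]. A minimal CPU-side NumPy sketch of one such generation step follows; the GPU kernels and full PSO update of the paper are omitted, and the sphere function stands in for the real objective.

        import numpy as np

        def generalized_opposition(population, rng):
            """Map each candidate x to k*(a+b) - x, with [a, b] the current
            per-dimension bounds of the population and k ~ U(0, 1)."""
            a = population.min(axis=0)
            b = population.max(axis=0)
            k = rng.random(population.shape)
            return np.clip(k * (a + b) - population, a, b)

        rng = np.random.default_rng(0)
        pop = rng.uniform(-100.0, 100.0, size=(8, 5))       # 8 particles, 5 dimensions
        candidates = np.vstack([pop, generalized_opposition(pop, rng)])
        fitness = np.sum(candidates ** 2, axis=1)           # sphere function as a stand-in
        pop = candidates[np.argsort(fitness)[:8]]           # keep the 8 fittest candidates
        print("best fitness after one GOBL step:", float(fitness.min()))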

  16. Processes of diversification and dispersion of rice yellow mottle virus inferred from large-scale and high-resolution phylogeographical studies.

    Science.gov (United States)

    Traore, O; Sorho, F; Pinel, A; Abubakar, Z; Banwo, O; Maley, J; Hebrard, E; Winter, S; Sere, Y; Konate, G; Fargette, D

    2005-06-01

    Phylogeography of Rice yellow mottle virus (RYMV) was reconstructed from the coat protein gene sequences of a selection of 173 isolates from the 14 countries of mainland Africa where the disease occurred and from the full sequences of 16 representative isolates. Genetic variation was linked to geographical distribution and not to host species, as isolates from wild rice always clustered with isolates from cultivated rice of the same region. Genetic variation was not associated with agro-ecology, viral interference or insect vector species. Distinct RYMV lineages occurred in East, Central and West Africa, although the Central African lineage included isolates from Benin, Togo and Niger at the west, adjacent to countries of the West African lineage. Genetic subdivision at finer geographical scales was apparent within lineages of Central and West Africa, although less pronounced than in East Africa. Physical obstacles, but also habitat fragmentation, as exemplified by the small low-lying island of Pemba offshore Tanzania mainland, explained strain localization. Three new highly divergent strains were found in eastern Tanzania. By contrast, intensive surveys in Cote d'Ivoire and Guinea at the west of Africa did not reveal any new variant. Altogether, this supported the view that the Eastern Arc Mountains biodiversity hotspot was the centre of origin of RYMV and that the virus spread subsequently from east to west across Africa. In West Africa, specific strains occurred in the Inner Niger Delta and suggested it was a secondary centre of diversification. Processes for diversification and dispersion of RYMV are proposed.

  17. Large scale evaluation of beta-decay rates of r-process nuclei with the inclusion of first-forbidden transitions

    CERN Document Server

    Marketin, T; Martínez-Pinedo, G

    2015-01-01

    R-process nucleosynthesis models rely, by necessity, on nuclear structure models for input. Particularly important are the beta-decay half-lives of neutron-rich nuclei. At present only a single systematic calculation exists that provides values for all relevant nuclei, making it difficult to test the sensitivity of nucleosynthesis models to this input. Additionally, even though there are indications that their contribution may be significant, the impact of first-forbidden transitions on decay rates has not been systematically studied within a consistent model. We use a fully self-consistent covariant density functional theory (CDFT) framework to provide a table of β-decay half-lives and β-delayed neutron emission probabilities, including first-forbidden transitions. We observe a significant contribution of the first-forbidden transitions to the total decay rate in nuclei far from the valley of stability. The experimental half-lives are in general well reproduced, both for even-even, odd-A and odd-odd n...

  18. Simulation of flow processes in a large scale karst system with an integrated catchment model (Mike She) - Identification of relevant parameters influencing spring discharge

    Science.gov (United States)

    Doummar, Joanna; Sauter, Martin; Geyer, Tobias

    2012-03-01

    In a complex environment such as a karst system, it is difficult to assess the relative contribution of the different components of the system to the hydrological system response, i.e. spring discharge. Not only is the saturated zone highly heterogeneous due to the presence of highly permeable conduits, but so are the recharge processes. The latter are composed of rapid recharge components through shafts and solution channels and diffuse matrix infiltration, generating a highly complex, spatially and temporally variable input signal. The presented study reveals the importance of the compartments vegetation, soils, saturated zone and unsaturated zone. Therefore, the entire water cycle in the catchment area of the Gallusquelle spring (Southwest Germany) is modelled over a period of 10 years using the integrated hydrological modelling system Mike She by DHI (2007). Sensitivity analyses show that a few individual parameters, varied within physically plausible ranges, play an important role in reshaping the recessions and peaks of the recharge functions and consequently the spring discharge. Vegetation parameters, especially the leaf area index (LAI) and the root depth, as well as empirical parameters in the relationship of Kristensen and Jensen, strongly influence evapotranspiration, the transpiration-to-evaporation ratio and recharge. In the unsaturated zone, the type of soil (mainly the hydraulic conductivity at saturation in the water retention and hydraulic conductivity curves) has an effect on the infiltration/evapotranspiration and recharge functions. Additionally, in the unsaturated karst, the saturated moisture content is a highly indicative parameter, as it significantly affects the peaks and recessions of the recharge curve. At the level of the saturated zone, the hydraulic conductivity of the matrix and of the highly conductive zone representing the conduit are dominant parameters influencing the spring response. Other intermediate significant

  19. Metoder for Modellering, Simulering og Regulering af Større Termiske Processer anvendt i Sukkerproduktion. Methods for Modelling, Simulation and Control of Large Scale Thermal Systems Applied in Sugar Production

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    The subject of this Ph.D. thesis is to investigate and develop methods for modelling, simulation and control applicable to large-scale thermal industrial plants. An ambition has been to evaluate the results in a physical process. Sugar production is well suited for the purpose. In collaboration with The Danish Sugar Corporation, two subsystems in the production have been chosen for application - the evaporation process and the crystallization process. In order to obtain information about the static and dynamic behaviour of the subsystems, field measurements have been performed. A realtime evaporator simulator has been developed. The simulator handles the normal working conditions relevant to control engineers. A non-linear dynamic model based on mass and energy balances has been developed. The model parameters have been adjusted to data measured on a Danish sugar plant. The simulator consists of a computer, a data terminal and an electric interface corresponding to the interface at the sugar plant.
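
    The kind of mass- and energy-balance model mentioned here can be illustrated, in drastically simplified steady-state form, for a single evaporator effect. The flows, concentrations and latent heat below are invented round numbers; the thesis model is dynamic and far more detailed.

        # Steady-state balances over one evaporator effect (illustrative only).
        feed_flow = 10.0       # kg/s thin juice, assumed
        feed_conc = 0.15       # solids mass fraction in the feed, assumed
        product_conc = 0.60    # target solids mass fraction, assumed
        latent_heat = 2.2e6    # J/kg, approximate heat of vaporisation

        product_flow = feed_flow * feed_conc / product_conc   # solids balance
        vapour_flow = feed_flow - product_flow                # overall mass balance
        heat_duty = vapour_flow * latent_heat                 # ignoring sensible heating

        print(f"product flow : {product_flow:.2f} kg/s")
        print(f"vapour flow  : {vapour_flow:.2f} kg/s")
        print(f"heat duty    : {heat_duty / 1e6:.1f} MW")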

  20. Materials specification VGB-R 109 and processing standards. First experiences of a large-scaled power plant for quality control purposes

    Energy Technology Data Exchange (ETDEWEB)

    Bareiss, J.; Nothdurft, R.; Kurtz, M. [EnBW Kraftwerke AG, Stuttgart (Germany); Helmrich, A.; Hartwig, R. [Alstom Power Systems GmbH, Stuttgart (Germany); Bantle, M. [TUEV SUED Industrie Service GmbH, Filderstadt (Germany)

    2009-07-01

    New boilers in Europe shall be manufactured by the Manufacturer, acting as contractor of the Customer, in accordance with the PED (European Pressure Equipment Directive 97/23/EC), which has applied as a legal directive since May 2002. According to the PED, the Manufacturer is the legal person responsible for calculation, design, and fabrication in the workshop and on site, including final inspection and the declaration of conformity, regardless of whether work packages are subcontracted by the Manufacturer. Based on the Customer contract, Module G shall be used as the procedure to prove conformity according to the PED. In principle, the PED specifies fundamental safety requirements with a main focus on materials and fabrication. For 600 °C/620 °C power plants with advanced steam conditions and considerably improved efficiency, new materials are necessary. In selecting the materials, attention has to be paid to the long-term design strength values, the manufacturability of the materials, and the corrosion and oxidation behaviour. In particular, correct fabrication according to the state of the art, closely linked to new findings about material behaviour for semi-finished products as well as for boiler components, has to be ensured. These new materials are largely not covered by the PED or the harmonized standard EN 12952. For this reason, and because the Customer contract, with a view to national legal directives, specifies the period of in-service inspection during the operational lifetime of the boiler, additional codes and standards shall be applied for boiler manufacturing. All these requirements shall be specified in the Manufacturer's Quality Engineering Documents. The presentation gives an overview of the fundamentals of the PED and describes the implementation of the requirements for materials, fabrication and inspections in the Quality Engineering Documents, both in the framework of the PED and of the Customer contract. As part of the design approval by the Notified Body (NoBo) according to PED Module G, the Quality Engineering Documents are fundamental

  1. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  2. Large-scale simulations of reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Katharina; /JILA, Boulder /Fermilab; Gnedin, Nickolay Y.; /Fermilab; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  3. Organised convection embedded in a large-scale flow

    Science.gov (United States)

    Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy

    2017-04-01

    In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when being exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.

  4. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  5. Large scale parallel document image processing

    NARCIS (Netherlands)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K

    2008-01-01

    Building a system which allows searching a very large database of document images requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience dealing with large data sets due to an increasing number of measurement instruments. The probl

  6. Very Large Scale Distributed Information Processing Systems

    Science.gov (United States)

    1991-09-27

  8. Making Predictions using Large Scale Gaussian Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — One of the key problems that arises in many areas is to estimate a potentially nonlinear function G(x, θ) given input and output samples ...
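
    A minimal Gaussian-process regression — squared-exponential kernel, noisy observations, posterior mean and variance at test points — conveys the flavour of the estimation problem stated here. The record gives no implementation details; everything below (kernel, hyperparameters, toy data) is assumed.

        import numpy as np

        def rbf(xa, xb, length=0.5, amplitude=1.0):
            d = xa[:, None] - xb[None, :]
            return amplitude * np.exp(-0.5 * (d / length) ** 2)

        rng = np.random.default_rng(1)
        x_train = rng.uniform(0.0, 5.0, 20)
        y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)  # noisy samples of G(x)
        x_test = np.linspace(0.0, 5.0, 100)

        noise_var = 0.1 ** 2
        K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
        K_s = rbf(x_train, x_test)
        K_ss = rbf(x_test, x_test)

        mean = K_s.T @ np.linalg.solve(K, y_train)    # posterior mean at the test points
        cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # posterior covariance
        std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
        print("largest predictive standard deviation:", round(float(std.max()), 3))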

  9. OPTIMIZATION OF LIQUID NITROGEN WASH PROCESS IN LARGE-SCALE AMMONIA PLANT

    Institute of Scientific and Technical Information of China (English)

    任多胜

    2011-01-01

    Based upon the main gas-refining schemes used in present domestic large-scale ammonia plants, and in combination with the several issues given particular consideration in the course of optimizing the liquid nitrogen wash process for this enterprise, the issues to be noted when selecting the process flow scheme are set out.

  10. Unfolding large-scale maps.

    Science.gov (United States)

    Jenkins, Glyn

    2003-12-01

    This is an account of the development and use of genetic maps, from humble beginnings at the hands of Thomas Hunt Morgan, to the sophistication of genome sequencing. The review charts the emergence of molecular marker maps exploiting DNA polymorphism, the renaissance of cytogenetics through the use of fluorescence in situ hybridisation, and the discovery and isolation of genes by map-based cloning. The historical significance of sequencing of DNA prefaces a section describing the sequencing of genomes, the ascendancy of particular model organisms, and the utility and limitations of comparative genomic and functional genomic approaches to further our understanding of the control of biological processes. Emphasis is given throughout the treatise as to how the structure and biological behaviour of the DNA molecule underpin the technological development and biological applications of maps.

  11. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  13. Metoder for Modellering, Simulering og Regulering af Større Termiske Processer anvendt i Sukkerproduktion. Methods for Modelling, Simulation and Control of Large Scale Thermal Systems Applied in Sugar Production

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    The subject of this Ph.D. thesis is to investigate and develop methods for modelling, simulation and control applicable to large-scale thermal industrial plants. An ambition has been to evaluate the results in a physical process. Sugar production is well suited for the purpose. In collaboration with The Danish Sugar Corporation, a realtime evaporator simulator has been developed. The simulator handles the normal working conditions relevant to control engineers. A non-linear dynamic model based on mass and energy balances has been developed. The model parameters have been adjusted to data measured on a Danish sugar plant. The simulator consists of a computer, a data terminal and an electric interface corresponding to the interface at the sugar plant. The simulator is operating in realtime and thus a realistic test of controllers is possible. The idiomatic control methodology has been investigated, developing a control concept for the evaporation...

  14. Process study and quality control for forming large-scale duplex fitting

    Institute of Scientific and Technical Information of China (English)

    王玲; 付冬雪; 郎利辉; 王少华; 杨春雷

    2013-01-01

    To meet the requirements of large flow rates inside ducts, large-scale duplex (tee) fitting structures with a high forming factor are widely used in the aircraft, aerospace and automotive fields. Fittings with a forming factor of 1 are difficult to form and suffer from high scrap rates, so many issues remain to be solved. Taking a duplex fitting with a forming factor of 1 as the experimental part, process parameters such as the elliptical hole dimensions and the die shape were designed and optimized, and forming experiments were carried out. The experiments show that conical die tooling improves the forming quality compared with the two-step spherical die tooling, and qualified parts were finally produced.

  15. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  16. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang

    2014-01-01

    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.
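
    An area attack of the kind treated in this book can be mocked up on a random geometric graph: every node inside a disc is removed and the surviving giant component is measured. The sketch below uses NetworkX with arbitrary parameters and a fixed attack centre; it only illustrates the idea of a geographically correlated attack, not the book's specific robustness measure.

        import networkx as nx

        n_nodes = 500
        g = nx.random_geometric_graph(n_nodes, 0.08, seed=42)   # nodes in the unit square
        pos = nx.get_node_attributes(g, "pos")

        def giant_fraction(graph):
            if graph.number_of_nodes() == 0:
                return 0.0
            return max(len(c) for c in nx.connected_components(graph)) / n_nodes

        cx, cy = 0.5, 0.5                      # centre of the attacked area, assumed
        for radius in (0.0, 0.1, 0.2, 0.3):
            hit = [v for v, (x, y) in pos.items()
                   if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
            damaged = g.copy()
            damaged.remove_nodes_from(hit)
            print(f"attack radius {radius:.1f}: removed {len(hit):3d} nodes, "
                  f"giant component {giant_fraction(damaged):.2f} of all nodes")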

  17. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  18. Database for volcanic processes and geology of Augustine Volcano, Alaska

    Science.gov (United States)

    McIntire, Jacqueline; Ramsey, David W.; Thoms, Evan; Waitt, Richard B.; Beget, James E.

    2012-01-01

    Augustine Island (volcano) in lower Cook Inlet, Alaska, has erupted repeatedly in late-Holocene and historical times. Eruptions typically beget high-energy volcanic processes. Most notable are bouldery debris avalanches containing immense angular clasts shed from summit domes. Coarse deposits of these avalanches form much of Augustine's lower flanks. This geologic map at 1:25,000 scale depicts these deposits and the processes that formed them.

  19. DECOVALEX III III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock. Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan (comp.) [JA Streamflow AB, Aelvsjoe (Sweden); Staub, Isabelle (comp.) [Golder Associates AB, Stockholm (Sweden); Knight, Les (comp.) [Nirex UK Ltd, Oxon (United Kingdom)

    2005-02-15

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of thermal (T), hydrological (H) and mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant in terms of solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims the BMT was set up as a numerical study of a large-scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with its implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty about how to upscale and uncertainty that arises from the upscaling process) and the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that individual teams not only produce numerical results but are forced to make their own judgements and to provide the proper justification for their conclusions based on their analysis. It should also be understood that the conclusions drawn will partly be specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability
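
    The 'equivalent continuum' idea referred to above is commonly illustrated with the parallel-plate (cubic) law: a set of parallel fractures of hydraulic aperture b and spacing s contributes an equivalent permeability of b³/(12 s) for flow parallel to the set. The sketch below applies this textbook relation with assumed apertures and spacing; it is not the upscaling procedure used by the project teams.

        # Equivalent continuum permeability for one set of parallel fractures (cubic law).
        def equivalent_permeability(aperture, spacing, matrix_k=1e-18):
            fracture_k = aperture ** 3 / (12.0 * spacing)   # m^2, flow parallel to the set
            return matrix_k + fracture_k

        for aperture_um in (10, 50, 100, 200):              # hydraulic aperture in micrometres
            b = aperture_um * 1e-6
            k_eq = equivalent_permeability(b, spacing=1.0)  # one fracture per metre, assumed
            print(f"aperture {aperture_um:4d} um -> k_eq ~ {k_eq:.2e} m^2")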

  1. GroFi: Large-scale fiber placement research facility

    Directory of Open Access Journals (Sweden)

    Christian Krombholz

    2016-03-01

    and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of new materials, technologies and processes on both small coupons and large components such as wing covers or fuselage skins.

  2. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  3. Geology

    Data.gov (United States)

    Kansas Data Access and Support Center — This database is an Arc/Info implementation of the 1:500,000 scale Geology Map of Kansas, M-23, 1991. This work was performed by the Automated Cartography section of...

  4. Online MapReduce Data Transmission Mechanism Supporting Large-Scale Stream Data Processing

    Institute of Scientific and Technical Information of China (English)

    魏晓辉; 李聪; 李洪亮; 李翔; 刘圆圆; 李丽娜; 庄园

    2015-01-01

    Against the characteristics of stream data - uneven size, dynamic flow and sudden bursts - we propose a scalable, dynamic MapReduce computation model that supports online processing of large-scale dynamic and static data. On this basis, we propose an online MapReduce data transmission mechanism and implement a prototype based on event-driven push and the Netty asynchronous communication framework. The work focuses on fast online transfer for large-scale distributed computing programs and dynamic data distribution, providing support for the dynamic MapReduce model. Experimental results show that, compared with the traditional socket pipeline method of the Hadoop Online system, this method greatly improves the efficiency of data transfer between jobs and significantly improves the real-time performance of large-scale stream data processing.
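
    The push-based, asynchronous transfer idea — mappers push intermediate records to the responsible reducer as soon as they are produced, instead of materialising them on disk first — can be sketched with Python's asyncio queues. The actual prototype described above is built on event-driven push over Netty; the snippet below is only a single-process analogy with invented keys and values.

        import asyncio

        async def mapper(map_id, out_queues):
            # push each intermediate record straight to the reducer chosen by key hash
            for i in range(5):
                key, value = f"k{i}", map_id * 10 + i
                await out_queues[hash(key) % len(out_queues)].put((key, value))
            for q in out_queues:          # signal completion to every reducer
                await q.put(None)

        async def reducer(red_id, in_queue, n_mappers):
            finished, received = 0, []
            while finished < n_mappers:
                item = await in_queue.get()
                if item is None:
                    finished += 1
                else:
                    received.append(item)
            print(f"reducer {red_id} received {len(received)} records")

        async def main():
            queues = [asyncio.Queue() for _ in range(2)]
            await asyncio.gather(*[mapper(m, queues) for m in range(3)],
                                 *[reducer(r, queues[r], n_mappers=3) for r in range(2)])

        asyncio.run(main())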

  5. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large-scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large-scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  6. Research on Alignment Process of Main Engine and Shafting of Large-scale Container Ship

    Institute of Scientific and Technical Information of China (English)

    叶峰; 梁小军

    2012-01-01

    The main engine and shafting are the most important parts of a ship's power plant. The alignment quality of the main engine and shafting determines the power performance of the ship and directly affects the operational reliability and service life of the engine-room machinery. Taking a 5,100 TEU container ship as an example, this article analyses and discusses the main engine and shafting alignment process of large-scale container ships, and summarizes the alignment parameters, measurement methods and tooling/equipment used during the alignment. The results have been applied to actual ship construction.

  7. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  8. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki

    2014-08-01

    Yttria-stabilized zirconia nanopowders were synthesized on a relatively large scale using the Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with a tetragonal structure were synthesized by the Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed a nearly spherical yttria-stabilized zirconia powder with a tetragonal crystal structure and a chemical purity of 99.1%, as determined by inductively coupled plasma optical emission spectroscopy, produced on a large scale.

  9. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin; Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas at the DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  11. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  12. Large-scale Complex IT Systems

    CERN Document Server

    Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

  13. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed, along with perspectives on the prerequisites for practical deployment of Topological Routing...

  15. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of docume... ... topics at par with a much larger case specific vocabulary...

  16. Infolab: a data processing project designed for the geology laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Valois, J.P.; Pilois, D.; Diederichs, F.; Levieil, J.L.; Loyer, H. (Societe Nationale Elf-Aquitaine, 64 - Pau (FR). Lab. de Geologie)

    1989-01-01

    A data processing project has been designed in the Elf Aquitaine geology laboratory. The three main aims were the management of the samples received and analyzed by the lab, data acquisition, and help with interpretation, particularly graphic outputs. The advantages of using a computer for these tasks are reviewed. The means involved include real-time process machines, required for automatic data acquisition, whereas a database system is dedicated to the management of studied samples and to the storage and exploitation of acquired data. Interrelations between data processing requirements, such as the scattering of data acquisition, and daily lab behaviour are discussed.

  17. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  18. Evaluating Large-Scale Interactive Radio Programmes

    Science.gov (United States)

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  19. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. The traditional workflow requested quotes from construction companies for construction works where the works to be

  20. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  1. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
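    As background to the sensitivity computations summarized above (standard textbook relations, not taken from the report itself), direct differentiation of the two problem classes mentioned gives, for linear statics $K(b)u = f(b)$ and for the mass-normalized free-vibration eigenproblem $(K - \lambda M)\phi = 0$:

        $$\frac{\partial u}{\partial b} = K^{-1}\Bigl(\frac{\partial f}{\partial b} - \frac{\partial K}{\partial b}\,u\Bigr), \qquad \frac{\partial \lambda}{\partial b} = \phi^{T}\Bigl(\frac{\partial K}{\partial b} - \lambda\,\frac{\partial M}{\partial b}\Bigr)\phi,$$

    where $K$, $M$, $f$, $u$, $\lambda$ and $\phi$ are the stiffness matrix, mass matrix, load vector, displacement vector, eigenvalue and mass-normalized eigenvector, and $b$ is a design variable.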

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont

  3. Ensemble methods for large scale inverse problems

    NARCIS (Netherlands)

    Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.

    2013-01-01

    Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between t

  4. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    ... which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  5. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  6. THE ROLE OF PORE PRESSURE IN DEFORMATION IN GEOLOGIC PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Narasimhan, T. N.; Houston, W. N.; Nur, A. M.

    1980-03-01

    A Penrose Conference entitled, "The Role of Pore Pressure in Deformation in Geologic Processes" was convened by the authors at San Diego, California between November 9 and 13, 1979. The conference was sponsored by the Geological Society of America. This report is a summary of the highlights of the issues discussed during the conference. In addition, this report also includes a topical reference list relating to the different subject areas relevant to pore pressure and deformation. The references were compiled from a list suggested by the participants and were available for consultation during the conference. Although the list is far from complete, it should prove to be a good starting point for one who is looking for key papers in the field.

  7. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other side, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the neutral gas is done via electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  8. The large-scale structure of vacuum

    CERN Document Server

    Albareti, F D; Maroto, A L

    2014-01-01

    The vacuum state in quantum field theory is known to exhibit an important number of fundamental physical features. In this work we explore the possibility that this state could also present a non-trivial space-time structure on large scales. In particular, we will show that by imposing the renormalized vacuum energy-momentum tensor to be conserved and compatible with cosmological observations, the vacuum energy of sufficiently heavy fields behaves at late times as non-relativistic matter rather than as a cosmological constant. In this limit, the vacuum state supports perturbations whose speed of sound is negligible and accordingly allows the growth of structures in the vacuum energy itself. This large-scale structure of vacuum could seed the formation of galaxies and clusters very much in the same way as cold dark matter does.

  9. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    ... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society, with the DDN project as its main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers...

  10. Quantum Signature of Cosmological Large Scale Structures

    CERN Document Server

    Capozziello, S; De Siena, S; Illuminati, F; Capozziello, Salvatore; Martino, Salvatore De; Siena, Silvio De; Illuminati, Fabrizio

    1998-01-01

    We demonstrate that to all large scale cosmological structures where gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant $h$. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as requested by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.

  11. Condition Monitoring of Large-Scale Facilities

    Science.gov (United States)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  12. Wireless Secrecy in Large-Scale Networks

    CERN Document Server

    Pinto, Pedro C; Win, Moe Z

    2011-01-01

    The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.

  13. Measuring Bulk Flows in Large Scale Surveys

    CERN Document Server

    Feldman, H A; Feldman, Hume A.; Watkins, Richard

    1993-01-01

    We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.
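    For context, the linear-theory variance of the bulk velocity measured through a survey window is conventionally written as (a standard expression consistent with the Kaiser formalism, not quoted from the paper):

        $$\sigma_v^{2} = \frac{H_0^{2}\,f^{2}(\Omega_m)}{2\pi^{2}}\int_0^{\infty} P(k)\,\bigl|\tilde W(k)\bigr|^{2}\,dk,$$

    where $P(k)$ is the matter power spectrum, $f(\Omega_m)$ is the linear growth rate and $\tilde W(k)$ is the Fourier transform of the survey window function; the CDM, MDM and IRAS-QDOT cases enter only through $P(k)$.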

  14. Statistical characteristics of Large Scale Structure

    OpenAIRE

    Demianski; Doroshkevich

    2002-01-01

    We investigate the mass functions of different elements of the Large Scale Structure -- walls, pancakes, filaments and clouds -- and the impact of transverse motions -- expansion and/or compression -- on their statistical characteristics. Using the Zel'dovich theory of gravitational instability we show that the mass functions of all structure elements are approximately the same and the mass of all elements is found to be concentrated near the corresponding mean mass. At high redshifts, both t...

  15. Topologies for large scale photovoltaic power plants

    OpenAIRE

    Cabrera Tobar, Ana; Bullich Massagué, Eduard; Aragüés Peñalba, Mònica; Gomis Bellmunt, Oriol

    2016-01-01

    © 2016 Elsevier Ltd. All rights reserved. The concern of increasing renewable energy penetration into the grid together with the reduction of prices of photovoltaic solar panels during the last decade have enabled the development of large scale solar power plants connected to the medium and high voltage grid. Photovoltaic generation components, the internal layout and the ac collection grid are being investigated for ensuring the best design, operation and control of these power plants. This ...

  16. Large-scale instabilities of helical flows

    CERN Document Server

    Cameron, Alexandre; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using 3D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth rate $\sigma$ of the most unstable modes (at small scale, low Reynolds number $Re$ and small wavenumber $q$) is found to scale differently in the presence or absence of the anisotropic kinetic alpha (AKA) effect. When an AKA effect is present, the scaling $\sigma \propto q\,Re$ predicted by the AKA effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for $Re \ll 1$ as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as $Re$ increases, the growth rate is found to saturate and most of the energy is found at small scales. In the absence of the AKA effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...

  17. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  18. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.
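    To make the final, selective-pressure step of such a pipeline concrete, the short Python sketch below runs PAML's codeML over a directory of per-family codon alignments and trees. The directory layout, file names and control-file settings are illustrative assumptions for this sketch, not VESPA's actual interface.

        # Minimal sketch of the final, selective-pressure step of a VESPA-like pipeline.
        # Assumes PAML's `codeml` binary is on PATH and that each gene family directory
        # holds a codon alignment (family.phy) and a tree (family.nwk); names are hypothetical.
        import pathlib
        import subprocess

        CTL_TEMPLATE = """\
        seqfile = {aln}
        treefile = {tree}
        outfile = {out}
        seqtype = 1
        model = 0
        NSsites = 0 1 2
        """

        def run_codeml(family_dir: pathlib.Path) -> None:
            """Write a codeml.ctl for one gene family and run codeml in that directory."""
            ctl = family_dir / "codeml.ctl"
            ctl.write_text(CTL_TEMPLATE.format(aln="family.phy",
                                               tree="family.nwk",
                                               out="codeml.out"))
            subprocess.run(["codeml", ctl.name], cwd=family_dir, check=True)

        if __name__ == "__main__":
            for family in sorted(pathlib.Path("families").iterdir()):
                if family.is_dir():
                    run_codeml(family)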

  19. A Simple and Cost Effective Process for Large-scale Production of Long Oligoribonucleotides

    Institute of Scientific and Technical Information of China (English)

    张平静; 李铁军; 周宋峰; 朱远源; 陈建新; 陆毅祥; 文锋

    2013-01-01

    The length and stable secondary structure of RNA molecules are the main obstacles in RNA synthesis because of current technological bottlenecks. A simple and cost-effective process for large-scale preparation and purification of long oligoribonucleotides with stable secondary structure is presented. Highly homogeneous RNAs are transcribed in vitro with T7 RNA polymerase using linear 2'-OMe-modified DNA templates, which are prepared by primer extension instead of PCR amplification or linearized plasmid DNA transcription in order to reduce contamination. The crude transcripts are then directly subjected to anion-exchange HPLC using Source 15Q to separate T7 RNA polymerase, unincorporated rNTPs, small abortive transcripts, endotoxin and DNA templates from the pure RNA product. The process requires neither tedious phenol/chloroform extraction nor denaturation of RNA, which makes it especially useful for large-scale RNA preparations.

  20. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    Science.gov (United States)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application; stringers are welded on a skin sheet in a t-joint configuration. The 0.6 mm thick parts are welded with a thin disc laser, and seam lengths of up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  1. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  2. Research Report on Large-scale Biomass Processing and Transmission Equipment and Technology

    Institute of Scientific and Technical Information of China (English)

    庄会永; 张雁茹; 马岩; 董世平

    2016-01-01

    Aimed at utilizing woody agricultural and forestry biomass on a large scale, the research focuses on several new biomass processing technologies, including manipulator feedstock-fetching control, free manipulator swerving, forcible feeding, efficient abrasion-resistant shredding, efficient stump-cutting and parabolic discharge, prevention of intertwisting and blocking, and stable, continuous and even feeding in multi-line feedstock transmission. The project has developed new equipment for processing and transmitting feedstock on a large scale that meets the requirements of feedstock characteristics and industrial utilization. This equipment has been applied in biomass demonstration projects consuming up to 200,000 tons of biomass per year and has promoted efficient utilization and sustainable development of agricultural resources. The work covers: (1) high-efficiency mobile shredding equipment: shredding woody biomass has traditionally suffered from a poor working environment, low efficiency, poor safety and high labour intensity; the project has overcome these problems and developed high-efficiency shredding equipment for harvesting woody biomass; (2) combine harvester equipment for energy plants: the project has mastered efficient stumping and harvesting, smooth feeding, chopping and parabolic discharge, developed a combine harvester, integrated efficient saw-disc cutting, forcible grabbing and transmitting, low-power-consumption shredding and a self-propelled operating chassis, and solved the difficulty of harvesting salix and Korshinsk peashrub; (3) woody biomass feedstock transmission: the work has emphasized prevention of intertwisting and blocking as well as even distribution and feeding.

  3. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6,400 ha were installed in the Poznan province; the average size of a system reached 95 ha. In 1989 there were 98 systems, and the area they served was more than 10,130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares over the period 1986-1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, the large-scale sprinkler systems underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs; there was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers, limitations of system solutions, supply difficulties and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A survey of the local area shows the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  4. Large-Scale PV Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  5. Conformal Anomaly and Large Scale Gravitational Coupling

    CERN Document Server

    Salehi, H

    2000-01-01

    We present a model in which the breakdown of the conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformally invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.

  6. Large Scale Quantum Simulations of Nuclear Pasta

    Science.gov (United States)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 ... pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  7. Large scale wind power penetration in Denmark

    DEFF Research Database (Denmark)

    Karnøe, Peter

    2013-01-01

    The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain the historical transformation of "net..." and how expertise evolves and contributes to the normalization and large-scale penetration of wind power in the electricity generating system. The analysis teaches us how technological paths become locked-in, but also indicates keys for locking them out.

  8. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some... L-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  9. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  10. Hiearchical Engine for Large Scale Infrastructure Simulation

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-15

    HELICS is a new open-source, cyber-physical-energy co-simulation framework for electric power systems. HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  11. Large-Scale Integrated Carbon Nanotube Gas Sensors

    OpenAIRE

    Kim, Joondong

    2012-01-01

    Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures provide a significantly large surface area at a fixed volume, which is an advantage for highly responsive gas sensors. However, the difficulty of the fabrication processes limits CNT gas sensors for large-scale production. We review a viable scheme for large-area application, including CNT gas sensor fabrication and the reaction mechanism, with a practical d...

  12. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth

    2015-09-01

    Full Text Available Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  13. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution subbottom profilers. This paper presents a strategy for cost-effective, large-scale mapping of previously undetected sediment-embedded sites and wrecks based on subbottom profiling with chirp systems. The mapping strategy described includes (a) definition of line spacing depending on the target; (b) ... It differs from those employed in several detailed studies of known wreck sites and from the way in which geologists map the sea floor and the geological column beneath it, and has been developed on the basis of extensive practical experience gained during the use of an off-the-shelf 2D chirp system.

  14. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    Catastrophic storms and storm surges induce rapid and substantial changes along sandy barrier coasts, potentially causing severe environmental and economic damage. Coastal impacts of modern storms are associated with washover deposition, dune erosion, barrier breaching, and coastline and shoreface erosion. Little is however known about the impact of major storms and their post-storm coastal recovery on the geologic and historic evolution of barrier systems. We apply high-resolution optically stimulated luminescence dating on a barrier system in the Wadden Sea (Denmark) and show that 5 to 8 meters of marine sand accumulated in an aggrading-prograding shoal and on a prograding shoreface during and within 3 to 4 decades ("healing phase") after the most destructive storm documented for the Wadden Sea. Furthermore, we show that the impact of this storm caused large-scale shoreline erosion and barrier...

  15. Series Design of Large-Scale NC Machine Tool

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi

    2007-01-01

    Product system design is a mature concept in western developed countries and was applied in the war industry during the last century. Up until now, however, functional combination has remained the main method of product system design in China, so in terms of the concepts of product generation and product interaction we are in a weak position relative to the requirements of global markets. Today, the idea of serial product design has attracted much attention in the design field, and the definition of product generations and their parameters has already become the standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized by the precise exercise of object design, placing the concept of platform establishment firmly within serial product design. The essence of serial product design is demonstrated by the design process of a large-scale NC machine tool.

  16. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    Research on the evaluation of large-scale public sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance. The impact of such reforms is considerable. Furthermore, they change the context in which evaluations of other and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We compare the evaluation process (focus and purpose), the evaluators and the organization of the evaluation as well as the utilization of the evaluation results. The analysis uncovers several significant findings, including how the initial organization of the evaluation shows a strong impact on the utilization...

  17. Distant galaxy clusters in the XMM Large Scale Structure survey

    CERN Document Server

    Willis, J P; Bremer, M N; Pierre, M; Adami, C; Ilbert, O; Maughan, B; Maurogordato, S; Pacaud, F; Valtchanov, I; Chiappetti, L; Thanjavur, K; Gwyn, S; Stanway, E R; Winkworth, C

    2012-01-01

    (Abridged) Distant galaxy clusters provide important tests of the growth of large scale structure in addition to highlighting the process of galaxy evolution in a consistently defined environment at large look back time. We present a sample of 22 distant (z>0.8) galaxy clusters and cluster candidates selected from the 9 deg2 footprint of the overlapping X-ray Multi Mirror (XMM) Large Scale Structure (LSS), CFHTLS Wide and Spitzer SWIRE surveys. Clusters are selected as extended X-ray sources with an accompanying overdensity of galaxies displaying optical to mid-infrared photometry consistent with z>0.8. Nine clusters have confirmed spectroscopic redshifts in the interval 0.80.8 clusters.

  18. Simulation of Large Scale Automation Process Control Process Optimization Scheduling Model

    Institute of Scientific and Technical Information of China (English)

    任铭

    2015-01-01

    Traditional process control and job scheduling methods use task scheduling based on multi-threaded cluster clustering, which performs poorly for the multi-user, multi-task scheduling of large-scale automation process control. An optimized scheduling model for large-scale automation process control, based on clustered extraction of principal-feature dominating sets, is proposed. A large-scale automation process control model is constructed, an optimal control objective function is built, and the optimized scheduling model of the control process is improved; finally, performance is verified through simulation experiments. The simulation results show that the algorithm can optimize the automation process control flow and has important application value in improving production efficiency and optimizing industrial automation process control.

  19. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Full Text Available Abstract Background: Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results: We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions: Our parallel algorithm and architecture accelerate large-scale MSA with reconfigurable computing and allow researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
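    The profile-reduction idea described in the Results above can be illustrated with a small, purely didactic Python sketch: each pre-aligned subgroup is collapsed into per-column residue frequencies, and two such profiles are then globally aligned with Needleman-Wunsch. The alphabet, scores and gap penalty are arbitrary assumptions, and the FPGA kernel itself is of course not reproduced here.

        # Sketch of "reduce aligned subgroups to profiles, then align the profiles".
        from collections import Counter

        ALPHABET = "ACGT-"

        def to_profile(aligned_seqs):
            """Collapse a group of already-aligned sequences into per-column frequency vectors."""
            length = len(aligned_seqs[0])
            profile = []
            for col in range(length):
                counts = Counter(seq[col] for seq in aligned_seqs)
                total = sum(counts.values())
                profile.append({c: counts.get(c, 0) / total for c in ALPHABET})
            return profile

        def column_score(p, q, match=1.0, mismatch=-1.0):
            """Expected pairwise score between two profile columns (gaps ignored)."""
            score = 0.0
            for a in ALPHABET:
                for b in ALPHABET:
                    if a == "-" or b == "-":
                        continue
                    score += p[a] * q[b] * (match if a == b else mismatch)
            return score

        def align_profiles(p, q, gap=-2.0):
            """Global (Needleman-Wunsch) alignment of two profiles; returns the score."""
            n, m = len(p), len(q)
            dp = [[0.0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                dp[i][0] = dp[i - 1][0] + gap
            for j in range(1, m + 1):
                dp[0][j] = dp[0][j - 1] + gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    dp[i][j] = max(dp[i - 1][j - 1] + column_score(p[i - 1], q[j - 1]),
                                   dp[i - 1][j] + gap,
                                   dp[i][j - 1] + gap)
            return dp[n][m]

        if __name__ == "__main__":
            group_a = ["ACG-T", "ACGAT"]
            group_b = ["A-GAT", "ACGAT"]
            print(align_profiles(to_profile(group_a), to_profile(group_b)))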

  20. Clumps in large scale relativistic jets

    CERN Document Server

    Tavecchio, F; Celotti, A

    2003-01-01

    The relatively intense X-ray emission from large scale (tens to hundreds of kpc) jets discovered with Chandra likely implies that jets (at least in powerful quasars) are still relativistic at those distances from the active nucleus. In this case the emission is due to Compton scattering off seed photons provided by the Cosmic Microwave Background; on one hand this permits magnetic fields close to equipartition with the emitting particles, and on the other hand it minimizes the requirements on the total power carried by the jet. The emission comes from compact (kpc scale) knots, and we here investigate what we can predict about the possible emission between the bright knots. This is motivated by the fact that bulk relativistic motion makes Compton scattering off the CMB photons efficient even when electrons are cold or mildly relativistic in the comoving frame. This implies relatively long cooling times, dominated by adiabatic losses. Therefore the relativistically moving plasma can emit, by Compton sc...

  1. Large-scale parametric survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or a few thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
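    As a minimal illustration of the cyclic coordinate descent idea mentioned above (a toy ridge-penalized exponential survival model, not the authors' implementation), each coefficient is updated in turn with a one-dimensional Newton step on the regularized negative log-likelihood:

        # Cyclic coordinate descent for a ridge-penalized exponential survival model.
        # Times t_i, event indicators d_i, covariates X; purely illustrative.
        import numpy as np

        def fit_exponential_ccd(X, t, d, lam=1.0, sweeps=50):
            """Minimize sum_i [t_i*exp(x_i@beta) - d_i*x_i@beta] + lam/2*||beta||^2
            by Newton updates applied to one coordinate at a time."""
            n, p = X.shape
            beta = np.zeros(p)
            eta = X @ beta                      # linear predictor, kept up to date
            for _ in range(sweeps):
                for j in range(p):
                    mu = t * np.exp(eta)        # expected events under the current fit
                    grad = X[:, j] @ (mu - d) + lam * beta[j]
                    hess = (X[:, j] ** 2) @ mu + lam
                    step = grad / hess
                    beta[j] -= step
                    eta -= X[:, j] * step       # incremental predictor update
            return beta

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(200, 5))
            true_beta = np.array([0.5, -0.5, 0.0, 0.3, 0.0])
            t = rng.exponential(1.0 / np.exp(X @ true_beta))
            d = np.ones(200)                    # all events observed in this toy example
            print(fit_exponential_ccd(X, t, d, lam=0.1))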

  2. Curvature constraints from Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-01-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...
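    As a reminder of how a Fisher-matrix forecast of this kind works mechanically (a generic numerical sketch with a placeholder observable, fiducial values and errors, not the paper's relativistic number counts):

        # Generic Fisher-matrix forecast: numerical derivatives of a model observable
        # with respect to parameters, then F_ab = sum_i dO_i/da * dO_i/db / sigma_i^2.
        import numpy as np

        def observable(params, x):
            a, omega_k = params
            return a * x / (1.0 + 0.1 * omega_k * x ** 2)   # toy placeholder model

        def fisher_matrix(params, x, sigma, eps=1e-4):
            params = np.asarray(params, dtype=float)
            derivs = []
            for i in range(len(params)):
                up, dn = params.copy(), params.copy()
                up[i] += eps
                dn[i] -= eps
                derivs.append((observable(up, x) - observable(dn, x)) / (2 * eps))
            derivs = np.array(derivs)
            return (derivs / sigma) @ (derivs / sigma).T

        if __name__ == "__main__":
            x = np.linspace(0.1, 2.0, 50)          # e.g. redshift bins
            sigma = 0.05 * np.ones_like(x)         # assumed measurement errors
            F = fisher_matrix([1.0, 0.0], x, sigma)
            cov = np.linalg.inv(F)
            print("marginalized sigma(Omega_K) ~", np.sqrt(cov[1, 1]))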

  3. Large-Scale Tides in General Relativity

    CERN Document Server

    Ip, Hiu Yan

    2016-01-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...

  4. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  5. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
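    As a rough worked example of the focal lengths involved (a back-of-the-envelope estimate under a thin plano-convex approximation, not a figure from the paper), the lensmaker's equation with the refractive index of water $n \approx 1.33$ gives

        $$\frac{1}{f} = (n-1)\Bigl(\frac{1}{R_1} - \frac{1}{R_2}\Bigr) \;\;\longrightarrow\;\; f = \frac{R_1}{n-1} \approx 3R_1 \quad (R_2 \to \infty),$$

    so a foil sagging with a radius of curvature of roughly 1 m would concentrate sunlight about 3 m below the lens. The actual focal length depends on the shape taken by the loaded foil, which is why the authors combine raytracing with a finite element simulation of the foil shape.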

  6. Principles of computer processing of Landsat data for geologic applications

    Science.gov (United States)

    Taranik, James V.

    1978-01-01

    The main objectives of computer processing of Landsat data for geologic applications are to improve display of image data to the analyst or to facilitate evaluation of the multispectral characteristics of the data. Interpretations of the data are made from enhanced and classified data by an analyst trained in geology. Image enhancements involve adjustments of brightness values for individual picture elements. Image classification involves determination of the brightness values of picture elements for a particular cover type. Histograms are used to display the range and frequency of occurrence of brightness values. Landsat-1 and -2 data are preprocessed at Goddard Space Flight Center (GSFC) to adjust for the detector response of the multispectral scanner (MSS). Adjustments are applied to minimize the effects of striping and to correct for bad-data lines, bad line segments and lost individual pixel data. Because illumination conditions and landscape characteristics vary considerably and detector response changes with time, the radiometric adjustments applied at GSFC are seldom perfect and some detector striping remains in Landsat data. Rotation of the Earth under the satellite and movements of the satellite platform introduce geometric distortions in the data that must also be compensated for if image data are to be correctly displayed to the data analyst. Adjustments to Landsat data are made to compensate for variable solar illumination and for atmospheric effects. Geometric registration of Landsat data involves determination of the spatial location of a pixel in the output image and the determination of a new value for the pixel. The general objective of image enhancement is to optimize display of the data to the analyst. Contrast enhancements are employed to expand the range of brightness values in Landsat data so that the data can be efficiently recorded in a manner desired by the analyst. Spatial frequency enhancements are designed to enhance boundaries between features

  7. Supporting large-scale computational science

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R

    1998-10-01

A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: 1. Several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets. 2. Several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme. 3. Several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. 4. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  8. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme, which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high-scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: one out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  9. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
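    As a hedged, dense toy illustration of a single SQP iteration (solving the full KKT system with the exact Hessian of the Lagrangian, rather than the sparse reduced-Hessian quasi-Newton machinery the abstract describes), the problem and all numbers below are invented for illustration.

```python
import numpy as np

# Toy equality-constrained problem:
#   minimize   f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to c(x) = x1^2 + x2^2 - 1 = 0
f_grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
c_val  = lambda x: x[0]**2 + x[1]**2 - 1.0
c_jac  = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])

x, lam = np.array([0.5, 0.5]), 0.0
for it in range(15):
    g, A, c = f_grad(x), c_jac(x), c_val(x)
    H = (2.0 + 2.0 * lam) * np.eye(2)      # exact Hessian of the Lagrangian for this problem
    # Assemble and solve the KKT system for the step p and the new multiplier.
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-g, [-c]])
    sol = np.linalg.solve(K, rhs)
    p, lam = sol[:2], sol[2]
    x = x + p
    if np.linalg.norm(p) < 1e-12:
        break

print("solution:", x, "multiplier:", lam)   # expect x close to (1, 2)/sqrt(5)
```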

  10. Research on the evolution model and deformation mechanisms of Baishuihe landslide based on analyzing geologic process of slope

    Science.gov (United States)

    Zhang, S.; Tang, H.; Cai, Y.; Tan, Q.

    2016-12-01

    The landslide is a result of both inner and exterior geologic agents, and inner ones always have significant influences on the susceptibility of geologic bodies to the exterior ones. However, current researches focus more on impacts of exterior factors, such as precipitation and reservoir water, than that of geologic process. Baishuihe landslide, located on the south bank of Yangtze River and 56km upstream from the Three Gorges Project, was taken as the study subject with the in-situ investigation and exploration carried out for the first step. After the spatial analysis using the 3D model of topography built by ArcGIS (Fig.1), geologic characteristics of the slope that lies in a certain range near the Baishuihe landslide on the same bank were investigated for further insights into geologic process of the slope, with help of the geological map and structure outline map. Baishuihe landslide developed on the north limb of Baifuping anticline, a dip slope on the southwest margin of Zigui basin. The eastern and western boundaries are both ridges and in the middle a distinct slide depression is in process of deforming. Evolutionary process of Baishuihe landslide includes three steps below. 1) Emergence of Baifuping anticline leaded to interbedded dislocation, tension cracks and joint fractures in bedrocks. 2) Weathering continuously weakened strength of soft interlayers in the Shazhenxi Formation (T3s). 3) Rock slide caused by neotectonics happened on a large scale along the weak layers and joint planes, forming initial Baishuihe landslide. Although the landslide has undergone reconstruction for a long time, it could still be divided clearly into two parts, namely a) the rock landslide at the back half (south) and b) the debris landslide at the front half (north). a) The deformation mechanism for the rock landslide is believed to be deterioration in strength of weak bedding planes due to precipitation and free face caused by human activities or river incision. b

  11. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  12. Systematic Literature Review of Agile Scalability for Large Scale Projects

    Directory of Open Access Journals (Sweden)

    Hina saeeda

    2015-09-01

Among new methods, "agile" has emerged as the top approach in the software industry for developing software. In different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size and distributed environments. Agile has proved successful in small and medium-sized projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail and to identify and highlight their restrictions for large projects with the help of a systematic literature review. The systematic literature review seeks answers to the following research questions: 1) How can agile approaches be made scalable and adoptable for large projects? 2) What existing methods, approaches, frameworks and practices support the agile process in large-scale projects? 3) What are the limitations of existing agile approaches, methods, frameworks and practices with reference to large-scale projects? This study will identify the current research problems of agile scalability for large projects by giving a detailed literature review of the identified problems and of the existing work providing solutions to them, and will determine the limitations of the existing work in covering the identified problems in agile scalability. All results gathered will be summarized statistically; based on these findings, remedial work will be planned in future for handling the identified limitations of agile approaches for large-scale projects.

  13. A visualization framework for large-scale virtual astronomy

    Science.gov (United States)

    Fu, Chi-Wing

Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale. The graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and state-of-the-art-advancing rendering techniques that can be transferred to practice in digital planetarium systems.

  14. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
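    As a hedged CPU-side sketch of the per-pixel linked-list idea described above (the thesis builds the structure on the GPU; here a plain Python dictionary of lists stands in for it, and the pathline segments are synthetic), the example shows how segments binned per pixel can be filtered and re-queried without touching the original flow data.

```python
import numpy as np

rng = np.random.default_rng(1)
WIDTH, HEIGHT = 64, 64

# Synthetic pathline segments: each has a projected pixel, a depth, and a scalar
# attribute (e.g. velocity magnitude) to filter or color-code on later.
segments = [
    {"pixel": (int(rng.integers(WIDTH)), int(rng.integers(HEIGHT))),
     "depth": float(rng.random()),
     "velocity": float(rng.uniform(0.0, 10.0))}
    for _ in range(5000)
]

# Build the per-pixel "linked list": one list of segments per covered pixel,
# sorted by depth so front-to-back traversal is cheap at exploration time.
per_pixel = {}
for seg in segments:
    per_pixel.setdefault(seg["pixel"], []).append(seg)
for seg_list in per_pixel.values():
    seg_list.sort(key=lambda s: s["depth"])

def frontmost_fast_segment(pixel, v_min):
    """Return the nearest segment at this pixel whose velocity exceeds v_min."""
    for seg in per_pixel.get(pixel, []):
        if seg["velocity"] >= v_min:
            return seg
    return None

# Interactive-style query: re-filter the view without re-reading the raw data.
print(frontmost_fast_segment((10, 20), v_min=5.0))
```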

  15. Large-scale magnetic topologies of early M dwarfs

    CERN Document Server

    Donati, JF; Petit, P; Delfosse, X; Forveille, T; Aurière, M; Cabanac, R; Dintrans, B; Fares, R; Gastine, T; Jardine, MM; Lignières, F; Paletou, F; Velez, J Ramirez; Théado, S

    2008-01-01

We present here additional results of a spectropolarimetric survey of a small sample of stars ranging from spectral type M0 to M8, aimed at investigating observationally how dynamo processes operate in stars on both sides of the full convection threshold (spectral type M4). The present paper focuses on early M stars (M0-M3), i.e. above the full convection threshold. Applying tomographic imaging techniques to time series of rotationally modulated circularly polarised profiles collected with the NARVAL spectropolarimeter, we determine the rotation period and reconstruct the large-scale magnetic topologies of 6 early M dwarfs. We find that early-M stars preferentially host large-scale fields with dominantly toroidal and non-axisymmetric poloidal configurations, along with significant differential rotation (and long-term variability); only the lowest-mass star of our subsample is found to host an almost fully poloidal, mainly axisymmetric large-scale field resembling those found in mid-M dwarfs. This abrupt chan...

  16. Multitree Algorithms for Large-Scale Astrostatistics

    Science.gov (United States)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
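    As a hedged single-tree illustration of the all-nearest-neighbors (AllNN) primitive mentioned above (a k-d tree query via SciPy, rather than the dual-tree algorithms the chapter develops), with synthetic points standing in for catalog objects:

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic 3-D "catalog" of object positions.
rng = np.random.default_rng(42)
points = rng.uniform(0.0, 100.0, size=(10000, 3))

tree = cKDTree(points)
# k=2 because the closest hit for each point is the point itself (distance 0);
# column 1 then holds each object's nearest *other* neighbor.
dists, idx = tree.query(points, k=2)
nn_dist, nn_index = dists[:, 1], idx[:, 1]

print("mean nearest-neighbor separation:", nn_dist.mean())
print("object 0 is closest to object", nn_index[0])
```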

  17. Teaching Introductory Geology by a Paradigm, Process and Product Approach

    Science.gov (United States)

    Reams, M.

    2008-12-01

    Students in introductory geology courses can easily become lost in the minutiae of terms and seemingly random ideas and theories. One way to avoid this and provide a holistic picture of each major subject area in a beginning course is to introduce, at the start of each section, the ruling paradigm, the processes, and resultant products. By use of these three Ps: paradigm, processes, and products, students have a reasonably complete picture of the subject area. If they knew nothing more than this simple construct, they would have an excellent perspective of the subject area. This provides a jumping off point for the instructor to develop the details. The three Ps can make course construction much more straightforward and complete. Students benefit since they have a clearer idea of what the subject is about and its importance. Retention may be improved and carryover to advanced courses may be aided. For faculty, the use of these three P's makes organizing a course more straightforward. Additionally, the instructor benefits include: 1. The main points are clearly stated, thus avoiding the problem of not covering the essential concepts. 2. The course topics hold together, pedagogically. There is significant opportunity for continuity of thought. 3. An outline is developed that is easily analyzed for holes or omissions. 4. A course emerges with a balance of topics, permitting appropriate time to be devoted to significant subject matter. 5. If a course is shared between faculty or passes from one faculty to another by semester or quarter, there is greater assurance that topics and concepts everyone agrees on can be adequately covered. 6. There is less guesswork involved in planning a course. New faculty have an approach that will make sense and allow them to feel less awash and more focused. In summary, taking time to construct a course utilizing the important paradigms, processes, and products can provide significant benefits to the instructor and the student. Material

  18. Cold flows and large scale tides

    Science.gov (United States)

    van de Weygaert, R.; Hoffman, Y.

    1999-01-01

Within the context of the general cosmological setting it has remained puzzling that the local Universe is a relatively cold environment, in the sense that small-scale peculiar velocities are relatively small. Indeed, this has long figured as an important argument for the Universe having a low Ω, or, if the Universe were to have a high Ω, for the existence of a substantial bias between the galaxy and the matter distribution. Here we investigate the dynamical impact of neighbouring matter concentrations on local small-scale characteristics of cosmic flows. While regions in which huge nearby matter clumps dominate the local dynamics and kinematics may experience a faster collapse owing to the corresponding tidal influence, that influence will also slow down or even prevent a thorough mixing and virialization of the collapsing region. By means of N-body simulations starting from constrained realizations of regions of modest density surrounded by more pronounced massive structures, we have explored the extent to which large-scale tidal fields may indeed suppress the 'heating' of the small-scale cosmic velocities. Among other measures, we quantify the resulting cosmic flows through the cosmic Mach number. This allows us to draw conclusions about the validity of estimates of global cosmological parameters from local cosmic phenomena and the necessity to take into account the structure and distribution of mass in the local Universe.
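    As a hedged numerical illustration of the cosmic Mach number used above (the ratio of the bulk-flow speed to the small-scale velocity dispersion within a region; the peculiar velocities below are synthetic, not simulation output):

```python
import numpy as np

def cosmic_mach_number(velocities):
    """Mach number of a region: bulk-flow speed over residual velocity dispersion."""
    v_bulk = velocities.mean(axis=0)                  # bulk flow of the region
    residuals = velocities - v_bulk                   # small-scale random motions
    sigma = np.sqrt((residuals**2).sum(axis=1).mean())
    return np.linalg.norm(v_bulk) / sigma

# Synthetic peculiar velocities (km/s): a 300 km/s bulk flow plus cold random motions.
rng = np.random.default_rng(7)
v = rng.normal(0.0, 80.0, size=(2000, 3)) + np.array([300.0, 0.0, 0.0])

print("cosmic Mach number:", cosmic_mach_number(v))   # large value -> dynamically 'cold'
```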

  19. Large scale mechanical metamaterials as seismic shields

    Science.gov (United States)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including for the first time numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
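    As a hedged one-dimensional illustration of the band-gap mechanism that such metamaterials exploit (a textbook diatomic mass-spring chain, not the paper's 3D phononic or locally resonant structures; all parameter values are invented):

```python
import numpy as np

# 1D diatomic chain: masses m1, m2 coupled by springs of stiffness C, cell size a.
m1, m2, C, a = 4.0, 1.0, 1.0, 1.0
q = np.linspace(1e-6, np.pi / a, 400)            # wavevectors in the first Brillouin zone

s = 1.0 / m1 + 1.0 / m2
root = np.sqrt(s**2 - 4.0 * np.sin(q * a / 2.0)**2 / (m1 * m2))
w_acoustic = np.sqrt(C * (s - root))             # lower (acoustic) branch
w_optical  = np.sqrt(C * (s + root))             # upper (optical) branch

# No propagating solutions exist between the two branch edges: that frequency
# interval is the band gap a periodic structure uses to block incoming waves.
print("band gap: %.3f to %.3f (rad/s)" % (w_acoustic.max(), w_optical.min()))
```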

  20. Large-scale autostereoscopic outdoor display

    Science.gov (United States)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  1. Management of large-scale multimedia conferencing

    Science.gov (United States)

    Cidon, Israel; Nachum, Youval

    1998-12-01

The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. A well-organized and human-friendly multimedia conference management should utilize its limited resources efficiently and fairly, as well as take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants' preferences may even lead to a conference environment that is more pleasant and more effective than a similar face-to-face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources which are addressed in this paper are the bandwidth (conference network capacity), time (participants' scheduling) and limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated in terms of Quality of Service requirements and the fairness criteria.

  2. Large-scale wind turbine structures

    Science.gov (United States)

    Spera, David A.

    1988-01-01

The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  3. Large-scale tides in general relativity

    Science.gov (United States)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  4. Large scale probabilistic available bandwidth estimation

    CERN Document Server

    Thouin, Frederic; Rabbat, Michael

    2010-01-01

    The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...

  5. Gravitational redshifts from large-scale structure

    CERN Document Server

    Croft, Rupert A C

    2013-01-01

    The recent measurement of the gravitational redshifts of galaxies in galaxy clusters by Wojtak et al. has opened a new observational window on dark matter and modified gravity. By stacking clusters this determination effectively used the line of sight distortion of the cross-correlation function of massive galaxies and lower mass galaxies to estimate the gravitational redshift profile of clusters out to 4 Mpc/h. Here we use a halo model of clustering to predict the distortion due to gravitational redshifts of the cross-correlation function on scales from 1 - 100 Mpc/h. We compare our predictions to simulations and use the simulations to make mock catalogues relevant to current and future galaxy redshift surveys. Without formulating an optimal estimator, we find that the full BOSS survey should be able to detect gravitational redshifts from large-scale structure at the ~4 sigma level. Upcoming redshift surveys will greatly increase the number of galaxies useable in such studies and the BigBOSS and Euclid exper...

  6. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  7. Large-scale clustering of cosmic voids

    Science.gov (United States)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h⁻¹, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  8. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
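    As a hedged small-scale sketch of the ILU-preconditioned GMRES setup discussed above (SciPy on a 2D Poisson matrix, standing in for the parallel Schwarz/ILU machinery of the mantle code; problem size and drop tolerance are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small 2D Poisson (5-point stencil) system as a stand-in for the FEM operator.
n = 64
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()        # size n^2 x n^2
b = np.ones(A.shape[0])

# Incomplete-LU factorization wrapped as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50, maxiter=500)

print("info:", info, "(0 means converged)")
print("residual norm:", np.linalg.norm(b - A @ x))
```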

  9. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  10. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with non-uniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation times, instability and other shortcomings, 3D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3D interpretation with high efficiency and precision for geological and ore bodies and to obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and

  11. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  12. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  13. Using Large Scale Test Results for Pedagogical Purposes

    DEFF Research Database (Denmark)

    Dolin, Jens

    2012-01-01

    The use and influence of large scale tests (LST), both national and international, has increased dramatically within the last decade. This process has revealed a tension between the legitimate need for information about the performance of the educational system and teachers to inform policy...... wash back effects known from other research but gave additionally some insight in teachers’ attitudes towards LSTs. To account for these findings results from another research project - the Validation of PISA – will be included. This project analyzed how PISA has influenced the Danish educational...

  14. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
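    As a hedged dense toy illustration of the adjoint-sensitivity idea referenced above (a single scalar output of a small linear system, not the transient PDE cases in the report; the system and its parameterization are invented), the adjoint result is checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A0 = np.eye(n) * 4.0 + rng.normal(0.0, 0.1, (n, n))
A1 = rng.normal(0.0, 0.1, (n, n))                  # dA/dp for this parameterization
b = rng.normal(size=n)
c = rng.normal(size=n)

def objective(p):
    u = np.linalg.solve(A0 + p * A1, b)            # state equation A(p) u = b
    return c @ u                                   # scalar quantity of interest J = c^T u

p = 0.3
u = np.linalg.solve(A0 + p * A1, b)
lam = np.linalg.solve((A0 + p * A1).T, c)          # adjoint equation A(p)^T lam = c
dJdp_adjoint = -lam @ (A1 @ u)                     # dJ/dp = -lam^T (dA/dp) u

eps = 1e-6
dJdp_fd = (objective(p + eps) - objective(p - eps)) / (2.0 * eps)
print("adjoint:", dJdp_adjoint, " finite difference:", dJdp_fd)
```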

  16. Large Scale Flame Spread Environmental Characterization Testing

    Science.gov (United States)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  17. Synchronization of coupled large-scale Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fangfei, E-mail: li-fangfei@163.com [Department of Mathematics, East China University of Science and Technology, No. 130, Meilong Road, Shanghai, Shanghai 200237 (China)

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  18. Synchronization of coupled large-scale Boolean networks

    Science.gov (United States)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
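    As a hedged toy simulation of the complete-synchronization notion discussed above (a small drive-response pair of Boolean networks iterated directly, rather than the algebraic aggregation machinery of the paper; the update rules are invented):

```python
import itertools

# Drive network X and response network Y, each with two Boolean state variables.
# Complete synchronization means Y(t) equals X(t) after some transient, from any start.
def step_drive(x):
    return (x[1], x[0] ^ x[1])            # invented update rule for the drive network

def step_response(y, x):
    # Coupled response: uses the drive signal x2 together with its own state y1.
    return (x[1], y[0] ^ x[1])

def synchronizes(x0, y0, transient=5, horizon=10):
    x, y = x0, y0
    for t in range(transient + horizon):
        x, y = step_drive(x), step_response(y, x)   # simultaneous update (old x used for both)
        if t >= transient and x != y:
            return False
    return True

# Check every pair of initial states of the coupled pair.
states = list(itertools.product([0, 1], repeat=2))
ok = all(synchronizes(x0, y0) for x0 in states for y0 in states)
print("complete synchronization from all initial states:", ok)
```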

  19. Advances and visions in large-scale hydrological modelling: findings from the 11th Workshop on Large-Scale Hydrological Modelling

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-10-01

Large-scale hydrological modelling has become increasingly widespread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum for the German-speaking community to discuss recent developments and achievements in this research area. In this paper we present the findings from the 2007 workshop, which focused on advances and visions in large-scale hydrological modelling. We identify the state of the art, difficulties and research perspectives with respect to the themes "sensitivity of model results", "integrated modelling" and "coupling of processes in hydrosphere, atmosphere and biosphere". Some achievements in large-scale hydrological modelling during the last ten years are presented together with a selection of remaining challenges for the future.

  20. Principles of the final forging process in modeling and simulating a large-scale head

    Institute of Scientific and Technical Information of China (English)

    谢安; 马庆贤

    2011-01-01

Influencing factors and principles in manufacturing a large-scale head were studied by combined physical modeling and numerical simulation of the final forging stage, establishing how the anvil shape affects deformation and crack formation. The results show that enlarging the corner radius of the upper anvil reduces the compaction effect at the core of the billet, while a concave lower anvil effectively reduces cracks on the bottom surface of the forging and internal inclusion-type cracks. These findings support the formulation of a reasonable final forging process.

  1. Development of dangerous geological processes in the Hankaisky Region of Primorskiy Krai (Russian Far East)

    Institute of Scientific and Technical Information of China (English)

    Tatiana V. Selivanova

    2006-01-01

The Hankaisky Region is the most densely populated and economically developed part of Primorskiy Krai, which promotes the development of dangerous geological processes there. The article considers the causes of the formation and intensive development in the Hankaisky Region of the following dangerous geological processes: lateral, wind and ground erosion, mudflows, floods, taluses, bogging, slope wash, karst, and ground rebound.

  2. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  3. Innovation cycle for small- and large-scale change.

    Science.gov (United States)

    Scott, Kathy; Steinbinder, Amy

    2009-01-01

    In today's complex healthcare systems, transformation requires 2 major efforts: (1) a fundamental change in the underlying beliefs and assumptions that perpetuate the current system and (2) a fundamental redesign of the multiplicity of diverse and complex subsystems that result in unpredictable aggregate behavior and outcomes. Through an Intelligent Complex Adaptive System framework combined with an innovation process, a transformation process and cycle was created for a large healthcare system that resulted in both small- and large-scale changes. This process not only challenges the underlying beliefs and assumptions but also creates new possibilities and prototypes for care delivery through a change-management process that is inclusive and honors the contributions of the entire team.

  4. CRS SEISMIC PROCESSING OF A GEOLOGICAL COMPLEX AREA

    Directory of Open Access Journals (Sweden)

    Montes Luis A.

    2009-12-01

    Full Text Available We applied the NMO and CRS (Common Reflector Surface) approaches to a complex geological area in order to compare their performance in obtaining enhanced images. Unlike NMO, CRS does not depend on a previous time velocity model and uses a hyperbolic equation to estimate 2D travel times through three parameters (the emergence angle of the normal ray and the NIP and N wavefront curvatures). To obtain the image, a solution provided by a coherence analysis algorithm was used.
    A low quality Colombian seismic line acquired in the Middle Magdalena basin was used, where a foothill geological area is characterized by a thrusting fault. The CRS provided an enhanced image which allowed a new geological interpretation that is best constrained with other regional observations.
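
    For reference, the hyperbolic traveltime approximation used in 2D zero-offset CRS stacking is commonly written as follows (standard notation from the CRS literature; this is the generic form of the operator, not necessarily the exact parameterization used by the authors):

        t^2(x_m, h) = \left[ t_0 + \frac{2\sin\alpha}{v_0}(x_m - x_0) \right]^2
                    + \frac{2 t_0 \cos^2\alpha}{v_0}\left[ K_N (x_m - x_0)^2 + K_{NIP}\, h^2 \right]

    where x_m is the midpoint, h the half-offset, x_0 the central point, v_0 the near-surface velocity, alpha the emergence angle of the normal ray, and K_NIP, K_N the curvatures of the normal-incidence-point and normal wavefronts. The three parameters (alpha, K_NIP, K_N) are the ones estimated by the coherence analysis mentioned above.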

  5. An establishment on the hazard mitigation system of large scale landslides for Zengwen reservoir watershed management in Taiwan

    Science.gov (United States)

    Tsai, Kuang-Jung; Lee, Ming-Hsi; Chen, Yie-Ruey; Huang, Meng-Hsuan; Yu, Chia-Ching

    2016-04-01

    Extremely heavy rainfall, with an accumulated amount of more than 2900 mm within a continuous 3-day event, occurred in southern Taiwan and has been recognized as a serious natural hazard caused by Typhoon Morakot in August 2009. Very destructive large scale landslides and debris flows were induced by this heavy rainfall event. According to the satellite image processing and monitoring project conducted by the Soil & Water Conservation Bureau after Typhoon Morakot, more than 10904 landslide sites with a total sliding area of 18113 ha were identified. In addition, field investigations of all landslide areas were carried out in this research on the basis of disaster type, scale and location in relation to the topographic conditions, colluvium soil characteristics, bedrock formation and geological structure after the Morakot hazard. The mechanism, characteristics and behavior of these large scale landslides combined with debris flow disasters are analyzed and investigated in order to clarify the interaction of the factors concerned above and to identify the disaster extent of rainfall-induced landslides during the period of this study. In order to reduce the disaster risk of large scale landslides and debris flows, an adaptation strategy for the hazard mitigation system should be set up as soon as possible, taking into consideration slope land conservation, landslide control countermeasure planning, disaster database establishment, environmental impact analysis and disaster risk assessment, respectively. As a result, this 3-year research has focused on field investigation using GPS/GIS/RS integration, studies of the mechanism and behavior of rainfall-induced landslide occurrence, and the establishment of a disaster database and hazard mitigation system. In fact, this project has become an important issue of serious concern to the government and the people living in Taiwan. Hopefully, all results from this research can be used as guidance for disaster prevention and

  6. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the Central Dubna basin, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created which shows paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure. On this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene. The boundaries of the restored paleolakes were determined from the thickness and spatial confinement of decay ooze (sapropel). 5. On the basis of the landscape-edaphic method, the actual paleolandscape reconstructions for the main periods of the Holocene were performed. In the reconstructions of the original, indigenous flora we relied on data from palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the

  7. Brief Mental Training Reorganizes Large-Scale Brain Networks.

    Science.gov (United States)

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A

    2017-01-01

    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance through changing brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h training in total). Classifiers were trained on measures of functional connectivity in this fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest that may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness and sensory integration and reward processing.
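
    As an illustration of the kind of MVPA decoding described above, a minimal sketch in Python is given below: it trains a linear classifier on vectorized functional-connectivity matrices to separate pre- from post-training scans under cross-validation. The data, variable names and feature choices are hypothetical stand-ins; the original study's preprocessing and pipeline are not reproduced here.

        # Minimal MVPA-style sketch: classify pre- vs post-training scans from
        # functional-connectivity features (illustrative only; synthetic data).
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_subjects, n_rois = 20, 90

        def upper_triangle(mat):
            """Vectorize the upper triangle of a connectivity matrix (no diagonal)."""
            i, j = np.triu_indices(mat.shape[0], k=1)
            return mat[i, j]

        # Hypothetical stand-in for pre/post correlation matrices of each subject.
        pre = [np.corrcoef(rng.normal(size=(n_rois, 200))) for _ in range(n_subjects)]
        post = [np.corrcoef(rng.normal(size=(n_rois, 200))) for _ in range(n_subjects)]

        X = np.array([upper_triangle(m) for m in pre + post])
        y = np.array([0] * n_subjects + [1] * n_subjects)   # 0 = pre, 1 = post

        # Note: a real analysis would group folds by subject; plain 5-fold CV is
        # used here only to keep the sketch short.
        clf = LinearSVC(C=1.0, dual=False)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"cross-validated accuracy: {acc:.2f}")   # ~chance level on random data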

  8. Evaluating large scale orthophotos derived from high resolution satellite imagery

    Science.gov (United States)

    Ioannou, Maria Teresa; Georgopoulos, Andreas

    2013-08-01

    For the purposes of a research project, the compilation of the archaeological and environmental digital map of the island of Antiparos, the production of updated large scale orthophotos was required. Hence suitable stereoscopic high resolution satellite imagery was acquired. Two GeoEye-1 stereopairs were enough to cover this small island of the Cyclades complex in the central Aegean. For the orientation of the two stereopairs numerous ground control points were determined using GPS observations. Some of them would also serve as check points. The images were processed using commercial stereophotogrammetric software suitable for processing satellite stereoscopic imagery. The results of the orientations were evaluated and the digital terrain model was produced using automated and manual procedures. The DTM was checked both internally and externally by comparison with other available DTMs. In this paper the procedures for producing the desired orthophotography are critically presented and the final result is compared and evaluated for its accuracy, completeness and efficiency. The final product is also compared against the orthophotography produced by Ktimatologio S.A. using aerial images in 2007. The orthophotography produced has been evaluated metrically using the available check points, while a qualitative evaluation has also been performed. The results are presented and a critical approach to the usability of satellite imagery for the production of large scale orthophotos is attempted.

  10. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))

    2011-04-15

    Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009 large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as so far no real LPV projects have been processed, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the lay-out of LPV plants. Under the Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant and in the range of 50 - 250 km2. In terms of energy harvest, PV plants will under Danish conditions exhibit an overall efficiency of about 10% in conversion of the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark of 33-35 TWh is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines. A number of LPV plant scenarios have been investigated in detail based on real commercial offers and
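
    The quoted ground-area figure can be checked with a back-of-the-envelope estimate; assuming a Danish horizontal irradiation of roughly 1000 kWh/m^2 per year (an assumed round number, not taken from the report) and the 10% overall conversion efficiency mentioned above:

        A \approx \frac{E_{demand}}{\eta \, H} = \frac{34\ \mathrm{TWh/yr}}{0.10 \times 1000\ \mathrm{kWh/(m^2\,yr)}} = 3.4 \times 10^{8}\ \mathrm{m^2} \approx 340\ \mathrm{km^2},

    which is consistent with the order of magnitude of 300 km2 quoted above.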

  11. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    Full Text Available The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
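
    The network measures named above (characteristic path length, clustering coefficient) can be computed from a thresholded functional-connectivity matrix with standard graph tools; the sketch below, using NetworkX on a hypothetical correlation matrix with an arbitrary threshold, illustrates the idea but is not the authors' pipeline.

        # Sketch: small-world metrics from a thresholded connectivity matrix (illustrative).
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        ts = rng.normal(size=(90, 300))           # hypothetical ROI time series
        corr = np.corrcoef(ts)                    # functional connectivity matrix
        np.fill_diagonal(corr, 0.0)

        adj = (np.abs(corr) > 0.1).astype(int)    # arbitrary threshold, for illustration only
        G = nx.from_numpy_array(adj)

        # Restrict to the largest connected component so path length is well defined.
        giant = G.subgraph(max(nx.connected_components(G), key=len))

        print("clustering coefficient:", nx.average_clustering(giant))
        print("characteristic path length:", nx.average_shortest_path_length(giant))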

  12. Maestro: an orchestration framework for large-scale WSN simulations.

    Science.gov (United States)

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  13. High Speed Networking and Large-scale Simulation in Geodynamics

    Science.gov (United States)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of a single facility currently available at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results in the meeting.

  14. Large Scale and Performance tests of the ATLAS Online Software

    Institute of Scientific and Technical Information of China (English)

    Alexandrov; H. Wolters; et al.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration which is getting closer to the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the aspects of the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. This paper presents a brief overview of the online system structure, its components and the large scale integration tests and their results.

  15. Large-scale magnetic topologies of mid-M dwarfs

    CERN Document Server

    Morin, J; Petit, P; Delfosse, X; Forveille, T; Albert, L; Aurière, M; Cabanac, R; Dintrans, B; Fares, R; Gastine, T; Jardine, M M; Lignières, F; Paletou, F; Velez, J C Ramirez; Théado, S

    2008-01-01

    We present in this paper the first results of a spectropolarimetric analysis of a small sample (~ 20) of active stars ranging from spectral type M0 to M8, which are either fully-convective or possess a very small radiative core. This study aims at providing new constraints on dynamo processes in fully-convective stars. The present paper focuses on 5 stars of spectral type ~M4, i.e. with masses close to the full convection threshold (~ 0.35 Msun), and with short rotational periods. Tomographic imaging techniques allow us to reconstruct the surface magnetic topologies from the rotationally modulated time-series of circularly polarised profiles. We find that all stars host mainly axisymmetric large-scale poloidal fields. Three stars were observed at two different epochs separated by ~1 yr; we find the magnetic topologies to be globally stable on this timescale. We also provide an accurate estimation of the rotational period of all stars, thus allowing us to start studying how rotation impacts the large-scale magn...

  16. Exploring Cloud Computing for Large-scale Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  17. Large scale petroleum reservoir simulation and parallel preconditioning algorithms research

    Institute of Scientific and Technical Information of China (English)

    SUN Jiachang; CAO Jianwen

    2004-01-01

    Solving large scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key part is how to choose an effective parallel preconditioner. Properly choosing a good preconditioner goes beyond the purely algebraic field. An integrated preconditioner should include such components as the physical background, the characteristics of the PDE mathematical model, the nonlinear solving method, the linear solving algorithm, domain decomposition and parallel computation. We first discuss some parallel preconditioning techniques, and then construct an integrated preconditioner, which is based on large scale distributed parallel processing and is oriented to reservoir simulation. The infrastructure of this preconditioner contains such well-known preconditioning construction techniques as coarse grid correction, constraint residual correction and subspace projection correction. We essentially use multi-step means to integrate a total of eight types of preconditioning components in order to obtain the final preconditioner. Million-grid-cell scale industrial reservoir data were tested on domestic high performance computers. Numerical statistics and analyses show that this preconditioner achieves satisfying parallel efficiency and acceleration effect.
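
    The flavour of such a composite preconditioner can be illustrated with a small serial sketch: an ILU smoother combined additively with a coarse-grid correction, wrapped as a preconditioner for GMRES. This is a generic two-level construction in SciPy on a toy 2D Poisson problem, not the authors' reservoir-oriented integrated preconditioner.

        # Sketch: two-level additive preconditioner (ILU + coarse-grid correction) for GMRES.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        nx, ny = 64, 64                              # toy 2D Poisson problem, 4096 unknowns
        n = nx * ny
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx))
        A = (sp.kron(sp.identity(ny), T) + sp.kron(T, sp.identity(nx))).tocsc()
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-3)           # fine-level approximate solve

        # Piecewise-constant aggregation: every 16 fine unknowns form one coarse unknown.
        agg = 16
        P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // agg)),
                          shape=(n, n // agg))
        coarse_solve = spla.factorized((P.T @ A @ P).tocsc())   # Galerkin coarse operator

        def apply_prec(r):
            """M^{-1} r = ILU smoothing + coarse-grid correction (additive)."""
            return ilu.solve(r) + P @ coarse_solve(P.T @ r)

        M = spla.LinearOperator((n, n), matvec=apply_prec)
        x, info = spla.gmres(A, b, M=M, maxiter=200)
        print("converged" if info == 0 else f"gmres info = {info}")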

  18. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    When it comes to large scale mapping of limited areas, especially for cultural heritage sites, things become critical. Optical and non-optical sensors are developed to such sizes and weights that they can be lifted by such platforms, like e.g. LiDAR units. At the same time there is an increase in emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras and the advancement of algorithms, in conjunction with the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of the UAS technologies available today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is also conducted and presented. As test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations and using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  20. The combustion behavior of large scale lithium titanate battery

    Science.gov (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems have always been a major obstacle to the large scale application of lithium batteries. However, knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated to fire. The flame size variation is depicted to analyze the combustion behavior directly. The mass loss rate, temperature and heat release rate are used to analyze the underlying reactions in more depth. Based on the observed phenomena, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden ejections of smoke. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112–121°C on the anode tab and 139–147°C on the upper surface for all cells. However, the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  1. Probability analysis of geological processes: a useful tool for the safety assessment of radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    D'Alessandro, M.; Murray, C.N.; Bertozzi, G.; Girardi, F.

    1980-05-01

    In the development of methods for the assessment of the risk associated with the disposal of radioactive wastes over periods up to 10^6 years, much discussion has occurred on the use of probability analysis for geological processes. The applicability and limitations of this concept are related to the proper use of the geological data-base and the critical interpretation of probability distributions. The interpretation of geological phenomena in terms of probability is discussed and an example of application to the determination of faulting probability is illustrated. The method has been used for the determination of the failure probability of geological segregation of a waste repository in a clay formation.
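
    As a generic illustration of how such a probability analysis is typically cast (not necessarily the exact model used by the authors), fault occurrence affecting the repository volume can be treated as a Poisson process with a rate \lambda estimated from regional fault density and long-term tectonic rates, so that the probability of at least one faulting event within an assessment time t is

        P(t) = 1 - e^{-\lambda t}, \qquad \text{e.g.}\ \lambda = 10^{-8}\ \mathrm{yr^{-1}} \ \Rightarrow\ P(10^{6}\ \mathrm{yr}) = 1 - e^{-0.01} \approx 1\%.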

  2. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. Already by mid-1997 a smaller project had been approved, which had been applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and which mainly serves the transfer of technology. (orig.)

  3. Types of hydrogeological response to large-scale explosions and earthquakes

    Science.gov (United States)

    Gorbunova, Ella; Vinogradov, Evgeny; Besedina, Alina; Martynov, Vasilii

    2017-04-01

    Hydrogeological responses to anthropogenic and natural impacts indicate massif properties and the mode of deformation. We studied uneven-aged aquifers that had been unsealed at the Semipalatinsk test site (Kazakhstan) and at the geophysical observatory "Mikhnevo" in the Moscow region (Russia). Data were collected during long-term underground water monitoring carried out in 1983-1989, when large-scale underground nuclear explosions were conducted. Precise observations of the underground water response to the passage of waves from distant earthquakes at GPO "Mikhnevo" have been conducted since 2008. One of the goals of the study was to distinguish the main types of dynamic and irreversible spatial-temporal underground water responses to large-scale explosions and to compare them with those of earthquake impacts as presented in different papers. Since the hydrogeological processes that occur at the earthquake source are not really known, it is especially important to analyze experimental data on groundwater level variations recorded close to the epicenter in the first minutes to hours after explosions. We found that the hydrogeodynamic reaction strongly depends on the initial geological and hydrogeological conditions as well as on the seismic impact parameters. In the near field, post-dynamic variations can lead to the formation of either an excess pressure dome or a depression cone resulting from aquifer drainage due to rock massif fracturing. In the far field, the explosion effect is comparable with that of a distant earthquake and produces dynamic water level oscillations. The precise monitoring at the "Mikhnevo" area was conducted under platform conditions far from active faults, thus we consider it a seismically calm area far from earthquake sources. Both dynamic and irreversible water level changes seem to follow a power dependence on the vertical peak ground displacement velocity due to wave passage. Further research will be aimed at the transition from the near to the far area to identify a criterion that determines either irreversible

  4. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/service-systems risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we identify the following challenges in the current risk management practices that are in line with the literature: (1) current methods are not appropriate for situations dominated by weak knowledge and information; (2) the quality of traditional models in such situations is open to debate; (3) the quality of input...

  5. Large-scale comparative visualisation of sets of multidimensional data

    CERN Document Server

    Vohl, Dany; Fluke, Christopher J; Poudel, Govinda; Georgiou-Karistianis, Nellie; Hassan, Amr H; Benovitski, Yuri; Wong, Tsz Ho; Kaluza, Owen; Nguyen, Toan D; Bonnington, C Paul

    2016-01-01

    We present encube - a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: 1) the integration of comparative visualisation and analysis into a unified system; 2) the documentation of the discovery process; and 3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local deskt...

  6. Development of large-scale structure in the Universe

    CERN Document Server

    Ostriker, J P

    1991-01-01

    This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.

  7. Scaling filtering and multiplicative cascade information integration techniques for geological, geophysical and geochemical data processing and geological feature recognition

    Science.gov (United States)

    Cheng, Q.

    2013-12-01

    This paper introduces several techniques recently developed based on the concepts of multiplicative cascade processes and multifractals for processing exploration geochemical and geophysical data for the recognition of geological features and the delineation of target areas for undiscovered mineral deposits. From a nonlinear point of view, extreme geo-processes such as cloud formation, rainfall, hurricanes, flooding, landslides, earthquakes, igneous activities, tectonics and mineralization often show the singular property that they may result in anomalous amounts of energy release or mass accumulation that generally are confined to narrow intervals in space or time. The end products of these non-linear processes have in common that they can be modeled as fractals or multifractals. Here we show that the three fundamental concepts of scaling in the context of multifractals (singularity, self-similarity and the fractal dimension spectrum) make multifractal theory and methods useful for geochemical and geophysical data processing for the general purpose of geological feature recognition. These methods include: a local singularity analysis based on an area-density (C-A) multifractal model, used as a scaling high-pass filtering technique capable of extracting weak signals caused by buried geological features; a suite of multifractal filtering techniques based on spectrum density-area (S-A) multifractal models, implemented in various domains including the frequency domain, which can be used for unmixing geochemical or geophysical fields according to distinct generalized self-similarities characterized in a certain domain; and multiplicative cascade processes for the integration of diverse evidential layers of information for the prediction of point events such as the location of mineral deposits. It is demonstrated by several case studies involving Fe, Sn, Mo-Ag and Mo-W mineral deposits that the singularity method can be utilized to process stream sediment/soil geochemical data and gravity/aeromagnetic data as high
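
    The local singularity analysis mentioned above is commonly implemented by measuring how the average concentration in a window depends on window size, rho(eps) ~ eps^(alpha - 2) in 2D, and estimating the local singularity index alpha from the slope of a log-log regression. The sketch below is a generic window-based estimator on a hypothetical gridded geochemical map, not the specific software used in these case studies.

        # Sketch: window-based local singularity index alpha on a 2D gridded field.
        # rho(eps) ~ C * eps**(alpha - 2)  =>  log rho = log C + (alpha - 2) * log eps
        import numpy as np

        def local_singularity(grid, half_sizes=(1, 2, 4, 8)):
            """Estimate the singularity index alpha at every cell of a 2D array."""
            pad = max(half_sizes)
            padded = np.pad(grid, pad, mode="reflect")
            nrow, ncol = grid.shape
            log_eps = np.log([2 * h + 1 for h in half_sizes])
            alpha = np.empty_like(grid, dtype=float)
            for i in range(nrow):
                for j in range(ncol):
                    log_rho = []
                    for h in half_sizes:
                        win = padded[i + pad - h : i + pad + h + 1,
                                     j + pad - h : j + pad + h + 1]
                        log_rho.append(np.log(win.mean()))
                    slope = np.polyfit(log_eps, log_rho, 1)[0]
                    alpha[i, j] = slope + 2.0       # alpha < 2 flags local enrichment
            return alpha

        # Hypothetical demo: a smooth background with one enriched spot.
        grid = np.ones((64, 64)) + 0.01 * np.random.default_rng(2).random((64, 64))
        grid[30:34, 30:34] += 5.0
        alpha = local_singularity(grid)
        print("min alpha (enriched area):", alpha.min())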

  8. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  9. Signaling in large-scale neural networks

    DEFF Research Database (Denmark)

    Berg, Rune W; Hounsgaard, Jørn

    2009-01-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  10. Fast large-scale reionization simulations

    NARCIS (Netherlands)

    Thomas, Rajat M.; Zaroubi, Saleem; Ciardi, Benedetta; Pawlik, Andreas H.; Labropoulos, Panagiotis; Jelic, Vibor; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Harker, Geraint J. A.; Koopmans, Leon V. E.; Pandey, V. N.; Schaye, Joop; Yatawatta, Sarod; Mellema, G.

    2009-01-01

    We present an efficient method to generate large simulations of the epoch of reionization without the need for a full three-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21-cm emission from neutral hydrogen. Dark matter h

  11. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed main stream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  13. Evaluation of the licensing process - Part of Pilot project wind power- Large scale wind power in northern Sweden; Utvaerdering av tillstaandsprocessen - Del av vindpilotprojekt vindkraft - Storskalig vindkraft i norra Sverige

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-12-15

    This report is designed to improve permitting processes for wind power plants, thereby contributing to the smooth functioning of wind power development in Sweden. Based on the reflections of the handling authorities and the project groups on the pilot projects at Gabrielsberget and Dragaliden, a number of recommendations have been compiled both for the permit process as a whole and for its different stages.

  14. Volcanic Processes and Geology of Augustine Volcano, Alaska

    Science.gov (United States)

    Waitt, Richard B.; Beget, James E.

    2009-01-01

    Augustine Island (volcano) in lower Cook Inlet, Alaska, has erupted repeatedly in late-Holocene and historical times. Eruptions typically beget high-energy volcanic processes. Most notable are bouldery debris avalanches containing immense angular clasts shed from summit domes. Coarse deposits of these avalanches form much of Augustine's lower flanks. A new geologic map at 1:25,000 scale depicts these deposits, these processes. We correlate deposits by tephra layers calibrated by many radiocarbon dates. Augustine Volcano began erupting on the flank of a small island of Jurassic clastic-sedimentary rock before the late Wisconsin glaciation (late Pleistocene). The oldest known effusions ranged from olivine basalt explosively propelled by steam, to highly explosive magmatic eruptions of dacite or rhyodacite shed as pumice flows. Late Wisconsin piedmont glaciers issuing from the mountainous western mainland surrounded the island while dacitic eruptive debris swept down the south volcano flank. Evidence is scant for eruptions between the late Wisconsin and about 2,200 yr B.P. On a few south-flank inliers, thick stratigraphically low pumiceous pyroclastic-flow and fall deposits probably represent this period from which we have no radiocarbon dates on Augustine Island. Eruptions between about 5,350 and 2,200 yr B.P. we know with certainty by distal tephras. On Shuyak Island 100 km southeast of Augustine, two distal fall ashes of Augustinian chemical provenance (microprobe analysis of glass) date respectively between about 5,330 and 5,020 yr B.P. and between about 3,620 and 3,360 yr B.P. An Augustine ash along Kamishak Creek 70 km southwest of Augustine dates between about 3,850 and 3,660 yr B.P. A probably Augustinian ash lying within peat near Homer dates to about 2,275 yr B.P. From before 2,200 yr B.P. to the present, Augustine eruptive products abundantly mantle the island. During this period, numerous coarse debris avalanches swept beyond Augustine's coast, most

  15. U-shaped Vortex Structures in Large Scale Cloud Cavitation

    Science.gov (United States)

    Cao, Yantao; Peng, Xiaoxing; Xu, Lianghao; Hong, Fangwen

    2015-12-01

    The control of cloud cavitation, especially large scale cloud cavitation (LSCC), has always been a hot issue in the field of cavitation research. However, little is known about the evolution of cloud cavitation, since it is associated with turbulence and vortex flow. In this article, the structures of cloud cavitation shed by sheet cavitation around different hydrofoils and a wedge were observed in detail with a high speed camera (HSC). It was found that U-shaped vortex structures always existed in the development process of LSCC. The results indicate that LSCC evolution is related to this kind of vortex structure, and that it may be a universal characteristic of LSCC. The vortex strength of the U-shaped vortex structures over a cycle was then analyzed with numerical results.

  16. The gamma ray background from large scale structure formation

    CERN Document Server

    Gabici, S; Gabici, Stefano; Blasi, Pasquale

    2003-01-01

    Hierarchical clustering of dark matter halos is thought to describe well the large scale structure of the universe. The baryonic component of the halos is shock heated to the virial temperature while a small fraction of the energy flux through the shocks may be energized through the first order Fermi process to relativistic energy per particle. It has been proposed that the electrons accelerated in this way may upscatter the photons of the universal microwave background to gamma ray energies and indeed generate a diffuse background of gamma rays that compares well to the observations. In this paper we calculate the spectra of the particles accelerated at the merger shocks and re-evaluate the contribution of structure formation to the extragalactic diffuse gamma ray background (EDGRB), concluding that this contribution adds up to at most 10% of the observed EDGRB.
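
    For context, the test-particle first-order Fermi (diffusive shock acceleration) mechanism referred to above yields a power-law particle spectrum whose index is set by the shock compression ratio r; this standard relation (a textbook result, not one specific to this paper) is

        N(E)\, \mathrm{d}E \propto E^{-p}\, \mathrm{d}E, \qquad p = \frac{r+2}{r-1},

    so a strong non-relativistic shock with r = 4 gives p = 2, while the weaker merger and accretion shocks of structure formation (r < 4) give correspondingly steeper spectra.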

  17. Building a Large-Scale Knowledge Base for Machine Translation

    CERN Document Server

    Knight, Kevin; Luk, Steve K.

    1994-01-01

    Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual match...

  18. High pressure sheet metal forming of large scale body structures

    Energy Technology Data Exchange (ETDEWEB)

    Trompeter, M.; Krux, R.; Homberg, W.; Kleiner, M. [Dortmund Univ. (Germany). Inst. of Forming Technology and Lightweight Construction

    2005-07-01

    An important trend in the automotive industry is the weight reduction of car bodies by lightweight construction. One approach to realise lightweight structures is the use of load optimised sheet metal parts (e.g. tailored blanks), especially for crash relevant car body structures. To form such parts which are mostly complex and primarily made of high strength steels, the use of working media based forming processes is favorable. The paper presents the manufacturing of a large scale structural component made of tailor rolled blanks (TRB) by high pressure sheet metal forming (HBU). The paper focuses mainly on the tooling system, which is integrated into a specific 100 MN hydroform press at the IUL. The HBU tool basically consists of a multipoint blankholder, a specially designed flange draw-in sensor, which is necessary to determine the material flow, and a sealing system. Furthermore, the paper presents a strategy for an effective closed loop flange draw-in control. (orig.)

  19. Recovery Act - Large Scale SWNT Purification and Solubilization

    Energy Technology Data Exchange (ETDEWEB)

    Michael Gemano; Dr. Linda B. McGown

    2010-10-07

    The goal of this Phase I project was to establish a quantitative foundation for development of binary G-gels for large-scale, commercial processing of SWNTs and to develop scientific insight into the underlying mechanisms of solubilization, selectivity and alignment. In order to accomplish this, we performed systematic studies to determine the effects of G-gel composition and experimental conditions that will enable us to achieve our goals that include (1) preparation of ultra-high purity SWNTs from low-quality, commercial SWNT starting materials, (2) separation of MWNTs from SWNTs, (3) bulk, non-destructive solubilization of individual SWNTs in aqueous solution at high concentrations (10-100 mg/mL) without sonication or centrifugation, (4) tunable enrichment of subpopulations of the SWNTs based on metallic vs. semiconductor properties, diameter, or chirality and (5) alignment of individual SWNTs.

  20. Large scale simulations of the great 1906 San Francisco earthquake

    Science.gov (United States)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 Billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.

  1. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene—the most studied material of the 2D family—displays unique electronic and spintronic properties. Exceptionally high electron mobilities, which surpass those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer...

  2. Large-scale quantum networks based on graphs

    Science.gov (United States)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
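
    For reference, the graph-state language used here associates with a graph G = (V, E) the state obtained by preparing one qubit per vertex in |+> and applying a controlled-Z gate across every edge; this standard definition (independent of the specific repeater protocol of the paper) reads

        |G\rangle = \prod_{(a,b)\in E} CZ_{ab}\, |+\rangle^{\otimes |V|}.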

  3. Large-Scale Quantitative Analysis of Painting Arts

    Science.gov (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.
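
    As a rough illustration of the first two measures, color usage and color variety can be proxied by a quantized color histogram and its Shannon entropy; the snippet below sketches this on a hypothetical RGB image array and does not reproduce the exact metric definitions used in the study.

        # Sketch: color-usage histogram and color variety (entropy) for an RGB image.
        import numpy as np

        def color_variety(image, bins_per_channel=8):
            """Shannon entropy of a quantized RGB histogram (a simple variety proxy)."""
            q = (image.astype(int) // (256 // bins_per_channel)).reshape(-1, 3)
            codes = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
            counts = np.bincount(codes, minlength=bins_per_channel**3)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))

        # Hypothetical images: random noise has near-maximal variety, a flat color has zero.
        rng = np.random.default_rng(3)
        noisy = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
        flat = np.full((128, 128, 3), 127, dtype=np.uint8)
        print(color_variety(noisy), color_variety(flat))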

  4. Carbon dioxide recovery: large scale design trends

    Energy Technology Data Exchange (ETDEWEB)

    Mariz, C. L.

    1998-07-01

    Carbon dioxide recovery from flue gas streams for use in enhanced oil recovery was examined, focusing on key design and operating issues and trends that appear promising in reducing plant investment and operating costs associated with this source of carbon dioxide. The emphasis was on conventional processes using chemical solvents, such as the Fluor Daniel ECONAMINE FG{sup S}M process. Developments in new tower packings and solvents and their potential impact on plant and operating costs were reviewed, along with the effects of the flue gas source on these costs. Sample operating and capital recovery cost data are provided for a 1,000 tonne/day plant, a size large enough to support an enhanced oil recovery project. 11 refs., 4 figs.

  5. A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project

    OpenAIRE

    Ewers, Robert M.; Didham, Raphael K.; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L.; Turner, Edgar C.

    2011-01-01

    Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project repres...

  6. Large Scale Implementations for Twitter Sentiment Classification

    Directory of Open Access Journals (Sweden)

    Andreas Kanavos

    2017-03-01

    Sentiment Analysis on Twitter Data is indeed a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to derive the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments only. Thus, they can only process at most a few thousand tweets. Such a sample is not representative enough to define the sentiment polarity towards a topic due to the massive number of tweets published daily. In this work, we develop two systems: the first in the MapReduce and the second in the Apache Spark framework for programming with Big Data. The algorithm exploits the hashtags and emoticons inside a tweet as sentiment labels, and proceeds to a classification of diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques and utilizes Apache Spark's machine learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in Sentiment Analysis as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. Finally, the proposed system was trained and validated with real data crawled from Twitter, and, through an extensive experimental evaluation, we prove that our solution is efficient, robust and scalable while confirming the quality of our sentiment identification.
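    The sketch below illustrates, in PySpark, the kind of pipeline the abstract describes: emoticons and hashtags act as weak sentiment labels, and an MLlib classifier is trained on hashed token features. The input path, column names and labelling regexes are assumptions made for illustration; this is not the paper's implementation.

```python
# Minimal PySpark sketch of a weakly labelled Twitter sentiment pipeline:
# emoticons/hashtags give the labels, MLlib's logistic regression does the
# classification.  Input path, column names and the labelling rule are
# illustrative assumptions, not the paper's code.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("tweet-sentiment-sketch").getOrCreate()
tweets = spark.read.json("tweets.json")          # expects a 'text' column

# Weak labels: 1.0 if the tweet contains a positive marker, 0.0 if negative.
positive = F.col("text").rlike(r"(:\)|:D|#happy|#love)")
negative = F.col("text").rlike(r"(:\(|#sad|#angry)")
labelled = (tweets
            .withColumn("label", F.when(positive, 1.0).when(negative, 0.0))
            .dropna(subset=["label"]))

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="features", numFeatures=1 << 18),
    LogisticRegression(maxIter=20),
])
model = pipeline.fit(labelled)        # runs distributed across the cluster
predictions = model.transform(labelled).select("text", "prediction")
```

    The same code runs unchanged on a single machine or on a cluster; only the SparkSession configuration and the input location differ, which is the main attraction of this kind of framework for large tweet volumes.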

  7. Interaction Analysis and Decomposition Principle for Control Structure Design of Large-scale Systems

    Institute of Scientific and Technical Information of China (English)

    罗雄麟; 刘雨波; 许锋

    2014-01-01

    Industrial processes are mostly large-scale systems of high order. They use a fully centralized control strategy, the parameters of which are difficult to tune. In the design of large-scale systems, decomposition according to the interaction between input and output variables is the first step and the basis for the selection of the control structure. In this paper, a decomposition principle for processes in large-scale systems is proposed for the design of the control structure. A new variable pairing method is presented that considers the steady-state information and dynamic response of the large-scale system. By selecting threshold values, the relation matrix can be transformed into adjacency matrices, which directly measure the coupling among different loops. The optimal number of controllers can be obtained after decomposing the large-scale system. A practical example is used to demonstrate the validity and feasibility of the proposed interaction decomposition principle in large-scale process systems.
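    The thresholding step described above can be illustrated with a small sketch: a hypothetical interaction matrix is thresholded into an adjacency matrix and the loops are grouped into connected components, each of which would be handled by one controller. The matrix values and the threshold are invented for illustration and do not come from the paper.

```python
# Sketch of the decomposition idea: threshold an input-output interaction
# matrix into an adjacency matrix, then group the loops into connected
# components; each component gets its own controller.  The interaction values
# and the threshold are invented for illustration.
import numpy as np
from scipy.sparse.csgraph import connected_components

interaction = np.array([[0.9, 0.7, 0.05, 0.0],
                        [0.6, 1.1, 0.0,  0.1],
                        [0.0, 0.05, 0.8, 0.5],
                        [0.1, 0.0,  0.6, 1.0]])

threshold = 0.3
adjacency = (np.abs(interaction) >= threshold).astype(int)
adjacency = np.maximum(adjacency, adjacency.T)        # symmetrize couplings

n_controllers, groups = connected_components(adjacency, directed=False)
print(n_controllers)   # -> 2: loops {0, 1} and {2, 3} can be controlled separately
print(groups)          # -> [0 0 1 1]
```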

  8. Carbon dioxide recovery: large scale design trends

    Energy Technology Data Exchange (ETDEWEB)

    Mariz, C.L. [Fluor Daniel Inc. (United States)

    1995-10-01

    The key design, operating issues, and trends are examined for new unit designs that hold potential for reducing the costs of CO{sub 2} recovery from flue gas. The ECONAMINE FG process is described. Operating costs for a CO{sub 2} recovery plant depend on size and CO{sub 2} content of the incoming stream. An operating cost breakdown and capital recovery analysis is given for a 1000 tonne/day CO{sub 2} recovery plant with a hypothetical U.S. Gulf Coast location. The analysis shows that the plant is not economic at the current price of crude oil. It is concluded that much larger plants (2000 tonne/day and larger) and newer and lower energy solvents could make carbon dioxide from flue gases attractive for enhanced oil recovery. 11 refs., 4 figs., 2 tabs.

  9. Advances in large-scale crop modeling

    Science.gov (United States)

    Scholze, Marko; Bondeau, Alberte; Ewert, Frank; Kucharik, Chris; Priess, Jörg; Smith, Pascalle

    Intensified human activity and a growing population have changed the climate and the land biosphere. One of the most widely recognized human perturbations is the emission of carbon dioxide (CO2) by fossil fuel burning and land-use change. As the terrestrial biosphere is an active player in the global carbon cycle, changes in land use feed back to the climate of the Earth through regulation of the content of atmospheric CO2, the most important greenhouse gas, and changing albedo (e.g., energy partitioning). Recently, the climate modeling community has started to develop more complex Earth system models that include marine and terrestrial biogeochemical processes in addition to the representation of atmospheric and oceanic circulation. However, most terrestrial biosphere models simulate only natural, or so-called potential, vegetation and do not account for managed ecosystems such as croplands and pastures, which make up nearly one-third of the Earth's land surface.

  10. Safeguards instruments for Large-Scale Reprocessing Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  11. Electrodialysis system for large-scale enantiomer separation

    NARCIS (Netherlands)

    Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.

    2001-01-01

    In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system.

  13. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  14. Fast large-scale reionization simulations

    Science.gov (United States)

    Thomas, Rajat M.; Zaroubi, Saleem; Ciardi, Benedetta; Pawlik, Andreas H.; Labropoulos, Panagiotis; Jelić, Vibor; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Harker, Geraint J. A.; Koopmans, Leon V. E.; Mellema, Garrelt; Pandey, V. N.; Schaye, Joop; Yatawatta, Sarod

    2009-02-01

    We present an efficient method to generate large simulations of the epoch of reionization without the need for a full three-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21-cm emission from neutral hydrogen. Dark matter haloes are embedded with sources of radiation whose properties are either based on semi-analytical prescriptions or derived from hydrodynamical simulations. These sources could either be stars or power-law sources with varying spectral indices. Assuming spherical symmetry, ionized bubbles are created around these sources, whose radial ionized fraction and temperature profiles are derived from a catalogue of one-dimensional radiative transfer experiments. In case of overlap of these spheres, photons are conserved by redistributing them around the connected ionized regions corresponding to the spheres. The efficiency with which these maps are created allows us to span the large parameter space typically encountered in reionization simulations. We compare our results with other, more accurate, three-dimensional radiative transfer simulations and find excellent agreement for the redshifts and the spatial scales of interest to upcoming 21-cm experiments. We generate a contiguous observational cube spanning redshift 6 to 12 and use these simulations to study the differences in the reionization histories between stars and quasars. Finally, the signal is convolved with the Low Frequency Array (LOFAR) beam response and its effects are analysed and quantified. Statistics performed on this mock data set shed light on possible observational strategies for LOFAR.
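    A toy version of the bubble-painting step described above is sketched below: spherical ionized regions are painted around sources on a grid. The grid size, source positions and bubble radii are invented, and the photon-conserving redistribution applied when bubbles overlap is omitted.

```python
# Toy version of the map-making step described in the abstract: paint
# spherical ionized regions around sources on a grid.  Source positions and
# radii are invented; the photon-conserving overlap correction is omitted.
import numpy as np

n = 128                                   # grid cells per side (toy size)
x, y, z = np.indices((n, n, n))
ionized_fraction = np.zeros((n, n, n))

sources = [((40, 40, 40), 12.0), ((70, 60, 50), 8.0), ((90, 90, 90), 15.0)]
for (cx, cy, cz), radius in sources:
    r2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
    ionized_fraction[r2 <= radius ** 2] = 1.0

# A 21-cm-like signal is proportional to the neutral fraction times density;
# density is taken as uniform here for simplicity.
neutral_fraction = 1.0 - ionized_fraction
print(neutral_fraction.mean())
```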

  15. Superconductivity for Large Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    R. Fair; W. Stautner; M. Douglass; R. Rajput-Ghoshal; M. Moscinski; P. Riley; D. Wagner; J. Kim; S. Hou; F. Lopez; K. Haran; J. Bray; T. Laskaris; J. Rochford; R. Duckworth

    2012-10-12

    A conceptual design has been completed for a 10MW superconducting direct drive wind turbine generator employing low temperature superconductors for the field winding. Key technology building blocks from the GE Wind and GE Healthcare businesses have been transferred across to the design of this concept machine. Wherever possible, conventional technology and production techniques have been used in order to support the case for commercialization of such a machine. Appendices A and B provide further details of the layout of the machine and the complete specification table for the concept design. Phase 1 of the program has allowed us to understand the trade-offs between the various sub-systems of such a generator and its integration with a wind turbine. A Failure Modes and Effects Analysis (FMEA) and a Technology Readiness Level (TRL) analysis have been completed resulting in the identification of high risk components within the design. The design has been analyzed from a commercial and economic point of view and Cost of Energy (COE) calculations have been carried out with the potential to reduce COE by up to 18% when compared with a permanent magnet direct drive 5MW baseline machine, resulting in a potential COE of 0.075 $/kWh. Finally, a top-level commercialization plan has been proposed to enable this technology to be transitioned to full volume production. The main body of this report will present the design processes employed and the main findings and conclusions.

  16. Large-scale screens of metagenomic libraries.

    Science.gov (United States)

    Pham, Vinh D; Palden, Tsultrim; DeLong, Edward F

    2007-01-01

    Metagenomic libraries archive large fragments of contiguous genomic sequences from microorganisms without requiring prior cultivation. Generating a streamlined procedure for creating and screening metagenomic libraries is therefore useful for efficient high-throughput investigations into the genetic and metabolic properties of uncultured microbial assemblages. Here, key protocols are presented on video, which we propose is the most useful format for accurately describing a long process that alternately depends on robotic instrumentation and (human) manual interventions. First, we employed robotics to spot library clones onto high-density macroarray membranes, each of which can contain duplicate colonies from twenty-four 384-well library plates. Automation is essential for this procedure not only for accuracy and speed, but also due to the miniaturization of scale required to fit the large number of library clones into highly dense spatial arrangements. Once generated, we next demonstrated how the macroarray membranes can be screened for genes of interest using modified versions of standard protocols for probe labeling, membrane hybridization, and signal detection. We complemented the visual demonstration of these procedures with detailed written descriptions of the steps involved and the materials required, all of which are available online alongside the video.

  17. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (as seen, e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  18. Sheltering in buildings from large-scale outdoor releases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. The common mechanical ventilation systems present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.

  19. Large scale scientific computing - future directions

    Science.gov (United States)

    Patterson, G. S.

    1982-06-01

    Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors is examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given and a strategy for efficiently executing a partitioned program is explored.

  20. Thermal activation of dislocations in large scale obstacle bypass

    Science.gov (United States)

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique

    2017-08-01

    Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy on these interactions is possible with a framework provided by harmonic transition state theory (HTST) enabling direct access to thermally activated reaction rates using the Arrhenius equation, including rates of dislocation-obstacle bypass processes. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects plays an important role in the activation energies for bypass. A phenomenological model for activation energy stress dependence is shown to describe well the effect of a distribution of activation energies, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
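    For reference, the harmonic transition state theory rate invoked in this abstract takes the standard Arrhenius form (a textbook relation, not a result specific to this paper),

    \Gamma(\sigma, T) = \nu_0 \exp\left(-\frac{\Delta E(\sigma)}{k_B T}\right),

    where \Delta E(\sigma) is the stress-dependent activation energy barrier, \nu_0 the attempt frequency obtained from the harmonic modes, k_B Boltzmann's constant and T the temperature. Coarse-graining a population of obstacles, as described above, then amounts to replacing \Delta E(\sigma) by an effective barrier for the collective bypass process.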

  1. Large-scale magnetic structure formation in three-dimensional magnetohydrodynamic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Malapaka, Shiva Kumar; Müller, Wolf-Christian [Max-Planck Institute for Plasmaphysics, Boltzmannstrasse 2, D-85748, Garching bei Muenchen (Germany)

    2013-11-20

    The inverse cascade of magnetic helicity in three-dimensional magnetohydrodynamic (3D-MHD) turbulence is believed to be one of the processes responsible for large-scale magnetic structure formation in astrophysical systems. In this work, we present an exhaustive set of high-resolution direct numerical simulations of both forced and decaying 3D-MHD turbulence, to understand this structure formation process. It is first shown that an inverse cascade of magnetic helicity in small-scale driven turbulence does not necessarily generate coherent large-scale magnetic structures. The observed large-scale magnetic field, in this case, is severely perturbed by magnetic fluctuations generated by the small-scale forcing. In the decaying case, coherent large-scale structures form similarly to those observed astronomically. Based on the numerical results, the formation of large-scale magnetic structures in some astrophysical systems is suggested to be the consequence of an initial forcing that imparts the necessary turbulent energy into the system, which, after the forcing shuts off, decays to form the large-scale structures. This idea is supported by representative examples, e.g., clusters of galaxies.

  2. Big Data approaches for the analysis of large-scale fMRI data using Apache Spark and GPU processing: A demonstration on resting-state fMRI data from the Human Connectome Project

    Directory of Open Access Journals (Sweden)

    Roland N Boubela

    2016-01-01

    Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still only rarely used in neuroimaging despite the existence of data and research questions in need of efficient computation tools especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples for using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets.

  3. Big Data Approaches for the Analysis of Large-Scale fMRI Data Using Apache Spark and GPU Processing: A Demonstration on Resting-State fMRI Data from the Human Connectome Project

    Science.gov (United States)

    Boubela, Roland N.; Kalcher, Klaudius; Huf, Wolfgang; Našel, Christian; Moser, Ewald

    2016-01-01

    Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still rarely used in neuroimaging despite the existence of data and research questions in need of efficient computation tools especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units (GPUs) to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples for using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets. PMID:26778951
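    A serial sketch of the graph-analysis use case mentioned above is given below: region time series are extracted from a 4D NIfTI volume, correlated, and thresholded into a graph. The file name, the block-averaging used to define regions and the correlation threshold are illustrative assumptions; the Spark/GPU distribution of these steps described in the paper is not shown.

```python
# Serial sketch of an fMRI graph analysis: correlate region time series from a
# 4-D NIfTI volume and threshold the correlations into a graph.  File name,
# block-averaged "regions" and threshold are illustrative; the paper's
# Spark/GPU distribution of these steps is not shown.
import numpy as np
import nibabel as nib

data = nib.load("rest.nii.gz").get_fdata()        # shape (x, y, z, time)
x, y, z, t = data.shape

# Crude "regions": average the signal over coarse blocks of voxels.
b = 8
trimmed = data[:x // b * b, :y // b * b, :z // b * b, :]
regions = trimmed.reshape(x // b, b, y // b, b, z // b, b, t).mean(axis=(1, 3, 5))
series = regions.reshape(-1, t)
series = series[series.std(axis=1) > 0]           # drop empty (constant) blocks

corr = np.corrcoef(series)                        # region-by-region correlation
adjacency = (np.abs(corr) > 0.5) & ~np.eye(len(corr), dtype=bool)
print("nodes:", len(corr), "edges:", int(adjacency.sum()) // 2)
```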

  4. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    Science.gov (United States)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the field of geosciences including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final geological map interpretation. As mapping technology has advanced in recent years, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers. The map interpretation evolves through time as data is collected. This interpretive process that results in the final geological map is often supported by recording in a field notebook, observations, ideas and alternative geological models explored with the use of sketches and evolutionary diagrams. In combination the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties and the balance of data to interpretation are often lost in the creation of published 'fair' copy geological maps. The advent of Toughbooks, smartphones and tablets in the production of geological maps has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers in phones and tablets, has changed smartphones into geological mapping tools that can be used to collect lots of geological data quickly. With GPS functionality this data is also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced infield photography. In contrast line drawing, for example for lithological boundary interpretation and sketching

  5. Research on the Control Process for Planeness in Large-Scale Antenna Back Frame Manufacturing

    Institute of Scientific and Technical Information of China (English)

    孙汶; 赵书行; 万忠

    2011-01-01

    The large-scale antenna back frame is a truss structure welded together from aluminium profiles, with overall dimensions of 2 639 mm × 2 620 mm × 230 mm. Its length and width are large while its thickness is small, so its stiffness is low and both welding and machining produce considerable deformation. The planeness requirement for the plane formed by the bottoms of the 380 grooves on top of the antenna back frame is 0.25 mm. Controlling deformation during manufacturing, and in particular the deformation produced when machining the 380 grooves, so that the 0.25 mm planeness requirement of the groove bottoms is met, is the core task and the main process difficulty in manufacturing the antenna back frame. This paper describes the process flow and the process measures used to control deformation during manufacture of the antenna back frame, with emphasis on the machining of the 380 grooves and the fixture that guarantees the planeness of the groove bottoms, together with its design principle and adjustment method.

  6. An extension of Hadoop for storing and processing large-scale image datasets

    Institute of Scientific and Technical Information of China (English)

    付波; 黄廷磊

    2014-01-01

    To meet the challenges that massive image and video data pose for storage and analysis, this paper extends the data types supported by Hadoop to images and integrates the OpenCV open-source library into Hadoop, implementing a distributed computer vision processing platform based on Hadoop and OpenCV. Test results show that the extended data type is more efficient than the image representation and storage forms currently in wide use on the Hadoop platform, providing a reliable and efficient base platform for developing distributed computer vision algorithms.

  7. Evaluation of Two Disinfection Processes on the Basis of Large-scale Engineering Application

    Institute of Scientific and Technical Information of China (English)

    吴仲斯; 阳宇恒; 靳小虎; 周勤; 蔡展航

    2015-01-01

    In order to evaluate the disinfection effectiveness, disinfection by-product formation and operating costs of two disinfection processes at Water Treatment Plant A, located in a large city in southern China, the water quality and chlorine consumption of the finished water and of the distribution-network water were analysed before and after the change of disinfection process during actual operation. The results show that both chloramine and liquid chlorine disinfect effectively; after the switch to liquid chlorine, the chloroform and free chlorine concentrations in the water increased sharply; and liquid chlorine is the cheaper option, costing roughly 70% as much as chloramine disinfection.

  8. Using SMOS for validation and parameter estimation of a large scale hydrological model in Paraná river basin

    Science.gov (United States)

    Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter

    2017-04-01

    Large scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests the potential benefits of using both kinds of information together. This study therefore had two main objectives: to perform a cross-validation between remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and soil moisture simulated with the large scale hydrological model MGB-IPH; and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. The study was performed in the Paraná River Basin, an important South American basin whose extension and particular characteristics allow the representation of different climatic, geological and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both together. Results show that this type of analysis can be very useful, because it makes it possible to recognize limitations in model structure. In the case of hydrological model calibration, this approach can avoid the use of parameters out of range in an attempt to compensate for model limitations. It also indicates aspects of the model where efforts should be concentrated in order to improve the representation of hydrological or hydraulic processes. Automatic calibration gives an estimate of how different information can be applied and the quality of results it might lead to. We emphasize that these findings can be valuable for hydrological modeling in large scale South American

  9. The investigation of dangerous geological processes resulting in land subsidence while designing the main gas pipeline in South Yakutia

    Science.gov (United States)

    Strokova, L. A.; Ermolaeva, A. V.; Golubeva, V. V.

    2016-09-01

    The number of gas main accidents has increased recently due to dangerous geological processes in underdeveloped areas located in difficult geological conditions. The paper analyses land subsidence caused by karst and thermokarst processes in the right of way, reveals the assessment criteria for geological hazards and creates zoning schemes considering the levels of karst and thermokarst hazards.

  10. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  11. Geologic processes and Cenozoic history related to salt dissolution in southeastern New Mexico

    Science.gov (United States)

    Bachman, George Odell

    1974-01-01

    Salt of Permian age in the subsurface of an area near The Divide, east of Carlsbad, N. Mex., is being considered for a nuclear waste repository. The geologic history of the region indicates that dissolution of salt has occurred in the past during at least three distinct epochs: (1) after Triassic but before middle Pleistocene time; (2) during middle Pleistocene; and (3) during late Pleistocene. Thus, destructive geologic processes have been intermittent through more than 100 million years. Nash Draw, near The Divide, formed during late Pleistocene time by the coalescing of collapse sinks. The rate of its subsidence is estimated to have been about 10 cm (0.33 foot) per thousand years. The immediate area of The Divide adjacent to Nash Draw has not undergone stress by geologic processes during Pleistocene time and there are no present indications that this geologic environment will change drastically within the period of concern for the repository.

  12. Forcings and Feedbacks on Convection in the 2010 Pakistan Flood: Modeling Extreme Precipitation with Interactive Large-Scale Ascent

    CERN Document Server

    Nie, Ji; Sobel, Adam H

    2016-01-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here, we examine the large-scale forcings and convective heating feedback in the precipitation events which caused the 2010 Pakistan flood within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic li...

  13. Probabilistic cartography of the large-scale structure

    CERN Document Server

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    2015-01-01

    The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.

  14. Large-scale Modeling of Inundation in the Amazon Basin

    Science.gov (United States)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, hence potentially affecting the climate. The Amazon River basin is the world's largest river basin. Seasonal floods occur in the Amazon Basin each year. The basin being characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contribute to difficulties in simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel - floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present the comparison of simulated results against satellite and in situ observations and analysis to understand factors that influence inundation processes in the Amazon Basin.

  15. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
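    The kind of compound range selection such a system accelerates is sketched below with plain NumPy boolean masks. The field names and thresholds are invented, and this is not the FastQuery/FastBit API; the point of bitmap indexing is precisely to answer selections like this without a full scan of the dataset.

```python
# The kind of selection an index-and-query system accelerates, written with
# plain NumPy boolean masks for illustration.  Field names and thresholds are
# invented; this is not the FastQuery/FastBit API, which builds bitmap indexes
# so that such selections avoid scanning the full dataset.
import numpy as np

n_particles = 10_000_000
rng = np.random.default_rng(0)
energy = rng.exponential(scale=1.0, size=n_particles)
x_momentum = rng.normal(size=n_particles)

# "Interesting" particles: a compound range condition over two fields.
selection = (energy > 5.0) & (np.abs(x_momentum) > 2.0)
interesting = np.flatnonzero(selection)
print(interesting.size, "particles selected out of", n_particles)
```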

  16. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    Science.gov (United States)

    Martin, R.; Gonzalez Ortiz, A.

    In industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at mid-half mesh step from each other following the staggered grid approach. This ensures stability and prevents oscillations in pressure. It allows almost all Reynolds number ranges to be treated for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered. Too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow interpolation (1983) of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces avoids pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd and 3rd order shock-capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D or a septadiagonal system in 3D must be solved, but here we have chosen to solve three tridiagonal linear systems (the so-called Alternate Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of interpolated velocities, pressures and volume fractions of each phase is done in the cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation. At the end the energy conservation equations are solved. In all this process the
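    Each ADI sweep mentioned above reduces to tridiagonal linear solves along one coordinate direction. A generic textbook Thomas-algorithm sketch (not the authors' code) is shown below, together with a small implicit-diffusion example.

```python
# Thomas algorithm for a tridiagonal system, the kernel solved along each
# direction in an ADI sweep.  Generic textbook version, not the authors' code.
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(diag)
    c = np.empty(n)
    d = np.empty(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Example: one implicit 1-D diffusion step, (I - r * Laplacian) x = rhs.
n, r = 6, 0.5
lower = np.full(n, -r)
upper = np.full(n, -r)
diag = np.full(n, 1.0 + 2.0 * r)
x = thomas(lower, diag, upper, np.ones(n))

A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
print(np.allclose(A @ x, np.ones(n)))   # True
```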

  17. Importance of the Small-Scale Processes Melting, Plate Boundary Formation and Mineralogy on the Large-Scale, Long-Term Thermo-Chemical Evolution of Earth's Mantle-Plate System

    Science.gov (United States)

    Tackley, P.

    2015-12-01

    the importance of appropriate treatment of small-scale processes in global models.

  18. Advances in Graphics Processing Units' Applications to the Computation of Large-Scale Mechanical Problems

    Institute of Scientific and Technical Information of China (English)

    夏健明; 魏德敏

    2010-01-01

    Modern graphics processing units (GPUs) offer strong parallel numerical computing capability. This paper briefly introduces the GPU hardware architecture, the data structures and implementation methods for general-purpose computing on GPUs, and the OpenGL shading language used to write fragment programs. It then reviews research progress in applying GPUs to large-scale mechanical problems, briefly covering: GPU simulation of natural fluid phenomena, which in essence solves the Navier-Stokes equations with the finite difference method; GPU implementation of the finite element method, using a GPU-based conjugate gradient solver for the finite element equations; GPU molecular dynamics, in which short-range inter-atomic forces are computed and neighbour lists are built on the GPU; GPU quantum-mechanical Monte Carlo computations; and GPU computation of the gravitational interaction of n bodies, with the positions, masses, velocities and accelerations of the n bodies stored in GPU textures. Comparing GPU-based with CPU-based computation, the following GPU computations have been completed: Gaussian elimination and conjugate gradient solvers for linear systems of equations, applied to large-scale finite element computations; accelerated meshfree method computations; accelerated linear and nonlinear molecular structural mechanics computations; and analysis of the mechanical properties of carbon nanotubes. Research directions for GPUs in large-scale mechanical computation are pointed out.
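    As one concrete example of the solvers listed above, a plain NumPy conjugate gradient sketch for a symmetric positive-definite system is given below; on a GPU the same loop can run essentially unchanged with an array library such as CuPy. The test matrix is a small 1-D Laplacian stand-in, not a problem from the review.

```python
# Conjugate gradient for a symmetric positive-definite system, the solver the
# review cites for GPU-based finite element computations.  Plain NumPy sketch;
# on a GPU the same loop can run with an array library such as CuPy.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test problem (a 1-D Laplacian-like stiffness matrix).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))   # True
```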

  19. How semantics can inform the geological mapping process and support intelligent queries

    Science.gov (United States)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario

    2017-04-01

    The geologic mapping process requires the organization of data according to the general knowledge about the objects, namely the geologic units, and to the objectives of a graphic representation of such objects in a map, following an established model of geotectonic evolution. Semantics can greatly help such a process in two respects: on the one hand, the provision of a terminological base to name and classify the objects of the map; on the other, the implementation of a machine-readable encoding of the geologic knowledge base, which supports the application of reasoning mechanisms and the derivation of novel properties and relations about the objects of the map. The OntoGeonous initiative has built a terminological base of geological knowledge in a machine-readable format, following the Semantic Web tenets and the Linked Data paradigm. The major knowledge sources of the OntoGeonous initiative are GeoScience Markup Language schemata and vocabularies (through its latest version, GeoSciML 4, 2015, published by the IUGS CGI Commission) and the INSPIRE "Data Specification on Geology" directives (an operative simplification of GeoSciML, published by the INSPIRE Thematic Working Group Geology of the European Commission). The Linked Data paradigm has been exploited by linking (without replicating, to avoid inconsistencies) the already existing machine-readable encodings for some specific domains, such as the lithology domain (vocabulary Simple Lithology) and the geochronologic time scale (ontology "gts"). Finally, for the upper level knowledge, shared across several geologic domains, we have resorted to the NASA SWEET ontology. The OntoGeonous initiative has also produced a wiki that explains how the geologic knowledge has been encoded from shared geoscience vocabularies (https://www.di.unito.it/wikigeo/). In particular, the sections dedicated to axiomatization will support the construction of an appropriate database schema that can then be filled with the objects of the map. This contribution will discuss

  20. The theory of large-scale ocean circulation

    National Research Council Canada - National Science Library

    Samelson, R. M

    2011-01-01

    "This is a concise but comprehensive introduction to the basic elements of the theory of large-scale ocean circulation for advanced students and researchers"-- "Mounting evidence that human activities...

  1. Learning networks for sustainable, large-scale improvement.

    Science.gov (United States)

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  2. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  3. An Evaluation Framework for Large-Scale Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    An evaluation framework for large-scale network structures is presented, which facilitates evaluations and comparisons of different physical network structures. A number of quantitative and qualitative parameters are presented, and their importance to networks discussed. Choosing a network...

  4. Modified gravity and large scale flows, a review

    Science.gov (United States)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  5. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    @@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.

  7. PetroChina to Expand Dushanzi Refinery on Large Scale

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.

  8. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  9. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
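    A serial sketch of one spectral dimensionality-reduction step (PCA via the singular value decomposition) is shown below on synthetic data; the framework described in the paper parallelises the kernel construction and eigen-decomposition stages across a cluster, which is not reproduced here.

```python
# Serial PCA via the SVD, one of the spectral dimensionality-reduction steps
# that the paper's framework parallelises.  Synthetic data for illustration.
import numpy as np

def pca(points, n_components=2):
    centered = points - points.mean(axis=0)
    # Rows of vt are the principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(1)
high_dim = rng.normal(size=(1000, 50))      # stand-in for high-dimensional data
embedding = pca(high_dim, n_components=2)
print(embedding.shape)                      # (1000, 2)
```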

  10. Large-scale comparative visualisation of sets of multidimensional data

    Directory of Open Access Journals (Sweden)

    Dany Vohl

    2016-10-01

    We present encube—a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: (1) the integration of comparative visualisation and analysis into a unified system; (2) the documentation of the discovery process; and (3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local desktop, making it a versatile solution. We discuss how our approach can help accelerate the discovery rate in a variety of research scenarios.

  11. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m as the water sank into the voids between the stones on the crest. For low overtopping scale effects...... are presented as the small-scale model underpredicts the overtopping discharge....

  12. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  13. Vertically integrated approaches to large scale CO2 storage: Evaluating long-term storage security of CO2 injection in saline aquifers

    Science.gov (United States)

    Gasda, S. E.; Nordbotten, J.; Celia, M. A.

    2009-12-01

    Storage security of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. During the injection and early post-injection periods, CO2 leakage may occur along faults and leaky wells, but this risk may be partly managed by proper site selection and sensible deployment of monitoring and remediation technologies. On the other hand, long-term storage security is an entirely different risk management problem—one that is dominated by a mobile CO2 plume that may travel over very large spatial and temporal scales before it is trapped by different physical and chemical processes. The primary trapping mechanisms are capillary and solubility trapping, which evolve over thousands to tens of thousands of years and can immobilize a significant portion of the mobile, free-phase CO2 plume. However, these processes are complex, involving a combination of small and large spatial scales over varying time scales. Solubility trapping is a prime example of this complexity, where small-scale density instabilities in the dissolved CO2 region lead to convective mixing that has a significant effect on the large-scale dissolution process over very long time scales. Using appropriate models that can capture both large and small-scale effects is essential for understanding the role of dissolution and convective mixing on the long-term storage security of CO2 sequestration operations. There are several approaches to modeling long-term CO2 trapping mechanisms. One modeling option is the use of traditional numerical methods, which are often highly sophisticated models that can handle multiple complex phenomena with high levels of accuracy. However, these complex models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for risk assessment applications such as the late post-injection period. We present an alternative modeling option, the VESA model, that combines

  14. A study of MLFMA for large-scale scattering problems

    Science.gov (United States)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.

  15. Vector dissipativity theory for large-scale impulsive dynamical systems

    Directory of Open Access Journals (Sweden)

    Haddad Wassim M.

    2004-01-01

    Full Text Available Modern complex large-scale impulsive systems involve multiple modes of operation placing stringent demands on controller analysis of increasing complexity. In analyzing these large-scale systems, it is often desirable to treat the overall impulsive system as a collection of interconnected impulsive subsystems. Solution properties of the large-scale impulsive system are then deduced from the solution properties of the individual impulsive subsystems and the nature of the impulsive system interconnections. In this paper, we develop vector dissipativity theory for large-scale impulsive dynamical systems. Specifically, using vector storage functions and vector hybrid supply rates, dissipativity properties of the composite large-scale impulsive systems are shown to be determined from the dissipativity properties of the impulsive subsystems and their interconnections. Furthermore, extended Kalman-Yakubovich-Popov conditions, in terms of the impulsive subsystem dynamics and interconnection constraints, characterizing vector dissipativeness via vector system storage functions, are derived. Finally, these results are used to develop feedback interconnection stability results for large-scale impulsive dynamical systems using vector Lyapunov functions.

  16. The Neo-Institutional Framework of Large-Scale Retail Spatial Policy Formulation Process in Serbia

    Directory of Open Access Journals (Sweden)

    Milica MAKSIĆ

    2014-12-01

    Full Text Available In this paper, in the theoretical framework of neo-institutional theory, the process of spatial policy formulation related to large-scale retail in Serbia was researched. The basic objective of the research is to explore possible ways of improving the institutional framework of spatial and urban policy formulation in Serbia in order to make the decision-making process on large-scale retail building more effective and to adequately address the complex problems of this type of building. Since neo-institutional theory considers institutions as series of formal and informal organizations, rules and procedures that build patterns of behavior within organizations, the paper includes the analysis of the efficiency of the existing institutional framework of large-scale retail policy formulation and problems occurring in that process; the actual roles of public, private and civil actors; the content of policies by which large-scale retail building is directed at different levels of spatial organization; and the methodological and procedural framework of these policies' formulation. Possible directions for institutional framework redefinition are suggested in order to achieve more effective decision-making on large-scale retail building in Serbia.

  17. Risk management in a large-scale CO2 geosequestration pilot project, Illinois, USA

    Science.gov (United States)

    Hnottavange-Telleen, K.; Chabora, E.; Finley, R.J.; Greenberg, S.E.; Marsteller, S.

    2011-01-01

    Like most large-scale infrastructure projects, carbon dioxide (CO2) geological sequestration (GS) projects have multiple success criteria and multiple stakeholders. In this context "risk evaluation" encompasses multiple scales. Yet a risk management program aims to maximize the chance of project success by assessing, monitoring, and minimizing all risks in a consistent framework. The 150,000-km2 Illinois Basin underlies much of the state of Illinois, USA, and parts of adjacent Kentucky and Indiana. Its potential for CO2 storage is first-rate among basins in North America, an impression that has been strengthened by early testing of the injection well of the Midwest Geological Sequestration Consortium's (MGSC's) Phase III large scale demonstration project, the Illinois Basin - Decatur Project (IBDP). The IBDP, funded by the U.S. Department of Energy's National Energy Technology Laboratory (NETL), represents a key trial of GS technologies and project-management techniques. Though risks are specific to each site and project, IBDP risk management methodologies provide valuable experience for future GS projects. IBDP views risk as the potential for negative impact to any of these five values: health and safety, environment, financial, advancing the viability and public acceptability of a GS industry, and research. Research goals include monitoring one million metric tonnes of injected CO2 in the subsurface. Risk management responds to the ways in which any values are at risk: for example, monitoring is designed to reduce uncertainties in parameter values that are important for research and system control, and is also designed to provide public assurance. Identified risks are the primary basis for risk-reduction measures: risks linked to uncertainty in geologic parameters guide further characterization work and guide simulations applied to performance evaluation. Formally, industry defines risk (more precisely risk criticality) as the product L*S, the Likelihood multiplied
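    The risk-criticality definition quoted above, likelihood multiplied by a severity (consequence) score, lends itself to a simple risk-register calculation. The sketch below is a generic illustration of that arithmetic; the risks, scoring scales and threshold are hypothetical and are not taken from the IBDP register.

```python
# Hypothetical risk register: criticality = likelihood * severity (L*S),
# both scored on a 1-5 ordinal scale as in many industry risk matrices.
risks = [
    {"name": "wellbore leakage",        "likelihood": 2, "severity": 4},
    {"name": "induced seismicity",      "likelihood": 1, "severity": 3},
    {"name": "monitoring sensor drift", "likelihood": 4, "severity": 2},
]

ATTENTION_THRESHOLD = 6  # hypothetical cut-off for triggering risk-reduction measures

for r in sorted(risks, key=lambda r: r["likelihood"] * r["severity"], reverse=True):
    criticality = r["likelihood"] * r["severity"]
    action = "mitigate" if criticality >= ATTENTION_THRESHOLD else "monitor"
    print(f'{r["name"]:25s} L*S = {criticality:2d} -> {action}')
```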

  18. Research on the full life-cycle process integration model of large-scale public utility construction projects and its support conditions

    Institute of Scientific and Technical Information of China (English)

    张国宗; 王永华; 刘雄

    2014-01-01

    The full life-cycle process integration of large-scale public utility construction project management refers to the integration of the project process across its stages, from decision-making, planning and design, implementation, and operation and maintenance to project close-out. Guided by a systems viewpoint, a process viewpoint and value engineering, and with the aim of realizing the project's full life-cycle target system, this paper studies the integrated management of the whole process from project planning and design through implementation to operation, together with the interrelations between different tasks in different phases. A full life-cycle process integration model of large-scale public utility construction projects is established, and the support conditions for such integration are discussed, so that balance and harmony can be achieved over the whole life cycle, the investment benefit and social public-service function of the project can be improved, and thereby the value of large-scale public utility projects can be increased.

  19. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    underestimation of wet-to-dry-season droughts and snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems.

    We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, also parametrisation of storage processes requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  20. Local and Regional Impacts of Large Scale Wind Energy Deployment

    Science.gov (United States)

    Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.

    2010-12-01

    The U.S. is currently on a path to produce 20% of its electricity from wind energy by 2030, almost a 10-fold increase over present levels of electricity generated from wind. Such high-penetration wind energy deployment will entail extracting elevated energy levels from the planetary boundary layer and preliminary studies indicate that this will have significant but uncertain impacts on the local and regional environment. State and federal regulators have raised serious concerns regarding potential agricultural impacts from large wind farms deployed throughout the Midwest, where agriculture is the basis of the local economy. The effects of large wind farms have been proposed to be both beneficial (drying crops to reduce occurrences of fungal diseases, avoiding late spring freezes, enhancing pollen viability, reducing dew duration) and detrimental (accelerating moisture loss during drought) with no conclusive investigations thus far. As both wind and solar technologies are deployed at scales required to replace conventional technologies, there must be reasonable certainty that the potential environmental impacts at the micro, macro, regional and global scale do not exceed those anticipated from carbon emissions. Largely because of computational limits, the role of large wind farms in affecting regional-scale weather patterns has only been investigated in coarse simulations, and modeling tools do not yet exist which are capable of assessing the downwind effects that large wind farms may have on microclimatology. In this presentation, we will outline the vision for and discuss technical and scientific challenges in developing a multi-model high-performance simulation capability covering the range of mesoscale to sub-millimeter scales appropriate for assessing local, regional, and ultimately global environmental impacts and quantifying uncertainties of large scale wind energy deployment scenarios. Such a system will allow continuous downscaling of atmospheric processes on wind

  1. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  2. The small-world organization of large-scale brain systems and relationships with subcortical structures.

    Science.gov (United States)

    Koziol, Leonard F; Barker, Lauren A; Joyce, Arthur W; Hrin, Skip

    2014-01-01

    Brain structure and function are characterized by large-scale brain systems. However, each system has its own "small-world" organization, with sub-regions, or "hubs," that have varying degrees of specialization for certain cognitive and behavioral processes. This article describes this small-world organization, and the concepts of functional specialization and functional integration are defined and explained through practical examples. We also describe the development of large-scale brain systems and this small-world organization as a sensitive, protracted process, vulnerable to a variety of influences that generate neurodevelopmental disorders.
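    Small-world organization is usually quantified by comparing a network's clustering coefficient and characteristic path length with those of a matched random graph. The sketch below uses a synthetic Watts-Strogatz graph as a stand-in for a measured brain connectivity network; it assumes the networkx package and illustrates only the metric, not any analysis from the article.

```python
import networkx as nx

def avg_path_length(H):
    # Use the largest connected component in case the graph is disconnected
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

# Synthetic stand-in for a brain network: 90 nodes (e.g., atlas regions),
# each wired to 6 neighbours, with 10% of edges randomly rewired.
G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=1)

# Random reference graph with the same number of nodes and edges
R = nx.gnm_random_graph(n=G.number_of_nodes(), m=G.number_of_edges(), seed=1)

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = avg_path_length(G), avg_path_length(R)

# Small-world index sigma > 1 indicates high clustering with short paths
sigma = (C / C_rand) / (L / L_rand)
print(f"clustering {C:.3f} vs random {C_rand:.3f}")
print(f"path length {L:.3f} vs random {L_rand:.3f}")
print(f"small-world index sigma = {sigma:.2f}")
```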

  3. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We propose an algorithm for processing an image sequence in order to search for and track static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested implementation is performed.

  4. Ectopically tethered CP190 induces large-scale chromatin decondensation

    Science.gov (United States)

    Ahanger, Sajad H.; Günther, Katharina; Weth, Oliver; Bartkuhn, Marek; Bhonde, Ramesh R.; Shouche, Yogesh S.; Renkawitz, Rainer

    2014-01-01

    Insulator-mediated alteration in higher-order chromatin and/or nucleosome organization is an important aspect of epigenetic gene regulation. Recent studies have suggested a key role for CP190 in such processes. In this study, we analysed the effects of ectopically tethered insulator factors on chromatin structure and found that CP190 induces large-scale decondensation when targeted to a condensed lacO array in mammalian and Drosophila cells. In contrast, dCTCF alone is unable to cause such decondensation; however, when CP190 is present, dCTCF recruits it to the lacO array and mediates chromatin unfolding. The CP190-induced opening of chromatin may not be correlated with transcriptional activation, as binding of CP190 does not enhance luciferase activity in reporter assays. We propose that CP190 may mediate histone modification and chromatin remodelling activity to induce an open chromatin state by its direct recruitment or targeting by a DNA binding factor such as dCTCF.

  5. Efficient Graph Based Approach to Large Scale Role Engineering

    Directory of Open Access Journals (Sweden)

    Dana Zhang

    2014-04-01

    Full Text Available Role engineering is the process of defining a set of roles that offer administrative benefit for Role Based Access Control (RBAC), which ensures data privacy. It is a business critical task that is required by enterprises wishing to migrate to RBAC. However, existing methods of role generation have not analysed what constitutes a beneficial role and as a result, often produce inadequate solutions in a time consuming manner. To address the urgent issue of identifying high quality RBAC structures in real enterprise environments, we present a cost based analysis of the problem for both flat and hierarchical RBAC structures. Specifically we propose two cost models to evaluate the administration cost of roles and provide a k-partite graph approach to role engineering. Existing role cost evaluations are approximations that overestimate the benefit of a role. Our method and cost models can provide exact role cost and show when existing role cost evaluations can be used as a lower bound to improve efficiency without affecting the quality of results. In the first work to address role engineering using large scale real data sets, we propose RoleAnnealing, a fast solution space search algorithm with incremental computation and guided search space heuristics. Our experimental results on both real and synthetic data sets demonstrate that high quality RBAC configurations that maintain data privacy are identified efficiently by RoleAnnealing. Comparison with an existing approach shows RoleAnnealing is significantly faster and produces RBAC configurations with lower cost.
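    A common way to make the "administration cost" of a flat RBAC configuration concrete is to count the assignments it requires: user-role edges plus role-permission edges, optionally weighted. The sketch below illustrates such a cost model in generic terms; the weights and the comparison with direct user-permission administration are assumptions for illustration, not the specific cost models or the RoleAnnealing algorithm of the paper.

```python
# Hypothetical flat-RBAC cost model: administration cost is the weighted
# number of user-role plus role-permission assignments.
W_UR, W_RP = 1.0, 1.0   # assumed weights; the paper derives its own cost models

def rbac_cost(roles):
    """roles: list of (users, permissions) pairs, each a set."""
    user_role = sum(len(users) for users, _ in roles)
    role_perm = sum(len(perms) for _, perms in roles)
    return W_UR * user_role + W_RP * role_perm

def direct_cost(upa):
    """Cost of administering user-permission assignments with no roles."""
    return sum(len(perms) for perms in upa.values())

upa = {
    "u1": {"read", "write", "delete"},
    "u2": {"read", "write", "delete"},
    "u3": {"read", "write", "delete"},
    "u4": {"read"},
}
roles = [({"u1", "u2", "u3"}, {"read", "write", "delete"}), ({"u4"}, {"read"})]

print("direct administration cost:", direct_cost(upa))      # 10
print("role-based administration cost:", rbac_cost(roles))  # 8
```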

  6. Large Scale Synthesis of Carbon Nanofibres on Sodium Chloride Support

    Directory of Open Access Journals (Sweden)

    Ravindra Rajarao

    2012-06-01

    Full Text Available Large scale synthesis of carbon nanofibres (CNFs) on a sodium chloride support has been achieved. CNFs have been synthesized using metal oxalates (Ni, Co and Fe) as catalyst precursors at 680 °C by the chemical vapour deposition method. Upon pyrolysis, these catalyst precursors yield catalyst nanoparticles directly. Sodium chloride was used as the catalyst support; it was chosen because of its non-toxic and water-soluble nature. Problems such as the detrimental effect on the CNFs, the detrimental effects on the environment and even cost have been avoided by using a water-soluble support. The structure of the products was characterized by scanning electron microscopy, transmission electron microscopy and Raman spectroscopy. The purity of the as-grown and purified products was determined by thermal analysis and the X-ray diffraction method. Here we report yields of 7600, 7000 and 6500 wt% for CNFs synthesized over nickel, cobalt and iron oxalate, respectively. Long, curved and worm-shaped CNFs were obtained on the Ni, Co and Fe catalysts, respectively. The lengthy process of calcination and reduction for the preparation of catalysts is avoided in this method. This synthesis route is simple and economical; hence, it can be used for CNF synthesis in industry.

  7. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach however is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
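    As a rough illustration of the compress-then-query idea described above, the sketch below decomposes a 1-D signal with the PyWavelets package, keeps only the largest coefficients, and answers a range-average query from the thresholded reconstruction. It is a toy stand-in for the AQSIM machinery, which operates on multivariate spatio-temporal fields; the wavelet family, decomposition level and 5% retention rate are arbitrary choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size)

# Multi-level wavelet decomposition
coeffs = pywt.wavedec(signal, "db4", level=5)

# Keep only the largest ~5% of coefficients (simple hard thresholding)
flat = np.concatenate(coeffs)
cutoff = np.quantile(np.abs(flat), 0.95)
compressed = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]

# Approximate reconstruction used to answer an ad hoc range-average query
approx = pywt.waverec(compressed, "db4")[: signal.size]
lo, hi = 100, 400
print("exact range mean :", signal[lo:hi].mean())
print("approx range mean:", approx[lo:hi].mean())
```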

  8. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.]

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  9. Parallel cluster labeling for large-scale Monte Carlo simulations

    CERN Document Server

    Flanigan, M; Flanigan, M; Tamayo, P

    1995-01-01

    We present an optimized version of a cluster labeling algorithm previously introduced by the authors. This algorithm is well suited for large-scale Monte Carlo simulations of spin models using cluster dynamics on parallel computers with large numbers of processors. The algorithm divides physical space into rectangular cells which are assigned to processors and combines a serial local labeling procedure with a relaxation process across nearest-neighbor processors. By controlling overhead and reducing inter-processor communication this method attains good computational speed-up and efficiency. Large systems of up to 65536 X 65536 spins have been simulated at updating speeds of 11 nanosecs/site (90.7 million spin updates/sec) using state-of-the-art supercomputers. In the second part of the article we use the cluster algorithm to study the relaxation of magnetization and energy on large Ising models using Swendsen-Wang dynamics. We found evidence that exponential and power law factors are present in the relaxatio...
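    The serial "local labeling" step inside each rectangular cell can be illustrated with a standard union-find pass over a spin lattice: bonds between equal nearest-neighbour spins are merged into clusters. The sketch below shows only that sequential core; the relaxation procedure that reconciles labels across processor boundaries, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:              # path halving keeps trees shallow
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def label_clusters(spins):
    """Union-find labeling of equal nearest-neighbour spins on a 2-D
    lattice with open boundaries; returns an array of cluster labels."""
    ny, nx = spins.shape
    parent = np.arange(ny * nx)
    for y in range(ny):
        for x in range(nx):
            i = y * nx + x
            if x + 1 < nx and spins[y, x] == spins[y, x + 1]:
                parent[find(parent, i)] = find(parent, i + 1)
            if y + 1 < ny and spins[y, x] == spins[y + 1, x]:
                parent[find(parent, i)] = find(parent, i + nx)
    labels = np.array([find(parent, i) for i in range(ny * nx)])
    return labels.reshape(ny, nx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spins = rng.integers(0, 2, size=(64, 64))   # random Ising configuration
    labels = label_clusters(spins)
    print("number of clusters:", np.unique(labels).size)
```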

  10. Analyzing large-scale proteomics projects with latent semantic indexing.

    Science.gov (United States)

    Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning

    2008-01-01

    Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leaving the ultimate value of these projects far below their potential. A prominent reason that published proteomics data is seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as the latent semantic analysis holds great promise and is currently underexploited.
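    Latent semantic analysis reduces to a truncated singular value decomposition of an occurrence matrix. The sketch below applies that decomposition to a hypothetical protein-by-experiment identification matrix (random data, not the HUPO PPP tables) and uses cosine similarity in the latent space to ask which experiments behave alike.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical identification matrix: rows = proteins, columns = experiments,
# entries = identification counts (random stand-in for the HUPO PPP data).
X = rng.poisson(lam=0.3, size=(2000, 40)).astype(float)

# Latent semantic analysis = truncated SVD of the occurrence matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5                                  # number of latent factors retained
proteins_lsi = U[:, :k] * s[:k]        # protein coordinates in latent space
experiments_lsi = Vt[:k].T             # experiment coordinates in latent space

# Experiments close together in latent space share identification patterns
# (e.g., similar instrumentation or protocols); cosine similarity shows this.
unit = experiments_lsi / np.linalg.norm(experiments_lsi, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, -np.inf)
print("most similar pair of experiments:", np.unravel_index(sim.argmax(), sim.shape))
```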

  11. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  12. Modelling large-scale evacuation of music festivals

    Directory of Open Access Journals (Sweden)

    E. Ronchi

    2016-05-01

    Full Text Available This paper explores the use of multi-agent continuous evacuation modelling for representing large-scale evacuation scenarios at music festivals. A 65,000 people capacity music festival area was simulated using the model Pathfinder. Three evacuation scenarios were developed in order to explore the capabilities of evacuation modelling during such incidents, namely (1) a preventive evacuation of a section of the festival area containing approximately 15,000 people due to a fire breaking out on a ship, (2) an escalating scenario involving the total evacuation of the entire festival area (65,000 people) due to a bomb threat, and (3) a cascading scenario involving the total evacuation of the entire festival area (65,000 people) due to the threat of an explosion caused by a ship engine overheating. This study suggests that the analysis of the people-evacuation time curves produced by evacuation models, coupled with a visual analysis of the simulated evacuation scenarios, allows for the identification of the main factors affecting the evacuation process (e.g., delay times, overcrowding at exits in relation to exit widths, etc.) and potential measures that could improve safety.
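    Order-of-magnitude checks on scenarios like these are often made with a simple hydraulic (flow-capacity) model: evacuation time is roughly the pre-movement delay plus the number of people divided by the total exit flow capacity. The sketch below uses assumed, generic parameter values and is not derived from the Pathfinder simulations in the paper.

```python
# Hydraulic estimate of evacuation time (illustrative assumptions only).
SPECIFIC_FLOW = 1.3   # persons per metre of exit width per second (assumed)

def evacuation_time(n_people, total_exit_width_m, pre_movement_delay_s):
    """Crude flow-capacity estimate; ignores walking distances and the
    congestion dynamics that agent-based models resolve explicitly."""
    flow_capacity = SPECIFIC_FLOW * total_exit_width_m   # persons per second
    return pre_movement_delay_s + n_people / flow_capacity

# Roughly scenario (1): 15,000 people, with a hypothetical 20 m of exit width
t = evacuation_time(n_people=15_000, total_exit_width_m=20.0,
                    pre_movement_delay_s=120.0)
print(f"estimated evacuation time: {t / 60:.1f} minutes")
```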

  13. Bio-inspired wooden actuators for large scale applications.

    Directory of Open Access Journals (Sweden)

    Markus Rüggeberg

    Full Text Available Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.
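    The bending of such a bilayer can be estimated with the classical Timoshenko bimetal formula, with the differential hygroexpansion strain of the two wood layers taking the place of differential thermal expansion. The parameter values below (swelling strain, layer thicknesses, moduli) are assumed purely for illustration and are not the measured properties used in the study.

```python
def bilayer_curvature(d_strain, t_active, t_passive, E_active, E_passive):
    """Classical Timoshenko bilayer formula: curvature (1/m) induced by a
    differential expansion strain d_strain between two bonded layers."""
    m = t_active / t_passive      # thickness ratio
    n = E_active / E_passive      # stiffness (elastic modulus) ratio
    h = t_active + t_passive      # total thickness
    return (6.0 * d_strain * (1.0 + m) ** 2 /
            (h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))))

# Assumed values: 0.5% differential swelling strain, 2 mm + 2 mm layers,
# equal elastic moduli of 10 GPa for both layers.
kappa = bilayer_curvature(d_strain=0.005, t_active=2e-3, t_passive=2e-3,
                          E_active=10e9, E_passive=10e9)
print(f"curvature = {kappa:.2f} 1/m, bending radius = {1.0 / kappa:.2f} m")
```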

  14. On the self-sustained nature of large-scale motions in turbulent Couette flow

    CERN Document Server

    Rawat, Subhandu; Hwang, Yongyun; Rincon, François

    2015-01-01

    Large-scale motions in wall-bounded turbulent flows are frequently interpreted as resulting from an aggregation process of smaller-scale structures. Here, we explore the alternative possibility that such large-scale motions are themselves self-sustained and do not draw their energy from smaller-scale turbulent motions activated in buffer layers. To this end, it is first shown that large-scale motions in turbulent Couette flow at Re=2150 self-sustain even when active processes at smaller scales are artificially quenched by increasing the Smagorinsky constant Cs in large eddy simulations. These results are in agreement with earlier results on pressure-driven turbulent channels. We further investigate the nature of the large-scale coherent motions by computing upper- and lower-branch nonlinear steady solutions of the filtered (LES) equations with a Newton-Krylov solver, and find that they are connected by a saddle-node bifurcation at large values of Cs. Upper branch solutions for the filtered large scale motions a...

  15. Large-scale Magnetic Structure Formation in 3D-MHD Turbulence

    CERN Document Server

    Malapaka, Shiva Kumar

    2013-01-01

    The inverse cascade of magnetic helicity in 3D-MHD turbulence is believed to be one of the processes responsible for large scale magnetic structure formation in astrophysical systems. In this work we present an exhaustive set of high resolution direct numerical simulations (DNS) of both forced and decaying 3D-MHD turbulence, to understand this structure formation process. It is first shown that an inverse cascade of magnetic helicity in small-scale driven turbulence does not necessarily generate coherent large-scale magnetic structures. The observed large-scale magnetic field, in this case, is severely perturbed by magnetic fluctuations generated by the small-scale forcing. In the decaying case, coherent large-scale structures form, similar to those observed astronomically. Based on the numerical results, the formation of large-scale magnetic structures in some astrophysical systems is suggested to be the consequence of an initial forcing which imparts the necessary turbulent energy into the system, which, afte...

  16. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach tera-bytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  18. Hong Kong Geological Survey

    Institute of Scientific and Technical Information of China (English)

    R J Sewell

    2007-01-01

    History and objectives: The Hong Kong Geological Survey (HKGS) was created on 5 May 1982, within the then Engineering Development Department of the Hong Kong Government. The initial objective was to carry out a new geological survey of the Territory at 1:20,000 scale. This followed recognition of an urgent need to produce high quality geological maps at a large scale with sufficient detail to facilitate physical planning and land use management of Hong Kong.

  19. Towards understanding how surface life can affect interior geological processes: a non-equilibrium thermodynamics approach

    Directory of Open Access Journals (Sweden)

    J. G. Dyke

    2011-06-01

    Full Text Available Life has significantly altered the Earth's atmosphere, oceans and crust. To what extent has it also affected interior geological processes? To address this question, three models of geological processes are formulated: mantle convection, continental crust uplift and erosion, and oceanic crust recycling. These processes are characterised as non-equilibrium thermodynamic systems. Their states of disequilibrium are maintained by the power generated from the dissipation of energy from the interior of the Earth. Altering the thickness of continental crust via weathering and erosion affects the upper mantle temperature, which leads to changes in rates of oceanic crust recycling and consequently rates of outgassing of carbon dioxide into the atmosphere. Estimates for the power generated by various elements in the Earth system are shown. These include, inter alia, the generation of 264 TW of power by surface life, much greater than that of geological processes such as mantle convection at 12 TW. This high power results from life's ability to harvest energy directly from the sun. Life need only utilise a small fraction of the generated free chemical energy for geochemical transformations at the surface, such as affecting rates of weathering and erosion of continental rocks, in order to affect interior geological processes. Consequently, when assessing the effects of life on Earth, and potentially any planet with a significant biosphere, dynamical models may be required that better capture the coupled nature of biologically-mediated surface and interior processes.

  20. Large-scale-vortex dynamos in planar rotating convection

    CERN Document Server

    Guervilly, Céline; Jones, Chris A

    2016-01-01

    Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \\lesssim Rm \\lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...

  1. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  2. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    Science.gov (United States)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot's severe impact on southern Taiwan awakened public awareness of large-scale landslide disasters. Large-scale landslide disasters produce large quantities of sediment that negatively affect the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation / disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video will not only help people make appropriate decisions, but also pose the major challenge of processing and adding value to these data. The study defined basic data formats / standards for the various types of data collected about these reservoirs and then provided a management platform based on these formats / standards. Meanwhile, in order to satisfy practicality and convenience, the large-scale landslide disaster database system is built with both provide and receive information capabilities, so that users can access the system on different types of devices. IT technology progresses extremely quickly, and even the most modern system may become outdated at any time. In order to provide long-term service, the system reserves the possibility of user-defined data formats / standards and a user-defined system structure. The system established by this study is based on the HTML5 standard and uses responsive web design technology, so that users can easily operate and extend the large-scale landslide disaster database system.
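    One way to make the "user-defined data format/standard" idea concrete is a self-describing record envelope, for example in JSON, so that new data types can be added without changing the platform. The schema below is entirely hypothetical and is not the format actually adopted for the reservoirs in the study.

```python
import json

# Hypothetical record format: a small envelope of required metadata plus a
# free-form, user-defined payload whose schema depends on the data type.
record = {
    "record_id": "RES-2017-000123",            # hypothetical identifier
    "reservoir": "example reservoir",          # placeholder name
    "data_type": "rainfall_gauge",             # user-defined type tag
    "timestamp": "2017-04-01T06:00:00+08:00",
    "location": {"lat": 23.0, "lon": 120.5},   # hypothetical coordinates
    "payload": {"rain_mm_per_hr": 42.0},       # schema depends on data_type
    "attachments": [],                         # photos, video, documents
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```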

  3. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter, with recording on a laptop. The recorded surface seismic vibrations from more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out against the permissible value of vibration velocity. For cases exceeding the permissible values, recommendations were developed to reduce the level of seismic impact.
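    The safety evaluation described above amounts to comparing the measured peak velocity of ground vibration at each settlement with a permissible value. A minimal sketch of that comparison, using hypothetical numbers rather than the regulatory limits and measurements of the study:

```python
# Hypothetical peak particle velocities (PPV, cm/s) measured at settlements
# near a large-scale blast, compared against an assumed permissible value.
PERMISSIBLE_PPV = 0.5   # cm/s, assumed limit for illustration only

measurements = {
    "settlement A": 0.12,
    "settlement B": 0.48,
    "settlement C": 0.65,
}

for site, ppv in measurements.items():
    status = "exceeds limit, mitigation required" if ppv > PERMISSIBLE_PPV else "within limit"
    print(f"{site}: PPV = {ppv:.2f} cm/s -> {status}")
```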

  4. Acoustic Studies of the Large Scale Ocean Circulation

    Science.gov (United States)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  5. Human pescadillo induces large-scale chromatin unfolding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong

    2005-01-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting the Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  6. Transport of Large Scale Poloidal Flux in Black Hole Accretion

    CERN Document Server

    Beckwith, Kris; Krolik, Julian H

    2009-01-01

    We perform a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We identify a "coronal mechanism" of magnetic flux motion, which dominates the global flux evolution. In this coronal mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. This mechanism is supplemented by a smaller amount of flux advection in the accretion flow proper. Because the black hole in this case does not rotate, the magnetic flux on the horizon determines the mean magnetic field strength in the funnel around the disk axis; this field strength is regulated by a combination of th...

  7. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely.... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation....

  8. Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows

    CERN Document Server

    Ebrahimi, F

    2015-01-01

    For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.

  9. A relativistic signature in large-scale structure

    Science.gov (United States)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  10. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...

  12. Large-scale climatic anomalies affect marine predator foraging behaviour and demography

    Science.gov (United States)

    Bost, Charles A.; Cotté, Cedric; Terray, Pascal; Barbraud, Christophe; Bon, Cécile; Delord, Karine; Gimenez, Olivier; Handrich, Yves; Naito, Yasuhiko; Guinet, Christophe; Weimerskirch, Henri

    2015-10-01

    Determining the links between the behavioural and population responses of wild species to environmental variations is critical for understanding the impact of climate variability on ecosystems. Using long-term data sets, we show how large-scale climatic anomalies in the Southern Hemisphere affect the foraging behaviour and population dynamics of a key marine predator, the king penguin. When large-scale subtropical dipole events occur simultaneously in both subtropical Southern Indian and Atlantic Oceans, they generate tropical anomalies that shift the foraging zone southward. Consequently the distances that penguins foraged from the colony and their feeding depths increased and the population size decreased. This represents an example of a robust and fast impact of large-scale climatic anomalies affecting a marine predator through changes in its at-sea behaviour and demography, despite lack of information on prey availability. Our results highlight a possible behavioural mechanism through which climate variability may affect population processes.

  13. Adaptive Hierarchical B-spline Surface Representation of Large-Scale Scattered Data

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The representation of large scale scattered data is a difficult problem, especially when various features of the representation, such as C2-continuity, are required. This paper describes a fast algorithm for large scale scattered data approximation and interpolation. The interpolation algorithm uses a coarse-to-fine hierarchical control lattice to fit the scattered data. The refinement process is only used in the regions where the error between the scattered data and the resulting surface is greater than a specified tolerance. A method to ensure C2-continuity is introduced to calculate the control lattice under constrained conditions. Experimental results show that this method can quickly represent large scale scattered data sets.
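    The coarse-to-fine idea, refining only where the residual exceeds a tolerance, can be sketched independently of the B-spline machinery. Below, a quadtree-style subdivision is driven by the local approximation error, with piecewise-constant cell averages standing in for the hierarchical B-spline patches of the paper; the test function and tolerance are arbitrary.

```python
import numpy as np

def adaptive_fit(x, y, z, bounds, tol, depth=0, max_depth=6):
    """Subdivide a cell only where the residual of a local constant fit
    exceeds tol (simplified stand-in for hierarchical B-spline refinement)."""
    x0, x1, y0, y1 = bounds
    inside = (x >= x0) & (x < x1) & (y >= y0) & (y < y1)
    if not inside.any():
        return []
    mean = z[inside].mean()
    err = np.abs(z[inside] - mean).max()
    if err <= tol or depth >= max_depth:
        return [(bounds, mean)]
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    cells = []
    for bx in ((x0, xm), (xm, x1)):
        for by in ((y0, ym), (ym, y1)):
            cells += adaptive_fit(x, y, z, (*bx, *by), tol, depth + 1, max_depth)
    return cells

rng = np.random.default_rng(0)
x, y = rng.random(20_000), rng.random(20_000)
z = np.exp(-50.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))   # sharp local feature
patches = adaptive_fit(x, y, z, (0.0, 1.0, 0.0, 1.0), tol=0.05)
print("patches after adaptive refinement:", len(patches))
```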

  14. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    Co-clustering is a problem of both theoretical and practical importance, e.g., market basket analysis and collaborative filtering, and in web scale text processing. We state the co-clustering problem in terms of non-parametric generative models which can address the issue of estimating the number...... of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale......-life large scale collaborative filtering data and web scale text corpora, demonstrating that latent mesoscale structures extracted by the co-clustering problem as formulated by the Infinite Relational Model (IRM) are consistent across consecutive runs with different initializations and also relevant...

  15. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Iovane, G. [Dipartimento di Ingegneria dell'Informazione e Matematica Applicata, Università di Salerno, Salerno (Italy)], e-mail: iovane@diima.unisa.it; Giordano, P. [Dipartimento di Ingegneria dell'Informazione e Matematica Applicata, Università di Salerno, Salerno (Italy)]

    2005-08-01

    In this work we introduce the hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large-scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that we seem to live in El Naschie's ε^(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the apparent Universe. In particular, we consider filamentary and planar large-scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this point of view the Universe appears as a large set of self-similar adaptive mirrors, as supported by three numerical simulations. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large-scale structure of a finite, and not particularly large, Universe.

  16. The Effect of Large Scale Magnetic Turbulence on the Acceleration of Electrons by Perpendicular Collisionless Shocks

    CERN Document Server

    Guo, Fan

    2010-01-01

    We study the physics of electron acceleration at collisionless shocks that move through a plasma containing large-scale magnetic fluctuations. We numerically integrate the trajectories of a large number of electrons, which are treated as test particles moving in the time-dependent electric and magnetic fields determined from 2-D hybrid simulations (kinetic ions, fluid electrons). The large-scale magnetic fluctuations affect the electrons in a number of ways and lead to efficient and rapid energization at the shock front. Since the electrons mainly follow magnetic lines of force, the large-scale braiding of field lines in space allows the fast-moving electrons to cross the shock front several times, leading to efficient acceleration. Ripples in the shock front occurring at various scales also contribute to the acceleration by mirroring the electrons. Our calculation shows that this process favors electron acceleration at perpendicular shocks. The current study is also helpful in understanding the inje...

  17. Survey of Large-Scale Data Management Systems for Big Data Applications

    Institute of Scientific and Technical Information of China (English)

    吴冷冬; 袁立言; 犹嘉槐

    2015-01-01

    Today, data is flowing into various organizations at an unprecedented scale. The ability to scale out to process an increased workload has become an important factor for the proliferation and popularization of database systems. Big data applications demand, and consequently lead to, the development of diverse large-scale data management systems in different organizations, ranging from traditional database vendors to new emerging Internet-based enterprises. In this survey, we investigate, characterize, and analyze large-scale data management systems in depth and develop comprehensive taxonomies for various critical aspects covering the data model, the system architecture, and the consistency model. We map the prevailing highly scalable data management systems to the proposed taxonomies, not only to classify the common techniques but also to provide a basis for analyzing current system scalability limitations. To overcome these limitations, we predict and highlight the principles that future efforts will need to pursue for the next generation of large-scale data management systems.

  18. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    Directory of Open Access Journals (Sweden)

    Yizi Shang

    2014-01-01

    Full Text Available This paper investigates how to facilitate large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and anticipatory data migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate these issues. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to develop, though a small overhead is unavoidable. Also, the anticipatory data migration mechanism can improve the efficiency of the platform when it needs to process big-data-based scientific applications.
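
    For concreteness, here is a minimal serial sketch of block Gauss-Jordan inversion, the computational kernel the paper uses as its example. It is not the paper's grid implementation: there is no pivoting (each pivot block is assumed nonsingular), no distribution of blocks across workers, and the block size and test matrix are invented for the example.

```python
import numpy as np

def block_gauss_jordan_inverse(A, b):
    """Invert A by Gauss-Jordan elimination performed at block granularity.

    A is partitioned into b-by-b blocks; each pivot block is assumed nonsingular
    (no block pivoting is performed in this sketch).
    """
    n = A.shape[0]
    assert n % b == 0, "matrix size must be a multiple of the block size"
    nb = n // b
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for k in range(nb):
        rk = slice(k * b, (k + 1) * b)
        # Normalise the pivot block row so that the pivot block becomes I.
        M[rk, :] = np.linalg.inv(M[rk, rk]) @ M[rk, :]
        # Eliminate the k-th block column from every other block row.
        for i in range(nb):
            if i != k:
                ri = slice(i * b, (i + 1) * b)
                M[ri, :] -= M[ri, rk] @ M[rk, :]
    return M[:, n:]                                # right half now holds A^{-1}

A = np.random.default_rng(0).normal(size=(8, 8)) + 8.0 * np.eye(8)
print(np.allclose(block_gauss_jordan_inverse(A, 2) @ A, np.eye(8)))  # True
```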

  19. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
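
    The map-style pattern such jobs follow can be sketched in a few lines of PySpark. This is a toy sketch under stated assumptions, not the authors' pipeline: pyspark and numpy are assumed to be installed, real miniSEED reading (e.g. with ObsPy from HDFS) is replaced by synthetic traces, and the quality metrics (RMS amplitude and fraction of zero samples) are simple stand-ins for the metrics computed in the study.

```python
import numpy as np
from pyspark import SparkContext

def quality_metrics(trace_id, samples):
    """Toy per-trace quality metrics: RMS amplitude and fraction of zero (gap) samples."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    gap_fraction = float(np.mean(samples == 0.0))
    return trace_id, {"rms": rms, "gap_fraction": gap_fraction}

if __name__ == "__main__":
    sc = SparkContext(appName="waveform-quality-sketch")
    rng = np.random.default_rng(0)
    # Stand-in for reading broadband traces from HDFS: (trace_id, samples) pairs.
    traces = [("NET.STA.00.BHZ.%04d" % i, rng.normal(size=4000)) for i in range(100)]
    results = (sc.parallelize(traces)
                 .map(lambda kv: quality_metrics(*kv))
                 .collect())
    for trace_id, metrics in results[:5]:
        print(trace_id, metrics)
    sc.stop()
```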

  20. Statistical equilibria of large scales in dissipative hydrodynamic turbulence

    CERN Document Server

    Dallas, Vassilios; Alexakis, Alexandros

    2015-01-01

    We present a numerical study of the statistical properties of three-dimensional dissipative turbulent flows at scales larger than the forcing scale. Our results indicate that the large scale flow can be described to a large degree by the truncated Euler equations with the predictions of the zero flux solutions given by absolute equilibrium theory, both for helical and non-helical flows. Thus, the functional shape of the large scale spectra can be predicted provided that scales sufficiently larger than the forcing length scale but also sufficiently smaller than the box size are examined. Deviations from the predictions of absolute equilibrium are discussed.
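
    For reference, the zero-flux (absolute equilibrium) predictions referred to above have, in the standard Kraichnan-style Gibbs-ensemble argument, the form (this is a sketch of the textbook result, with $\alpha$ and $\beta$ the Lagrange multipliers conjugate to energy and helicity in the truncated system, not an expression taken from the paper):

$$ E(k) \;\propto\; \frac{\alpha\,k^{2}}{\alpha^{2}-\beta^{2}k^{2}}, \qquad H(k) \;\propto\; \frac{-\beta\,k^{4}}{\alpha^{2}-\beta^{2}k^{2}}, $$

    so that in the non-helical limit $\beta \to 0$ the large-scale spectrum reduces to the equipartition form $E(k) \propto k^{2}$.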

  1. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat, analyzes its fatigue strain, simulates the flange fatigue loading conditions with the Bladed software, acquires the flange fatigue load spectrum with the rain-flow counting method and, finally, carries out the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis provides a new approach to flange fatigue analysis of large-scale wind turbine generators and has practical engineering value.
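
    The final Palmgren-Miner step in such a workflow amounts to summing damage fractions over the rain-flow counted load spectrum. The sketch below shows that step only; the S-N curve parameters ($N_i = C\,S_i^{-m}$) and the counted spectrum are invented for the example and are not values from the paper.

```python
import numpy as np

def miner_damage(stress_ranges, cycle_counts, C=1.0e12, m=3.0):
    """Palmgren-Miner linear cumulative damage sum D = sum(n_i / N_i).

    stress_ranges : stress-range bins from a rain-flow count (MPa)
    cycle_counts  : number of cycles n_i counted in each bin
    C, m          : assumed S-N curve parameters, N_i = C * S_i**(-m)
    Fatigue failure is predicted when D reaches 1.
    """
    S = np.asarray(stress_ranges, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    N = C * S ** (-m)          # allowable cycles at each stress range
    return float(np.sum(n / N))

# Hypothetical rain-flow counted spectrum for a flange detail over its design life.
D = miner_damage([40.0, 60.0, 90.0], [2.0e6, 5.0e5, 1.0e4])
print("damage sum D = %.3f -> %s" % (D, "OK" if D < 1.0 else "fatigue failure predicted"))
```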

  2. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and the associated quantification of uncertainty have become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  3. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of medicinal plant production, problems remain, such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a complete evaluation system tailored to the characteristics of each medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  4. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economic constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has...... necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is the investment decision, while the second phase is production...

  5. The fractal octahedron network of the large scale structure

    CERN Document Server

    Battaner, E

    1998-01-01

    In a previous article, we have proposed that the large scale structure network generated by large scale magnetic fields could consist of a network of octahedra only contacting at their vertexes. Assuming such a network could arise at different scales producing a fractal geometry, we study here its properties, and in particular how a sub-octahedron network can be inserted within an octahedron of the large network. We deduce that the scale of the fractal structure would range from $\approx$100 Mpc, i.e. the scale of the deepest surveys, down to about 10 Mpc, as other smaller scale magnetic fields were probably destroyed in the radiation dominated Universe.

  6. Large-scale streaming motions and microwave background anisotropies

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1989-12-01

    The minimal microwave background anisotropy is calculated on each angular scale implied by the existence of large-scale streaming motions. These minimal anisotropies, due to the Sachs-Wolfe effect, are obtained for different experiments, and give quite different results from those found in previous work. They are not in conflict with present theories of galaxy formation. Upper limits are imposed on the scale at which large-scale streaming motions can occur by extrapolating results from present double-beam-switching experiments. 17 refs.

  7. Distributed chaos tuned to large scale coherent motions in turbulence

    CERN Document Server

    Bershadskii, A

    2016-01-01

    It is shown, using direct numerical simulations and laboratory experiment data, that distributed chaos is often tuned to large-scale coherent motions in anisotropic inhomogeneous turbulence. The examples considered are: a fully developed turbulent boundary layer (range of coherence: $14 < y^{+} < 80$), turbulent thermal convection (in a horizontal cylinder), and Couette-Taylor flow. Two ways of tuning are described: one via the fundamental frequency (wavenumber) and another via a subharmonic (period doubling). For the second way the large-scale coherent motions are a natural component of distributed chaos. In all considered cases spontaneous breaking of space translational symmetry is accompanied by reflectional symmetry breaking.
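
    For orientation, the distributed-chaos framework referred to here characterises the spectrum by a stretched exponential of the form (a generic statement of the framework as I understand it, not a formula quoted from this paper):

$$ E(f) \;\propto\; \exp\!\left[-\left(f/f_{0}\right)^{\beta}\right], \qquad 0<\beta<1, $$

    where $f_{0}$ is set by the characteristic frequency (or wavenumber) of the large-scale coherent motion to which the chaos is tuned, either directly or through its subharmonic.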

  8. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large-scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared-memory computer consisting of Sun UltraSparc IV CPUs.

  9. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  10. Optimal Dispatching of Large-scale Water Supply System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and the actual state of the water supply system in China, an implicit model is established, which may be solved using the hierarchical optimization method. In particular, based on the analysis of a water supply system containing variable-speed pumps, a software tool has been developed. The application of this model to the city of Shenyang (China) is compared to the experience-based strategy. The results of this study show that the developed model is a very promising optimization method for controlling large-scale water supply systems.

  11. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    evaluation of the wind farm is necessarily required. Also, because a large-scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take the economic aspect into account. One of the methods to efficiently build and operate a wind farm is to construct......Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further recently. Due to the intermittent and variable wind source, reliability...

  12. Fast paths in large-scale dynamic road networks

    CERN Document Server

    Nannicini, Giacomo; Barbier, Gilles; Krob, Daniel; Liberti, Leo

    2007-01-01

    Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
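
    The underlying query can be illustrated with plain Dijkstra on a graph whose edge travel times are overridden by live traffic data where available. This is a minimal sketch of the problem setting only, not the highway-hierarchy method of the paper; the toy network and the congested-edge override are invented.

```python
import heapq

def fastest_path(graph, source, target, dynamic_times=None):
    """Dijkstra with per-edge travel times that can be overridden by live data.

    graph         : {node: [(neighbour, static_travel_time), ...]}
    dynamic_times : {(u, v): updated_travel_time} for edges with traffic info
    """
    dynamic_times = dynamic_times or {}
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, t_static in graph.get(u, []):
            nd = d + dynamic_times.get((u, v), t_static)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [target], target
    while node != source:                 # reconstruct the path
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Toy network: the static shortcut A-B is congested, so A-C-D becomes fastest.
g = {"A": [("B", 5.0), ("C", 7.0)], "B": [("D", 5.0)], "C": [("D", 4.0)], "D": []}
print(fastest_path(g, "A", "D", dynamic_times={("A", "B"): 20.0}))  # (11.0, ['A', 'C', 'D'])
```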

  13. Quantifying geological processes on Mars - Results of the high resolution stereo camera (HRSC) on Mars express

    NARCIS (Netherlands)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.|info:eu-repo/dai/nl/217675123; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; De Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K. D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-01-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale

  15. Stochastic simulation by image quilting of process-based geological models

    DEFF Research Database (Denmark)

    Hoffimann, Júlio; Scheidt, Celine; Barfod, Adrian

    2017-01-01

    In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new
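
    To make the patch-based idea concrete, the sketch below tiles an output grid with patches drawn from a training image so that the overlap error with already-placed patches is small. It is a heavily simplified, unconditional 2D stand-in: the minimum-error boundary cut, the 3D extension and the data conditioning developed in the paper are all omitted, and the toy training image is invented.

```python
import numpy as np

def simple_quilt(training, out_shape, patch=16, overlap=4, n_candidates=200, seed=0):
    """Stripped-down 2D image quilting: pick, for each tile, the candidate patch
    from the training image with the smallest sum-of-squared-differences error
    over the overlap with previously placed patches."""
    rng = np.random.default_rng(seed)
    H, W = out_shape
    th, tw = training.shape
    step = patch - overlap
    out = np.zeros((H, W))
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            best, best_err = None, np.inf
            for _ in range(n_candidates):
                r = int(rng.integers(0, th - patch + 1))
                c = int(rng.integers(0, tw - patch + 1))
                cand = training[r:r + patch, c:c + patch]
                err = 0.0
                if i > 0:   # overlap with the row of patches above
                    err += np.sum((cand[:overlap, :] - out[i:i + overlap, j:j + patch]) ** 2)
                if j > 0:   # overlap with the patch to the left
                    err += np.sum((cand[:, :overlap] - out[i:i + patch, j:j + overlap]) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            out[i:i + patch, j:j + patch] = best
    return out

# Toy training image: diagonal stripes standing in for a channelised facies pattern.
yy, xx = np.mgrid[0:128, 0:128]
training = ((xx + yy) // 8 % 2).astype(float)
realization = simple_quilt(training, (96, 96))
print(realization.shape)
```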

  16. Time-lapse motion picture technique applied to the study of geological processes

    Science.gov (United States)

    Miller, R.D.; Crandell, D.R.

    1959-01-01

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  17. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-07-01

    underestimation of wet-to-dry-season droughts and snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems.

    We conclude that drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe and that some challenges remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage (e.g. aquifers) in large-scale models, the parametrisation of storage processes also requires attention, for example through a global-scale dataset on aquifer characteristics.

  18. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  19. Response of large-scale coastal basins to wind forcing: influence of topography

    NARCIS (Netherlands)

    Chen, WenLong; Roos, P.C.; Schuttelaars, H.M.; Kumar, M.; Zitman, T.J.; Hulscher, S.J.M.H.

    2016-01-01

    Because wind is one of the main forcings in storm surge, we present an idealised process-based model to study the influence of topographic variations on the frequency response of large-scale coastal basins subject to time-periodic wind forcing. Coastal basins are represented by a semi-enclosed recta
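
    Idealised models of this kind are typically built on the linearised depth-averaged shallow-water equations forced by a time-periodic wind stress. A generic form (a sketch of that class of model, not necessarily the exact formulation of the paper; $\zeta$ is the free-surface elevation, $(u,v)$ the depth-averaged velocities, $h$ the undisturbed depth, $f$ the Coriolis parameter, $r$ a linear bottom-friction coefficient and $\tau^{(x)},\tau^{(y)}$ the wind-stress components oscillating at frequency $\omega$) reads

$$ \frac{\partial u}{\partial t}-fv=-g\frac{\partial \zeta}{\partial x}+\frac{\tau^{(x)}}{\rho h}-\frac{ru}{h},\qquad \frac{\partial v}{\partial t}+fu=-g\frac{\partial \zeta}{\partial y}+\frac{\tau^{(y)}}{\rho h}-\frac{rv}{h},\qquad \frac{\partial \zeta}{\partial t}+\frac{\partial (hu)}{\partial x}+\frac{\partial (hv)}{\partial y}=0, $$

    with $\tau^{(x)},\tau^{(y)} \propto \exp(i\omega t)$, so that the basin response can be characterised as a frequency response over $\omega$ for different depth profiles $h(x,y)$.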

  20. New Principles of Coordination in Large-scale Micro- and Molecular-Robotic Groups

    CERN Document Server

    Kornienko, S

    2011-01-01

    Micro- and molecular-robotic systems act as large-scale swarms. Capabilities of sensing, communication and information processing are very limited on these scales. This short position paper describes a swarm-based minimalistic approach, which can be applied for coordinating collective behavior in such systems.