WorldWideScience

Sample records for providing large-scale distributed

  1. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  2. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  3. Large-scale mass distribution in the Illustris simulation

    Science.gov (United States)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10⁸ M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.
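The density-temperature categorisation in the final sentence can be sketched as a simple classifier. The thresholds below are illustrative assumptions of the kind commonly used in the literature, not the exact cuts of the paper:

```python
# Toy classifier for baryon phases by overdensity and temperature.
# Threshold values are illustrative assumptions, not those of Haider et al.

def classify_baryon(overdensity, temperature_k):
    """Return a phase label for one gas element.

    overdensity   -- ratio of gas density to the cosmic mean density
    temperature_k -- gas temperature in kelvin
    """
    if temperature_k < 1e5:
        # Cool gas: split into condensed (collapsed) vs. diffuse by density.
        return "condensed" if overdensity > 1000.0 else "diffuse"
    if temperature_k < 1e7:
        return "WHIM"          # warm-hot intergalactic medium
    return "hot"               # e.g. intracluster gas

# Tally phases for a small mock sample of (overdensity, temperature) pairs.
sample = [(2e4, 3e4), (0.5, 1e4), (10.0, 5e5), (50.0, 5e8)]
counts = {}
for rho, t in sample:
    label = classify_baryon(rho, t)
    counts[label] = counts.get(label, 0) + 1
```

Applied to every gas cell of a simulation snapshot, such a tally yields the phase fractions quoted in the abstract.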

  4. The Cambridge CFD Grid for Large Scale Distributed CFD Applications

    OpenAIRE

    Yang, Xiaobo; Hayes, Mark; Jenkins, K.; Cant, Stewart R.

    2005-01-01

A revised version submitted for publication in the Elsevier Journal of Future Generation Computer Systems: promotional issue on Grid Computing; originally appeared in the Proceedings of ICCS 2004, Krakow, Poland, June 2004. The Cambridge CFD (computational fluid dynamics) Grid is a distributed problem solving environment for large-scale CFD applications set up between the Cambridge eScience Centre and the CFD Lab in the Engineering Department at the University of Cambridge. A Web portal, th...

  5. ANTITRUST ISSUES IN THE LARGE-SCALE FOOD DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli

    2014-12-01

Full Text Available In light of the slow modernization of the Italian large-scale food distribution sector, of the fragmentation at national level, of the significant roles of the cooperatives at local level and of the alliances between food retail chains, the ICA has developed a strong interest in this sector during recent years. After having analyzed the peculiarities of the Italian large-scale food distribution sector, this article shows the recent approach taken by the ICA toward the main antitrust issues in this sector. In the analysis of such issues, mainly the contractual relations between the GDO retailers and their suppliers, the introduction of Article 62 of Law no. 27 dated 24th March 2012 is crucial, because, by facilitating and encouraging complaints by the interested parties, it should allow the development of normal competitive dynamics within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  6. A New Large Scale Distributed System: Object Distribution

    NARCIS (Netherlands)

    Kavitha Muthukrishnan, K.; Meratnia, Nirvana; Koprinkov, G.T.; Lijding, M.E.M.; Havinga, Paul J.M.

In this work we introduce the Object Distribution System, a distributed system based on distribution models used in everyday life (e.g. food distribution chains, newspapers, etc.). This system is designed to scale correctly in a wide area network, using weak consistency replication mechanisms. It is

  7. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
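The snapshot-and-fork primitive can be illustrated with a toy model in which the "emulation" is just a dictionary of per-node state and in-flight packets; `deepcopy` stands in for capturing real virtual machines and switch queues, and the helper names are our own:

```python
import copy

# Toy illustration of the snapshot/fork idea. Staghorn does this for real
# KVM virtual machines and Open vSwitch queues; here a plain deepcopy
# stands in for capturing the full emulation state, packets included.

def snapshot(model):
    """Freeze time: capture node state *and* packets in transit."""
    return copy.deepcopy(model)

def step(model):
    """Advance the emulation one tick: deliver one in-flight packet."""
    if model["in_flight"]:
        pkt = model["in_flight"].pop(0)
        model["nodes"][pkt["dst"]].append(pkt["payload"])
    return model

model = {"nodes": {"a": [], "b": []},
         "in_flight": [{"dst": "b", "payload": "hello"}]}

frozen = snapshot(model)      # point-in-time copy, packet still in flight
step(model)                   # the original run continues...
fork = snapshot(frozen)       # ...while we fork a replay from the snapshot
step(fork)                    # replaying delivers the same packet again
```

Because the snapshot includes the in-flight packet, the forked run replays the same delivery that the original run performed.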

  8. Monitoring and Steering of Large-Scale Distributed Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A.; Kohl, J.A.

    1999-09-01

    Oak Ridge National Laboratory is developing a state-of-the-art parallel application development system called CUMULVS, which allows scientists to easily incorporate interactive visualization, computational steering and fault tolerance into distributed software applications. The system is a valuable tool for many large scientific applications because it enables the scientist to visually monitor large data fields and remotely control parameters inside a running application. Collaborative monitoring is provided by allowing multiple researchers to simultaneously attach to a simulation, each controlling their own view of the same or different data fields within the simulation. By supporting steering of a simulation while it is running, CUMULVS provides the opportunity to accelerate the process of scientific discovery. CUMULVS also provides a simple mechanism to incorporate automatic checkpointing and heterogeneous task migration into large applications so that simulations can continue to run for weeks unattended. This paper will give an overview of the CUMULVS system and its capabilities, including several case histories. The status of the project is described with instructions on how to obtain the software.

  9. Configuration monitoring tool for large-scale distributed computing

    CERN Document Server

    Wu, Y; Fisk, I; Graham, G; Kim, B J; Lü, X

    2004-01-01

The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing needs. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have a configuration monitor in place. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with the EDG Java Security packages, is used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources.

  10. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    Directory of Open Access Journals (Sweden)

    Ezequiel M Marzinelli

Full Text Available Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  11. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    Science.gov (United States)

    Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D

    2015-01-01

Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  12. Large-Scale Ichthyoplankton and Water Mass Distribution along the South Brazil Shelf

    Science.gov (United States)

    de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique

    2014-01-01

    Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27′ and 34°51′S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients. PMID:24614798
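The Indicator Species Analysis used above is commonly computed with the Dufrêne-Legendre IndVal score, IndVal = specificity × fidelity × 100. The sketch below implements that textbook formula; it is an illustration, not the software used in the paper:

```python
# Minimal Indicator Species Analysis (Dufrêne-Legendre IndVal) sketch.
# abundances: {water_mass: [per-site abundance of one species]}
# specificity = mean abundance in a group / sum of group mean abundances
# fidelity    = fraction of the group's sites where the species occurs

def indval(abundances):
    mean_ab = {g: sum(v) / len(v) for g, v in abundances.items()}
    total = sum(mean_ab.values())
    scores = {}
    for g, sites in abundances.items():
        specificity = mean_ab[g] / total if total else 0.0
        fidelity = sum(1 for x in sites if x > 0) / len(sites)
        scores[g] = 100.0 * specificity * fidelity
    return scores

# A species found at every "shelf" site and nowhere else scores 100 there,
# i.e. it is a perfect indicator of that water mass.
scores = indval({"shelf": [4, 2, 6], "plume": [0, 0, 0]})
```

High IndVal scores are how larval fish species such as Sardinella brasiliensis get associated with a specific water mass.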

  13. Large-scale ichthyoplankton and water mass distribution along the South Brazil Shelf.

    Directory of Open Access Journals (Sweden)

    Luis Carlos Pinto de Macedo-Soares

    Full Text Available Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27' and 34°51'S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients.

  14. Large-scale ichthyoplankton and water mass distribution along the South Brazil Shelf.

    Science.gov (United States)

    de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique

    2014-01-01

    Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27' and 34°51'S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients.

  15. High density distributed strain sensing of landslide in large scale physical model

    Science.gov (United States)

    Schenato, L.; Camporese, M.; Bersan, S.; Cola, S.; Galtarossa, A.; Pasuto, A.; Simonini, P.; Salandin, P.; Palmieri, L.

    2017-04-01

This paper describes the application of a commercial distributed optical fiber sensing system to a large-scale physical model of a landslide. An optical fiber cable, deployed inside the landslide body, is interrogated by means of optical frequency domain reflectometry with very high spatial density. A shallow landslide is triggered in the physical model by artificial rainfall and the evolution of the strain is measured up to the slope failure. Precursory signs of failure are detected well before the collapse, providing insights into the failure dynamics.

  16. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
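A minimal sketch of the gradient form of this problem: each sub-system holds one measurement row and the network iterates x ← x + α Σᵢ wᵢ aᵢ (bᵢ − aᵢ·x). The sum over sub-systems is what neighborhood communication would compute in a truly distributed run; here it is evaluated centrally, and this plain iteration is simpler than the paper's preconditioned algorithm:

```python
# Gradient sketch of distributed weighted least squares. Each sub-system i
# holds one row a_i, measurement b_i and weight w_i; the network-wide sum
# of local gradient terms drives the shared estimate toward the global
# WLS solution. Centralised here purely for illustration.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def distributed_wls(rows, b, w, alpha=0.3, iters=200):
    x = [0.0] * len(rows[0])
    for _ in range(iters):
        grad = [0.0] * len(x)
        for a_i, b_i, w_i in zip(rows, b, w):
            r = w_i * (b_i - dot(a_i, x))     # sub-system i's local residual
            for j, a_ij in enumerate(a_i):
                grad[j] += a_ij * r
        x = [xj + alpha * gj for xj, gj in zip(x, grad)]
    return x

# Three sub-systems observing x1, x2 and x1 + x2; exact answer is (2, 3).
x = distributed_wls([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                    [2.0, 3.0, 5.0], [1.0, 1.0, 1.0])
```

With the step size α below 2 over the largest eigenvalue of AᵀWA, the iteration converges linearly; the paper's scaling parameter and preconditioner serve to maximize that rate.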

  17. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    Science.gov (United States)

    Zhang, Daili

    . Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  18. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

As a common practice of large-scale transmission network analysis, the distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it has become necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.
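The aggregation idea can be sketched by replacing a feeder's turbines with the sum of their outputs as seen from the transmission bus. The static power curve below is an illustrative simplification of our own; the IEC 61400-27 models the paper relies on are dynamic models, not static curves:

```python
# Toy aggregation of distributed generation: for a transmission-level
# study, a feeder's wind turbines are replaced by one equivalent source
# whose output is the sum of the individual outputs, and the feeder is
# seen as its load minus that aggregate generation.

def turbine_power_mw(wind_ms, rated_mw=2.0, cut_in=3.0,
                     rated_ws=12.0, cut_out=25.0):
    """Illustrative piecewise-linear power curve (not an IEC model)."""
    if wind_ms < cut_in or wind_ms >= cut_out:
        return 0.0
    if wind_ms >= rated_ws:
        return rated_mw
    # Linear ramp between cut-in and rated wind speed (simplification).
    return rated_mw * (wind_ms - cut_in) / (rated_ws - cut_in)

def aggregate_feeder(load_mw, wind_speeds_ms):
    """Net power drawn from the transmission bus by the whole feeder."""
    generation = sum(turbine_power_mw(w) for w in wind_speeds_ms)
    return load_mw - generation

# 10 MW of load, two turbines at rated output, one below cut-in.
net = aggregate_feeder(10.0, [12.0, 15.0, 2.0])
```

The transmission simulation then sees a single net injection instead of the full distribution network.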

  19. Framing Innovation: The Role of Distributed Leadership in Gaining Acceptance of Large-Scale Technology Initiatives

    Science.gov (United States)

    Turner, Henry J.

    2014-01-01

    This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…

  20. Pump schedules optimisation with pressure aspects in complex large-scale water distribution systems

    Directory of Open Access Journals (Sweden)

    P. Skworcow

    2014-06-01

Full Text Available This paper considers optimisation of pump and valve schedules in complex large-scale water distribution networks (WDN), taking into account pressure aspects such as minimum service pressure and pressure-dependent leakage. An optimisation model is automatically generated in the GAMS language from a hydraulic model in the EPANET format and from additional files describing operational constraints, electricity tariffs and pump station configurations. The paper describes in detail how each hydraulic component is modelled. To reduce the size of the optimisation problem the full hydraulic model is simplified using a module reduction algorithm, while retaining the nonlinear characteristics of the model. Subsequently, the nonlinear programming solver CONOPT is used to solve the optimisation model, which is in the form of Nonlinear Programming with Discontinuous Derivatives (DNLP). The results produced by CONOPT are processed further by heuristic algorithms to generate an integer solution. The proposed approach was tested on a large-scale WDN model provided in the EPANET format. The considered WDN included complex structures and interactions between pump stations. Solving several scenarios considering different horizons, time steps, operational constraints, demand levels and topological changes demonstrated the ability of the approach to automatically generate and solve optimisation problems for a variety of requirements.
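The cost-versus-pressure trade-off can be illustrated with a deliberately tiny on/off schedule search, where a tank-level floor stands in for the minimum service pressure constraint. The numbers are invented, and the paper solves a far richer nonlinear hydraulic model rather than enumerating schedules:

```python
import itertools

# Tiny pump-scheduling toy: pick an on/off schedule for one pump over four
# time steps so that electricity cost is minimised while the tank level
# (a stand-in for minimum service pressure) never drops below a floor.

TARIFF = [0.05, 0.05, 0.20, 0.20]   # price per kWh: cheap night, dear day
DEMAND = [1.0, 1.0, 2.0, 2.0]       # units drawn from the tank per step
PUMP_RATE = 3.0                     # units pumped per step when on
PUMP_KWH = 10.0                     # energy used per step when on
TANK_MIN, TANK_START = 1.0, 2.0

def feasible_cost(schedule):
    """Return the schedule's cost, or None if the level floor is violated."""
    level, cost = TANK_START, 0.0
    for on, demand, price in zip(schedule, DEMAND, TARIFF):
        level += (PUMP_RATE if on else 0.0) - demand
        cost += PUMP_KWH * price if on else 0.0
        if level < TANK_MIN:
            return None
    return cost

best = min((s for s in itertools.product([0, 1], repeat=4)
            if feasible_cost(s) is not None),
           key=feasible_cost)
```

The optimum pumps only during the cheap tariff steps, filling the tank ahead of the expensive-demand period, which is exactly the behaviour the full-scale optimisation seeks.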

  1. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.
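The fragmentation-and-replication idea can be sketched as follows. The round-robin placement is our own illustration, not the paper's allocation algorithm; the point is only that replicas can be spread so that no single server holds every fragment of a sensitive file:

```python
# Sketch of fragment-and-replicate file allocation: split a sensitive
# file into fragments and place replicas on distinct servers so that a
# single compromised low-security server cannot reconstruct the file.

def fragment(data, n_fragments):
    """Split bytes into n roughly equal fragments."""
    size = -(-len(data) // n_fragments)          # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n_fragments)]

def allocate(fragments, servers, replicas=2):
    """Round-robin each fragment's replicas over distinct servers."""
    placement = {s: [] for s in servers}
    for i, frag in enumerate(fragments):
        for r in range(replicas):
            placement[servers[(i + r) % len(servers)]].append((i, frag))
    return placement

placement = allocate(fragment(b"top-secret-payload", 3),
                     ["s1", "s2", "s3", "s4"])
# Servers that could reconstruct the file on their own (should be none):
complete = [s for s, frags in placement.items()
            if {i for i, _ in frags} == {0, 1, 2}]
```

With four servers, three fragments and two replicas each, every fragment survives one server failure, yet no server ever sees the whole file.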

  2. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    Science.gov (United States)

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
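The divide-and-conquer pipeline can be sketched end to end. In this sketch a plain BFS chunker stands in for the MLkP/CR partitioner, and a circular placement inside grid cells stands in for the force-directed sub-layout and distribution steps:

```python
import collections, math

# Divide-and-conquer topology layout sketch: partition a large topology
# into small connected chunks, lay each chunk out independently, then
# place the chunk layouts on a coarse grid. All policies here are
# simplified stand-ins for the algorithms named in the abstract.

def bfs_partition(adjacency, max_size):
    """Greedily grow connected chunks of at most max_size nodes."""
    unvisited, parts = set(adjacency), []
    while unvisited:
        seed = min(unvisited)                        # deterministic start
        part, queue = [], collections.deque([seed])
        while queue and len(part) < max_size:
            n = queue.popleft()
            if n in unvisited:
                unvisited.remove(n)
                part.append(n)
                queue.extend(m for m in adjacency[n] if m in unvisited)
        parts.append(part)
    return parts

def layout(adjacency, max_size=3, cell=10.0):
    """Place each partition's nodes on a small circle inside a grid cell."""
    pos = {}
    for k, part in enumerate(bfs_partition(adjacency, max_size)):
        cx, cy = (k % 4) * cell, (k // 4) * cell     # this chunk's cell
        for j, node in enumerate(part):
            angle = 2 * math.pi * j / len(part)
            pos[node] = (cx + math.cos(angle), cy + math.sin(angle))
    return pos

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # 6-node ring
pos = layout(ring)
```

Each chunk's layout is independent of the others, which is the source of the parallelism the abstract claims for the real algorithm.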

  3. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

Full Text Available It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.

  4. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Science.gov (United States)

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies. PMID:24191145

  5. Design and implementation of a distributed large-scale spatial database system based on J2EE

    Science.gov (United States)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the tension between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans) are explained. Besides, experiments on the relation between spatial data volume and response time under different conditions are conducted, which show that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  6. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    datamining to identify trends becomes more attractive [35]. Because the P2P agents store the data, they can be leveraged to also distribute the... datamining computation. However, at one million agents, such a large-scale system requires a new method of organizing the agents and algorithms into

  7. Local, distributed topology control for large-scale wireless ad-hoc networks

    NARCIS (Netherlands)

    Nieberg, T.; Hurink, Johann L.

    In this document, topology control of a large-scale, wireless network by a distributed algorithm that uses only locally available information is presented. Topology control algorithms adjust the transmission power of wireless nodes to create a desired topology. The algorithm, named local power

  8. Reducing the Runtime Acceptance Costs of Large-Scale Distributed Component-Based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.

    2008-01-01

    Software Systems of Systems (SoS) are large-scale distributed component-based systems in which the individual components are elaborate and complex systems in their own right. Distinguishing characteristics are their short expected integration and deployment time, and the need to modify their

  9. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
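
    The abstract does not spell out the iteration, but the flavor of such schemes can be illustrated with a Richardson-type gradient iteration on the weighted least-squares objective, in which each sub-system computes only its own term of the global gradient and the terms are summed over the network. The toy measurements, step size, and iteration count below are assumptions for illustration, not the authors' algorithm.

```python
# Each sub-system i holds one weighted linear measurement b_i ≈ a_i · x.
measurements = [((1.0, 0.0), 1.0, 1.0),
                ((0.0, 1.0), 2.0, 1.0),
                ((1.0, 1.0), 3.1, 2.0)]  # tuples (a_i, b_i, w_i)

def local_gradient(x, a, b, w):
    """Contribution of one sub-system to the global gradient: w_i * a_i * (b_i - a_i · x)."""
    r = b - (a[0] * x[0] + a[1] * x[1])
    return (w * a[0] * r, w * a[1] * r)

def distributed_wls(measurements, alpha=0.1, iters=400):
    """Richardson iteration x <- x + alpha * A^T W (b - A x); the sum over
    sub-systems is what would be exchanged through neighborhood communication."""
    x = [0.0, 0.0]
    for _ in range(iters):
        g = [0.0, 0.0]
        for a, b, w in measurements:
            gi = local_gradient(x, a, b, w)
            g[0] += gi[0]
            g[1] += gi[1]
        x[0] += alpha * g[0]
        x[1] += alpha * g[1]
    return x

x = distributed_wls(measurements)  # converges to the solution of (A^T W A) x = A^T W b
```

    The fixed point of the update is exactly the weighted normal equations, so the iterate approaches the globally optimal estimate; the scaling parameter `alpha` plays the role of the convergence-rate tuning mentioned in the abstract.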

  10. An architecture for distributed real-time large-scale information processing for intelligence analysis

    Science.gov (United States)

    Santos, Eugene, Jr.; Santos, Eunice E.; Santos, Eugene S.

    2004-04-01

    Given a massive and dynamic space of information (nuggets) and a query to be answered, how can the correct (answer) nuggets be retrieved in an effective and efficient manner? We present a large-scale distributed real-time architecture based on anytime intelligent foraging, gathering, and matching (I-FGM) on massive and dynamic information spaces. Simply put, we envision that when given a search query, large numbers of computational processes are alerted or activated in parallel to begin identifying and retrieving the appro-priate information nuggets. In particular, our approach aims to provide an anytime capa-bility which functions as follows: Given finite computational resources, I-FGM will pro-ceed to explore the information space and, over time, continuously identify and update promising candidate nugget, thus, good candidates will be available at anytime on re-quest. With the computational costs of evaluating the relevance of a candidate nugget, the anytime nature of I-FGM will provide increasing confidence on nugget selections over time by providing admissible partial evaluations. When a new promising candidate is identified, the current set of selected nuggets is re-evaluated and updated appropriately. Essentially, I-FGM will guide its finite computational resources in locating the target in-formation nuggets quickly and iteratively over time. In addition, the goal of I-FGM is to naturally handle new nuggets as they appear. A central element of our framework is to provide a formal computational model of this massive data-intensive problem.
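
    The anytime behaviour described here can be sketched as a toy loop (not the authors' implementation): always deepen the partial evaluation of the currently most promising candidate, so that a best-so-far ranking is available at every step. The scores, depth schedule, and the admissible (optimistic) upper bound of 1.0 are invented for illustration.

```python
# Candidate nuggets with hidden true relevance; est is an admissible estimate.
nuggets = [{"id": i, "true_score": s, "depth": 0, "est": 1.0}
           for i, s in enumerate([0.2, 0.9, 0.5, 0.7])]

MAX_DEPTH = 3  # evaluations per nugget before its relevance is exact

def refine(n):
    """One unit of work: deepen the partial evaluation of a candidate.
    The estimate stays optimistic and converges linearly to the true score."""
    n["depth"] += 1
    f = n["depth"] / MAX_DEPTH
    n["est"] = f * n["true_score"] + (1 - f) * 1.0  # 1.0 = optimistic bound

def ifgm(nuggets, budget):
    """Anytime loop: within the resource budget, refine the most promising
    candidate; the current ranking can be returned on request at any time."""
    for _ in range(budget):
        best = max((n for n in nuggets if n["depth"] < MAX_DEPTH),
                   key=lambda n: n["est"], default=None)
        if best is None:
            break  # every candidate fully evaluated
        refine(best)
    return sorted(nuggets, key=lambda n: n["est"], reverse=True)

ranked = ifgm(nuggets, budget=20)
```

    With a small budget the ranking is provisional but usable; with a larger budget the estimates tighten toward the true scores, mirroring the "increasing confidence over time" property the abstract claims.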

  11. Particle Velocity Distributions and Large-Scale Electric Field in Solar Wind

    Science.gov (United States)

    Pavan, J.; Vinas, A. F.

    2016-12-01

    Velocity distributions of particles are key elements in the study of the solar wind. The physical mechanisms that regulate their many features are a matter of debate. The present work addresses the subject with a fully analytical method in order to establish the shape of particle velocity distributions in the solar wind. The method consists of solving the steady-state kinetic equation for particles and the related fluid equations, assigning spatial profiles for density and temperature that match observational data. The model is one-dimensional in configuration space and two-dimensional in velocity space, and accounts for large-scale processes, namely advection, gravity, magnetic mirroring and the large-scale ambipolar electric field, without the aid of wave-particle interactions or collisions. The findings reported add to the general understanding of the regulation of particle distributions in the solar wind and to the prediction of their shape in regions inaccessible to in situ measurements.

  12. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One promising application of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large-scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using radiotracer techniques in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, suitable half-life and on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge gained from this study are important for extending this technique to industrial facilities in the future.
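
    The MRT computation mentioned here is standard tracer analysis: the mean residence time is the first moment of the measured tracer response curve, MRT = ∫ t C(t) dt / ∫ C(t) dt. A minimal sketch with trapezoidal integration follows; the detector readings are invented for illustration, not data from the study.

```python
def mean_residence_time(times, counts):
    """MRT as the first moment of the tracer response curve,
    integrated with the trapezoidal rule."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) / 2 * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    return trapz([t * c for t, c in zip(times, counts)]) / trapz(counts)

# Hypothetical background-corrected detector response (time in minutes)
times  = [0, 1, 2, 3, 4, 5, 6]
counts = [0, 2, 8, 10, 6, 3, 1]
mrt = mean_residence_time(times, counts)  # ≈ 3.05 minutes for this toy curve
```

    In practice the response curve from each NaI detector would be background-corrected and decay-corrected before the moments are taken.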

  13. Spatial distribution of ultra-diffuse galaxies within large-scale structures

    Science.gov (United States)

    Román, Javier; Trujillo, Ignacio

    2017-06-01

    Taking advantage of the Sloan Digital Sky Survey Stripe82 data, we have explored the spatial distribution of ultra-diffuse galaxies (UDGs) within an area of 8 × 8 Mpc² centred around the galaxy cluster Abell 168 (z = 0.045). This intermediate massive cluster (σ = 550 km s⁻¹) is surrounded by a complex large-scale structure. Our work confirms the presence of UDGs in the cluster and in the large-scale structure that surrounds it, and it is the first detection of UDGs outside clusters. Approximately 50 per cent of the UDGs analysed in the selected area inhabit the cluster region (˜11 ± 5 per cent in the core and ˜39 ± 9 per cent in the outskirts), whereas the remaining UDGs are found outside the main cluster structure (˜50 ± 11 per cent). The colours and the spatial distribution of the UDGs within this large-scale structure are more similar to dwarf galaxies than to L⋆ galaxies, suggesting that most UDGs could be bona fide dwarf galaxies.

  14. Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogenous enterprise

    CERN Document Server

    Eisenberg, C

    2003-01-01

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real-life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource-constrained project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as cen...

  15. Using geographic information system analyses to monitor large-scale distribution of nicotine replacement therapy in New York City.

    Science.gov (United States)

    Czarnecki, Karen Davis; Goranson, Chris; Ellis, Jennifer A; Vichinsky, Laura E; Coady, Micaela H; Perl, Sarah B

    2010-01-01

    Since 2003, the New York City Department of Health and Mental Hygiene has distributed nicotine replacement therapy to adult smokers through annual large-scale distribution programs. In 2008, the Department formally integrated geographic information system (GIS) analyses to track program enrollment, map the geographic density of enrollees, and assess the effects of outreach strategies. GIS analyses provided a unique, near real-time visual method of assessing participation patterns as well as the impact of media and outreach strategies. Among neighborhoods with high smoking prevalence, lower income neighborhoods had higher enrollment compared to higher income neighborhoods. Mapping before and after a press release demonstrated that program interest increased over 700% in one area. Although GIS analysis is traditionally utilized for large-scale infectious disease surveillance, the New York City Department of Health and Mental Hygiene used GIS to inform and improve an annual large-scale smoking cessation program. These analyses provide unique feedback that can aid public health program planners in improving the efficiency and efficacy of service delivery. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  16. Large-scale determinants of intestinal schistosomiasis and intermediate host snail distribution across Africa

    DEFF Research Database (Denmark)

    Stensgaard, Anna-Sofie; Utzinger, Jürg; Vounatsou, Penelope

    2013-01-01

    to climate change. Here, we combine a growing degree day model for Schistosoma mansoni with species distribution models for the intermediate host snail (Biomphalaria spp.) to investigate large-scale environmental determinants of the distribution of the African S. mansoni-Biomphalaria system and potential...... impacts of climatic changes. Snail species distribution models included several combinations of climatic and habitat-related predictors; the latter divided into "natural" and "human-impacted" habitat variables to measure anthropogenic influence. The predictive performance of the combined snail......-parasite model was evaluated against a comprehensive compilation of historical S. mansoni parasitological survey records, and then examined for two climate change scenarios of increasing severity for 2080. Future projections indicate that while the potential S. mansoni transmission area expands, the snail ranges...

  17. A Framework for Managing Access of Large-Scale Distributed Resources in a Collaborative Platform

    Directory of Open Access Journals (Sweden)

    Su Chen

    2009-01-01

    Full Text Available In an e-Science environment, large-scale distributed resources in autonomous domains are aggregated by unified collaborative platforms to support scientific research across organizational boundaries. In order to enhance the scalability of access management, an integrated approach for decentralizing the task from resource owners to administrators on the platform is needed. We propose an extensible access management framework to meet this requirement by supporting an administrative delegation policy. This feature allows administrators on the platform to make new policies based on the original policies made by resource owners. An access protocol that merges SAML and XACML is also included in the framework. It defines how distributed parties operate with each other to make decentralized authorization decisions.

  18. Large-scale distributed foraging, gathering, and matching for information retrieval: assisting the geospatial intelligence analyst

    Science.gov (United States)

    Santos, Eugene, Jr.; Santos, Eunice E.; Nguyen, Hien; Pan, Long; Korah, John

    2005-03-01

    With the proliferation of online resources, there is an increasing need to effectively and efficiently retrieve data and knowledge from distributed geospatial databases. One of the key challenges of this problem is the fact that geospatial databases are usually large and dynamic. In this paper, we address this problem by developing a large-scale distributed intelligent foraging, gathering and matching (I-FGM) framework for massive and dynamic information spaces. We assess the effectiveness of our approach by comparing a prototype I-FGM against two simple control systems (randomized selection and a partially intelligent system). We designed and employed a medium-sized testbed to get an accurate measure of retrieval precision and recall for each system. The results obtained show that I-FGM retrieves relevant information more quickly than the two other control approaches.

  19. Scalable and Fully Distributed Localization in Large-Scale Sensor Networks

    Directory of Open Access Journals (Sweden)

    Miao Jin

    2017-06-01

    Full Text Available This work proposes a novel connectivity-based localization algorithm, well suited to large-scale sensor networks with complex shapes and a non-uniform nodal distribution. In contrast to current state-of-the-art connectivity-based localization methods, the proposed algorithm is highly scalable, with linear computation and communication costs with respect to the size of the network, and fully distributed, where each node needs only the information of its neighbors, without a cumbersome partitioning and merging process. The algorithm is theoretically guaranteed and numerically stable. Moreover, the algorithm can be readily extended to the localization of networks with one-hop transmission-range distance measurements, and the propagation of the measurement error at one sensor node is limited within a small area of the network around the node. Extensive simulations and comparisons with other methods under various representative network settings are carried out, showing the superior performance of the proposed algorithm.

  20. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply Economic Model Predictive Control (EMPC), a control strategy that is an extension of Model Predictive Control (MPC). The mathematical model of the large-scale energy system embodies the decoupled dynamics of each power unit. Moreover, all units of the grid contribute to the overall power production.
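
    The balancing task EMPC addresses can be illustrated, very loosely, by a single dispatch step: at each step of the receding horizon, the controller chooses the cheapest feasible mix of unit production to meet demand. The greedy merit-order rule below is a stand-in for the actual optimization solved in the thesis, and the unit data are invented.

```python
def empc_step(units, demand):
    """One economic dispatch step (greedy merit order as a stand-in for the
    per-step optimization): cover demand at least marginal cost."""
    plan = {}
    remaining = demand
    for name, capacity, cost in sorted(units, key=lambda u: u[2]):
        p = min(capacity, max(remaining, 0.0))  # never schedule negative power
        plan[name] = p
        remaining -= p
    return plan, max(remaining, 0.0)

# Hypothetical units: (name, capacity in MW, marginal cost per MWh)
units = [("wind", 30.0, 0.0), ("gas", 80.0, 40.0), ("coal", 60.0, 25.0)]
plan, unmet = empc_step(units, demand=100.0)
```

    Zero-marginal-cost renewables are dispatched first, which is why a rising RES share makes the economic objective (rather than a pure setpoint-tracking objective) attractive; a real EMPC would additionally optimize over the whole horizon subject to unit dynamics.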

  1. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  2. Multiple antibiotic resistance genes distribution in ten large-scale membrane bioreactors for municipal wastewater treatment.

    Science.gov (United States)

    Sun, Yanmei; Shen, Yue-Xiao; Liang, Peng; Zhou, Jizhong; Yang, Yunfeng; Huang, Xia

    2016-12-01

    Wastewater treatment plants are thought to be potential reservoirs of antibiotic resistance genes. In this study, GeoChip was used to analyze multiple antibiotic resistance genes, including four multidrug efflux system gene groups and three β-lactamase genes, in ten large-scale membrane bioreactors (MBRs) for municipal wastewater treatment. Results revealed that the diversity of antibiotic resistance genes varied considerably among MBRs, but about 40% of the antibiotic resistance genes were common to all. The average signal intensity of each antibiotic resistance group was similar among MBRs; nevertheless, the total abundance of each group varied remarkably, and the dominant resistance gene groups differed in individual MBRs. The antibiotic resistance genes derived mainly from Proteobacteria and Actinobacteria. Further study indicated that TN, TP and COD of the influent, and temperature and conductivity of the mixed liquor, were significantly (P<0.05) correlated with the distribution of multiple antibiotic resistance genes in MBRs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused the formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet and their resulting size distribution have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber number by orders of magnitude. We present experimental high-speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant-to-oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where the large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, was recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).
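
    For reference, the Weber number compares inertial to interfacial-tension forces, We = ρU²L/σ; because dispersants cut the oil-water interfacial tension σ by orders of magnitude, they raise We by the same factor. The numerical values below are illustrative assumptions, not measurements from the study.

```python
def weber(rho, velocity, length, sigma):
    """Weber number We = rho * U^2 * L / sigma (all SI units)."""
    return rho * velocity**2 * length / sigma

# Assumed values: seawater density (kg/m^3), jet exit velocity (m/s),
# orifice diameter (m), and interfacial tension (N/m) without vs. with
# dispersant (here reduced by a factor of 1000 for illustration).
we_plain = weber(1025.0, 2.0, 0.005, 20e-3)
we_disp  = weber(1025.0, 2.0, 0.005, 20e-6)
```

    The thousand-fold drop in σ translates directly into a thousand-fold rise in We, pushing the jet toward much finer droplet breakup.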

  4. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis : Distributed dictionary representation.

    Science.gov (United States)

    Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza

    2018-02-01

    Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
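
    The core of DDR — scoring text by the similarity between a dictionary's mean embedding and the text's mean embedding, instead of counting dictionary words — fits in a few lines. The tiny two-dimensional embedding table below is a made-up stand-in for a trained word-vector model; real applications would load pretrained embeddings.

```python
import math

# Hypothetical toy embeddings (a real DDR setup would use trained vectors).
EMB = {
    "happy": [0.9, 0.1], "joy": [0.8, 0.2], "glad": [0.85, 0.15],
    "sad":   [0.1, 0.9], "tax": [0.5, 0.5],
}

def avg(vecs):
    """Component-wise mean of a list of vectors."""
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def ddr_score(dictionary, text):
    """DDR: cosine similarity between the dictionary's mean vector and the
    text's mean vector; works even when no dictionary word appears verbatim."""
    d = avg([EMB[w] for w in dictionary])
    t = avg([EMB[w] for w in text.split() if w in EMB])
    return cosine(d, t)

pos = ddr_score(["happy", "joy"], "glad tax")  # "glad" is near the dictionary
neg = ddr_score(["happy", "joy"], "sad tax")
```

    Note that "glad" is not in the dictionary, yet the first text scores higher: this is the linguistic-coverage advantage over word counts that the paper emphasizes.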

  5. Distributed and Cooperative Link Scheduling for Large-Scale Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Swami Ananthram

    2007-01-01

    Full Text Available A distributed and cooperative link-scheduling (DCLS) algorithm is introduced for large-scale multihop wireless networks. With this algorithm, each and every active link in the network cooperatively calibrates its environment and converges to a desired link schedule for data transmissions within a time frame of multiple slots. This schedule is such that the entire network is partitioned into a set of interleaved subnetworks, where each subnetwork consists of concurrent cochannel links that are properly separated from each other. The desired spacing in each subnetwork can be controlled by a tuning parameter and the number of time slots specified for each frame. Following the DCLS algorithm, a distributed and cooperative power control (DCPC) algorithm can be applied to each subnetwork to ensure a desired data rate for each link with minimum network transmission power. As shown consistently by simulations, the DCLS algorithm along with a DCPC algorithm yields significant power savings. The power savings also imply an increased feasible region of averaged link data rates for the entire network.

  7. Large scale density perturbations from a uniform distribution by wave transport

    Science.gov (United States)

    Lieu, Richard

    2017-11-01

    It has long been known that a uniform distribution of matter cannot produce a Poisson distribution of density fluctuations on very large scales 1/k > ct by the motion of discrete particles over timescale t. The constraint is part of what is sometimes referred to as the Zel'dovich bound. We investigate in this paper the transport of energy by the propagation of waves emanating incoherently from a regular and infinite lattice of oscillators, each having the same finite amount of energy reserve initially. The model we employ does not involve the expansion of the Universe; indeed there is no need to do so, because although the scales of interest are all deeply sub-horizon, the size of regions over which perturbations are evaluated does far exceed ct, where t is the time elapsed since a uniform array of oscillators started to emit energy by radiation (it is assumed that t greatly exceeds the duration of emission). We find that to lowest order, when only wave fields ∝ 1/r are included, there is exact compensation between the energy loss of the oscillators and the energy emitted into space, which means P(0) = 0 for the power spectrum of density fluctuations on the largest scales. This is consistent with the Zel'dovich bound; it proves that the model employed is causal, has finite support, and energy is strictly conserved. To the next order, when near fields ∝ r⁻² are included, however, P(0) settles at late times to a positive value that depends only on time, as t⁻² (the same applies to an excess (non-conserving) energy term). We further observe that the behavior is peculiar to near fields. Even though this effect may give the impression of superluminal energy transport, there is no violation of causality because the two-point function vanishes completely for r > t if the emission of each oscillator is sharply truncated beyond some duration. The result calls into question any need of enlisting cosmic inflation to seed large-scale density perturbations in the early

  8. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2013-01-01

    Rucio is the next-generation Distributed Data Management(DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is described...

  9. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2014-01-01

    Rucio is the next-generation Distributed Data Management(DDM) system benefiting from recent advances in cloud and ”Big Data” computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is descr...

  10. Large-scale spatial distribution patterns of echinoderms in nearshore rocky habitats.

    Directory of Open Access Journals (Sweden)

    Katrin Iken

    Full Text Available This study examined echinoderm assemblages from nearshore rocky habitats for large-scale distribution patterns, with specific emphasis on identifying latitudinal trends and large regional hotspots. Echinoderms were sampled from 76 globally distributed sites within 12 ecoregions, following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). Sample-based species richness was overall low […]; the abundance of echinoderms (>2 cm) in 1 m² quadrats was highest in the Caribbean ecoregions, and echinoids dominated these assemblages with an average of 5 ind m⁻². In contrast, intertidal echinoderm assemblages collected from clearings of 0.0625 m² quadrats had the highest abundance and richness in the Northeast Pacific ecoregions, where asteroids and holothurians dominated with an average of 14 ind per 0.0625 m². Distinct latitudinal trends existed for abundance and richness in intertidal assemblages, with declines from peaks at high northern latitudes. No latitudinal trends were found for subtidal echinoderm assemblages with either sampling technique. Latitudinal gradients appear to be superseded by regional diversity hotspots. In these hotspots echinoderm assemblages may be driven by local and regional processes, such as overall productivity and evolutionary history. We also tested a set of 14 environmental variables (six natural and eight anthropogenic) as potential drivers of echinoderm assemblages by ecoregion. The natural variables of salinity, sea-surface temperature, chlorophyll a, and primary productivity were strongly correlated with echinoderm assemblages; the anthropogenic variables of inorganic pollution and nutrient contamination also contributed to correlations. Our results indicate that nearshore echinoderm assemblages appear to be shaped by a network of environmental and ecological processes, and by the differing responses of various echinoderm taxa, making generalizations about the patterns of nearshore rocky habitat echinoderm

  11. Large-scale spatial distribution patterns of gastropod assemblages in rocky shores.

    Directory of Open Access Journals (Sweden)

    Patricia Miloslavich

    Full Text Available Gastropod assemblages from nearshore rocky habitats were studied over large spatial scales to (1) describe broad-scale patterns in assemblage composition, including patterns by feeding modes, (2) identify latitudinal patterns of biodiversity, i.e., richness and abundance of gastropods and/or regional hotspots, and (3) identify potential environmental and anthropogenic drivers of these assemblages. Gastropods were sampled from 45 sites distributed within 12 Large Marine Ecosystem regions (LMEs), following the NaGISA (Natural Geography in Shore Areas) standard protocol (www.nagisa.coml.org). A total of 393 gastropod taxa from 87 families were collected. Eight of these families (9.2%) appeared in four or more different LMEs. Among these, the Littorinidae was the most widely distributed (8 LMEs), followed by the Trochidae and the Columbellidae (6 LMEs). In all regions, assemblages were dominated by a few species, the most diverse and abundant of which were herbivores. No latitudinal gradients were evident in relation to species richness or densities among sampling sites. The highest diversity was found in the Mediterranean and in the Gulf of Alaska, while the highest densities were found at different latitudes and were represented by few species within one genus (e.g., Afrolittorina in the Agulhas Current, Littorina in the Scotian Shelf, and Lacuna in the Gulf of Alaska). No significant correlation was found between species composition and environmental variables (r≤0.355, p>0.05). Contributing variables to this low correlation included invasive species, inorganic pollution, SST anomalies, and chlorophyll-a anomalies. Despite data limitations in this study, which restrict conclusions in a global context, this work represents the first effort to sample gastropod biodiversity on rocky shores using a standardized protocol across a wide scale. 
Our results will generate more work to build global databases allowing for large-scale diversity comparisons of rocky intertidal assemblages.

  12. Large-scale spatial distribution patterns of echinoderms in nearshore rocky habitats.

    Science.gov (United States)

    Iken, Katrin; Konar, Brenda; Benedetti-Cecchi, Lisandro; Cruz-Motta, Juan José; Knowlton, Ann; Pohle, Gerhard; Mead, Angela; Miloslavich, Patricia; Wong, Melisa; Trott, Thomas; Mieszkowska, Nova; Riosmena-Rodriguez, Rafael; Airoldi, Laura; Kimani, Edward; Shirayama, Yoshihisa; Fraschetti, Simonetta; Ortiz-Touzet, Manuel; Silva, Angelica

    2010-11-05

    This study examined echinoderm assemblages from nearshore rocky habitats for large-scale distribution patterns with specific emphasis on identifying latitudinal trends and large regional hotspots. Echinoderms were sampled from 76 globally-distributed sites within 12 ecoregions, following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). Sample-based species richness was overall low (species per site), with a total of 32 asteroid, 18 echinoid, 21 ophiuroid, and 15 holothuroid species. Abundance and species richness in intertidal assemblages sampled with visual methods (organisms >2 cm in 1 m(2) quadrats) was highest in the Caribbean ecoregions and echinoids dominated these assemblages with an average of 5 ind m(-2). In contrast, intertidal echinoderm assemblages collected from clearings of 0.0625 m(2) quadrats had the highest abundance and richness in the Northeast Pacific ecoregions where asteroids and holothurians dominated with an average of 14 ind 0.0625 m(-2). Distinct latitudinal trends existed for abundance and richness in intertidal assemblages with declines from peaks at high northern latitudes. No latitudinal trends were found for subtidal echinoderm assemblages with either sampling technique. Latitudinal gradients appear to be superseded by regional diversity hotspots. In these hotspots echinoderm assemblages may be driven by local and regional processes, such as overall productivity and evolutionary history. We also tested a set of 14 environmental variables (six natural and eight anthropogenic) as potential drivers of echinoderm assemblages by ecoregions. The natural variables of salinity, sea-surface temperature, chlorophyll a, and primary productivity were strongly correlated with echinoderm assemblages; the anthropogenic variables of inorganic pollution and nutrient contamination also contributed to correlations. Our results indicate that nearshore echinoderm assemblages appear to be shaped by a network of environmental and ecological processes, and by the differing responses of various echinoderm taxa.

  13. Systematic large-scale study of the inheritance mode of Mendelian disorders provides new insight into human diseasome.

    Science.gov (United States)

    Hao, Dapeng; Wang, Guangyu; Yin, Zuojing; Li, Chuanxing; Cui, Yan; Zhou, Meng

    2014-11-01

    One important piece of information about human Mendelian disorders is the mode of inheritance. Recent studies of human genetic diseases on a large scale have provided many novel insights into the underlying molecular mechanisms. However, most successful analyses ignored the mode of inheritance of diseases, which severely limits our understanding of how human disease mechanisms relate to the mode of inheritance at the large scale. We therefore conducted a systematic large-scale study of the inheritance mode of Mendelian disorders to bring new insight into human diseases. Our analyses include a comparison between dominant and recessive disease genes on both genomic and proteomic characteristics, Mendelian mutations, protein network properties, and disease connections at both the genetic and the population levels. We found that dominant disease genes are more functionally central, more topologically central, and more sensitive to disease outcome. On the basis of these findings, we suggest that dominant diseases should have higher genetic heterogeneity and more comprehensive connections with each other compared with recessive diseases, a prediction we confirm by disease network and disease comorbidity analyses.
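
The network-centrality comparison described above can be illustrated on a toy protein-interaction network. The graph, the gene labels, and the dominant/recessive assignments below are all invented for illustration:

```python
# Toy illustration of comparing network centrality between two gene sets
# (all genes, edges, and dominant/recessive labels are hypothetical).
from collections import defaultdict

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E"), ("E", "F")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def mean_degree(genes):
    """Average degree (a simple centrality measure) of a gene set."""
    return sum(len(adj[g]) for g in genes) / len(genes)

dominant = {"A", "B"}    # hypothetical dominant-disease genes
recessive = {"E", "F"}   # hypothetical recessive-disease genes

print(mean_degree(dominant), mean_degree(recessive))  # 2.5 1.5
```

Real analyses would use curated interactomes and richer measures (e.g. betweenness), but the comparison has this shape: summarize a centrality statistic per gene set, then test the difference.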

  14. A Java-Based Distributed Approach for Generating Large-Scale Social Network Graphs

    NARCIS (Netherlands)

    V.N. Serbanescu (Vlad); F.S. de Boer (Frank)

    2016-01-01

    Big Data management is an important topic of research not only in Computer Science, but also in several other domains. A challenging use of Big Data is the generation of large-scale graphs used to model social networks. In this paper, we present an actor-based Java library that eases the
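
Although the library described above is Java- and actor-based, the underlying task can be sketched in a few lines: generating a social-network-like graph by preferential attachment. This is a single-process illustration of the task, not the paper's distributed implementation:

```python
# Single-process sketch of social-network-like graph generation by
# preferential attachment (Barabási–Albert style); the paper's actor-based
# library would distribute this work across many workers.
import random

def barabasi_albert(n, m=2, seed=1):
    """Return the edge list of an n-node preferential-attachment graph."""
    rng = random.Random(seed)
    edges = []
    repeated = list(range(m))  # each node appears once per incident edge
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            # sampling from `repeated` picks targets proportional to degree
            chosen.add(rng.choice(repeated))
        for t in chosen:
            edges.append((new, t))
            repeated.extend([new, t])
    return edges

edges = barabasi_albert(1000, m=2)
print(len(edges))  # 2 * (1000 - 2) = 1996
```

The degree-weighted `repeated` pool is what produces the heavy-tailed degree distribution typical of social networks.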

  15. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    Science.gov (United States)

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  16. Large-scale GWAS identifies multiple loci for hand grip strength providing biological insights into muscular fitness

    DEFF Research Database (Denmark)

    Willems, Sara M; Wright, Daniel J.; Day, Felix R

    2017-01-01

    Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of 195...... with involvement of psychomotor impairment (PEX14, LRPPRC and KANSL1). Mendelian randomization analyses are consistent with a causal effect of higher genetically predicted grip strength on lower fracture risk. In conclusion, our findings provide new biological insight into the mechanistic underpinnings of grip...

  17. Experiences from Measuring Learning and Performance in Large-Scale Distributed Software Development

    OpenAIRE

    Britto, Ricardo; Šmite, Darja; Lars-Ola, Damm

    2016-01-01

    Background: Developers and development teams in large-scale software development are often required to learn continuously. Organizations also face the need to train and support new developers and teams on-boarded in ongoing projects. Although learning is associated with performance improvements, experience shows that training and learning do not always result in better performance, or that significant improvements may take too long. Aims: In this paper, we report our experiences from establis...

  18. Status of the Use of Large-Scale Corba-Distributed Software Framework for NIF Controls

    Energy Technology Data Exchange (ETDEWEB)

    Carey, R W; Bettenhausen, R C; Estes, C M; Fisher, J M; Krammen, J E; Lagin, L J; Ludwigsen, A P; Mathisen, D G; Matone, J T; Patterson, R W; Reynolds, C A; Sanchez, R J; Stout, E A; Tappero, J D; Van Arsdall, P J

    2005-09-09

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is based on a scalable software framework that will be distributed over some 750 computers throughout the NIF. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Object-oriented software design patterns are implemented as templates to be extended by application software. Developers extend the framework base classes to model the numerous physical control points. About 140 thousand software objects, each individually addressable through CORBA, will be active at full scale. Most of the objects have persistent state that is initialized at system start-up and stored in a database. Centralized server programs that implement events, alerts, reservations, message logging, data archive, name services, and process management provide additional framework services. A higher-level model-based, distributed shot automation framework also provides a flexible and scalable scripted framework for automatic sequencing of work-flow for control and monitoring of NIF shots. The ICCS software framework has allowed for efficient construction of a software system that supports a large number of distributed control points representing a complex control application. Status of the use of this framework during first experimental shot campaigns and initial commissioning and build-out of the laser facility is described.
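
The framework pattern described above (application classes extending framework base classes, with each control point individually addressable through a central name service) can be caricatured in a few lines. The real ICCS uses CORBA; the class names, device names, and registry below are invented stand-ins:

```python
# Schematic analogue of the ICCS pattern (not CORBA, not the real ICCS code):
# application classes extend a framework base class, and every control point
# registers under a unique name so it can be addressed individually.
class ControlPoint:
    registry = {}                     # stand-in for the CORBA name service

    def __init__(self, name):
        self.name = name
        self.state = "initialized"    # persistent state would come from a DB
        ControlPoint.registry[name] = self

class Shutter(ControlPoint):          # developers extend the base class
    def open(self):
        self.state = "open"

# A hypothetical control point, addressed through the central registry.
Shutter("laser_bay_1/shutter_07")
ControlPoint.registry["laser_bay_1/shutter_07"].open()
print(ControlPoint.registry["laser_bay_1/shutter_07"].state)  # open
```

Scaled to ~140,000 objects, the same pattern gives every physical control point a uniform, individually addressable interface.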

  19. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems

    Directory of Open Access Journals (Sweden)

    Ali Albattat

    2016-08-01

    Full Text Available The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.
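
The core event-triggering idea (transmit over the network only when the state has drifted past a user-defined threshold) can be sketched on a scalar plant. This is a minimal illustration of the triggering condition only, not the paper's adaptive architectures; the plant, gain, and threshold values are invented:

```python
# Minimal event-triggered control sketch: the controller uses the last
# *transmitted* state and a new transmission occurs only when the true state
# drifts more than `threshold` away from it, saving network bandwidth.
def simulate(threshold, steps=200, dt=0.01):
    x, x_sent, transmissions = 1.0, 1.0, 0
    for _ in range(steps):
        u = -5.0 * x_sent                 # control based on last sent state
        x += dt * (x + u)                 # unstable plant dx/dt = x + u
        if abs(x - x_sent) > threshold:   # event-triggering condition
            x_sent = x
            transmissions += 1
    return abs(x), transmissions

err_event, tx_event = simulate(threshold=0.05)
err_full, tx_full = simulate(threshold=0.0)   # ~transmit every step
print(tx_event < tx_full, err_event < 0.2)    # fewer sends, still stable
```

The trade-off characterized in the paper appears even here: a larger threshold reduces transmissions but enlarges the ultimate bound on the state error.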

  20. Large-Scale CORBA-Distributed Software Framework for NIF Controls

    Energy Technology Data Exchange (ETDEWEB)

    Carey, R W; Fong, K W; Sanchez, R J; Tappero, J D; Woodruff, J P

    2001-10-16

    The Integrated Computer Control System (ICCS) is based on a scalable software framework that is distributed over some 325 computers throughout the NIF facility. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Various forms of object-oriented software design patterns are implemented as templates to be extended by application software. Developers extend the framework base classes to model the numerous physical control points, thereby sharing the functionality defined by the base classes. About 56,000 software objects each individually addressed through CORBA are to be created in the complete ICCS. Most objects have a persistent state that is initialized at system start-up and stored in a database. Additional framework services are provided by centralized server programs that implement events, alerts, reservations, message logging, database/file persistence, name services, and process management. The ICCS software framework approach allows for efficient construction of a software system that supports a large number of distributed control points representing a complex control application.

  1. A Low-Cost, Passive Approach for Bacterial Growth and Distribution for Large-Scale Implementation of Bioaugmentation

    Science.gov (United States)

    2012-07-01

    A Low-Cost, Passive Approach for Bacterial Growth and Distribution for Large-Scale Implementation of Bioaugmentation, July 2012. [Only the report documentation page survived extraction; recoverable section headings include "Bioaugmentation Scale-Up Issues" and "Advantages and Limitations of the Technology".]

  2. Responses of Cloud Type Distributions to the Large-Scale Dynamical Circulation: Water Budget-Related Dynamical Phase Space and Dynamical Regimes

    Science.gov (United States)

    Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.

    2015-01-01

    Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.

  3. Large-Scale Genetic Structuring of a Widely Distributed Carnivore - The Eurasian Lynx (Lynx lynx)

    Science.gov (United States)

    Rueness, Eli K.; Naidenko, Sergei; Trosvik, Pål; Stenseth, Nils Chr.

    2014-01-01

    Over the last decades the phylogeography and genetic structure of a multitude of species inhabiting Europe and North America have been described. The flora and fauna of the vast landmasses of north-eastern Eurasia are still largely unexplored in this respect. The Eurasian lynx is a large felid that is relatively abundant over much of the Russian sub-continent and the adjoining countries. Analyzing 148 museum specimens collected throughout its range over the last 150 years, we have described the large-scale genetic structuring in this highly mobile species. We have investigated the spatial genetic patterns using mitochondrial DNA sequences (D-loop and cytochrome b) and 11 microsatellite loci, and describe three phylogenetic clades and a clear structuring along an east-west gradient. The most likely scenario is that the contemporary Eurasian lynx populations originated in central Asia and that parts of Europe were inhabited by lynx during the Pleistocene. After the Last Glacial Maximum (LGM), range expansions led to colonization of north-western Siberia and Scandinavia from the Caucasus and north-eastern Siberia from a refugium further east. No evidence of a Beringian refugium could be detected in our data. We observed restricted gene flow and suggest that future studies of the Eurasian lynx explore to what extent the contemporary population structure may be explained by ecological variables. PMID:24695745

  4. Large-scale genetic structuring of a widely distributed carnivore--the Eurasian lynx (Lynx lynx).

    Science.gov (United States)

    Rueness, Eli K; Naidenko, Sergei; Trosvik, Pål; Stenseth, Nils Chr

    2014-01-01

    Over the last decades the phylogeography and genetic structure of a multitude of species inhabiting Europe and North America have been described. The flora and fauna of the vast landmasses of north-eastern Eurasia are still largely unexplored in this respect. The Eurasian lynx is a large felid that is relatively abundant over much of the Russian sub-continent and the adjoining countries. Analyzing 148 museum specimens collected throughout its range over the last 150 years, we have described the large-scale genetic structuring in this highly mobile species. We have investigated the spatial genetic patterns using mitochondrial DNA sequences (D-loop and cytochrome b) and 11 microsatellite loci, and describe three phylogenetic clades and a clear structuring along an east-west gradient. The most likely scenario is that the contemporary Eurasian lynx populations originated in central Asia and that parts of Europe were inhabited by lynx during the Pleistocene. After the Last Glacial Maximum (LGM), range expansions led to colonization of north-western Siberia and Scandinavia from the Caucasus and north-eastern Siberia from a refugium further east. No evidence of a Beringian refugium could be detected in our data. We observed restricted gene flow and suggest that future studies of the Eurasian lynx explore to what extent the contemporary population structure may be explained by ecological variables.

  5. Large-scale genetic structuring of a widely distributed carnivore--the Eurasian lynx (Lynx lynx.

    Directory of Open Access Journals (Sweden)

    Eli K Rueness

    Full Text Available Over the last decades the phylogeography and genetic structure of a multitude of species inhabiting Europe and North America have been described. The flora and fauna of the vast landmasses of north-eastern Eurasia are still largely unexplored in this respect. The Eurasian lynx is a large felid that is relatively abundant over much of the Russian sub-continent and the adjoining countries. Analyzing 148 museum specimens collected throughout its range over the last 150 years, we have described the large-scale genetic structuring in this highly mobile species. We have investigated the spatial genetic patterns using mitochondrial DNA sequences (D-loop and cytochrome b) and 11 microsatellite loci, and describe three phylogenetic clades and a clear structuring along an east-west gradient. The most likely scenario is that the contemporary Eurasian lynx populations originated in central Asia and that parts of Europe were inhabited by lynx during the Pleistocene. After the Last Glacial Maximum (LGM), range expansions led to colonization of north-western Siberia and Scandinavia from the Caucasus and north-eastern Siberia from a refugium further east. No evidence of a Beringian refugium could be detected in our data. We observed restricted gene flow and suggest that future studies of the Eurasian lynx explore to what extent the contemporary population structure may be explained by ecological variables.

  6. Response time distributions in rapid chess: A large-scale decision making experiment

    Directory of Open Access Journals (Sweden)

    Mariano Sigman

    2010-10-01

    Full Text Available Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times and position values in rapid chess games. We measured robust emergent statistical observables: (1) Response time (RT) distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
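
The final claim (winning likelihood estimated from a weighted combination of remaining times and position evaluation) can be sketched with a logistic model. The functional form and the weights below are invented for illustration; they are not fitted values from the paper:

```python
# Illustrative logistic combination of clock difference and engine
# evaluation into a win probability (weights are made-up placeholders).
import math

def win_probability(eval_pawns, time_white_s, time_black_s,
                    w_eval=0.8, w_time=0.01):
    """P(White wins) from position eval (pawns) and remaining clocks (s)."""
    score = w_eval * eval_pawns + w_time * (time_white_s - time_black_s)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

# White is up 1.5 pawns and has a 60-second clock advantage.
p = win_probability(eval_pawns=1.5, time_white_s=90, time_black_s=30)
print(round(p, 3))  # 0.858
```

In practice such weights would be fitted against game outcomes; the point is only that both time and evaluation enter one combined predictor.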

  7. Using Distributed Fiber Optic Sensing to Monitor Large Scale Permafrost Transitions: Preliminary Results from a Controlled Thaw Experiment

    Science.gov (United States)

    Ajo Franklin, J. B.; Wagner, A. M.; Lindsey, N.; Dou, S.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Ulrich, C.; Gelvin, A.; Morales, A.; James, S. R.; Saari, S.; Ekblaw, I.; Wood, T.; Robertson, M.; Martin, E. R.

    2016-12-01

    In a warming world, permafrost landscapes are being rapidly transformed by thaw, yielding surface subsidence and groundwater flow alteration. The same transformations pose a threat to arctic infrastructure and can induce catastrophic failure of the roads, runways, and pipelines on which human habitation depends. Scalable solutions to monitoring permafrost thaw dynamics are required to both quantitatively understand biogeochemical feedbacks as well as to protect built infrastructure from damage. Unfortunately, permafrost alteration happens over the time scale of climate change, years to decades, a decided challenge for testing new sensing technologies in a limited context. One solution is to engineer systems capable of rapidly thawing large permafrost units to allow short duration experiments targeting next-generation sensing approaches. We present preliminary results from a large-scale controlled permafrost thaw experiment designed to evaluate the utility of different geophysical approaches for tracking the cause, precursors, and early phases of thaw subsidence. We focus on the use of distributed fiber optic sensing for this challenge and deployed distributed temperature (DTS), strain (DSS), and acoustic (DAS) sensing systems in a 2D array to detect thaw signatures. A 10 x 15 x 1 m section of subsurface permafrost was heated using an array of 120 downhole heaters (60 w) at an experimental site near Fairbanks, AK. Ambient noise analysis of DAS datasets collected at the plot, coupled to shear wave inversion, was utilized to evaluate changes in shear wave velocity associated with heating and thaw. These measurements were confirmed by seismic surveys collected using a semi-permanent orbital seismic source activated on a daily basis. Fiber optic measurements were complemented by subsurface thermistor and thermocouple arrays, timelapse total station surveys, LIDAR, secondary seismic measurements (geophone and broadband recordings), timelapse ERT, borehole NMR, soil

  8. DC-DC Converter Topology Assessment for Large Scale Distributed Photovoltaic Plant Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Agamy, Mohammed S; Harfman-Todorovic, Maja; Elasser, Ahmed; Sabate, Juan A; Steigerwald, Robert L; Jiang, Yan; Essakiappan, Somasundaram

    2011-07-01

    Distributed photovoltaic (PV) plant architectures are emerging as a replacement for the classical central inverter based systems. However, power converters of smaller ratings may have a negative impact on system efficiency, reliability and cost. Therefore, it is necessary to design converters with very high efficiency and simpler topologies in order not to offset the benefits gained by using distributed PV systems. In this paper an evaluation of the selection criteria for dc-dc converters for distributed PV systems is performed; this evaluation includes efficiency, simplicity of design, reliability and cost. Based on this evaluation, recommendations can be made as to which class of converters is best fit for this application.
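
The multi-criteria evaluation described above can be sketched as a weighted scoring exercise. The topology names, scores, and weights below are invented placeholders for illustration, not data from the paper:

```python
# Hypothetical weighted-scoring sketch of a dc-dc topology assessment
# (criteria weights and 0-10 scores are invented, not the paper's data).
criteria_weights = {"efficiency": 0.40, "simplicity": 0.20,
                    "reliability": 0.25, "cost": 0.15}

topologies = {
    "boost":                {"efficiency": 9, "simplicity": 9,
                             "reliability": 8, "cost": 9},
    "isolated_full_bridge": {"efficiency": 7, "simplicity": 5,
                             "reliability": 7, "cost": 5},
}

def weighted_score(scores):
    """Combine per-criterion scores using the assessment weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(topologies, key=lambda t: weighted_score(topologies[t]))
print(best)  # boost
```

The actual study grounds each score in measured efficiency curves, component counts, and cost models; the scoring skeleton, however, has this simple form.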

  9. On Distributed Deep Network for Processing Large-Scale Sets of Complex Data

    OpenAIRE

    Chen, D

    2016-01-01

    Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with hundreds of parameters using distributed CPU cores. We have developed the Bagging-Down SGD algorithm to solve the distribution problems. Bagging-Down SGD introduces a parameter server on top of the several model replicas, and separates the updating from the training computation to ...
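
The parameter-server pattern the abstract builds on (model replicas computing gradients on their own data shards, with a central server applying aggregated updates) can be sketched sequentially. This illustrates only the data flow, under invented data and hyperparameters; it is not the Bagging-Down SGD algorithm itself nor a real multi-core deployment:

```python
# Sequential simulation of the parameter-server pattern: four "replicas"
# compute gradients on their own data shards and a central server applies
# the averaged update. (Toy 1-D linear regression; all values invented.)
import random

random.seed(0)
true_w = 3.0
xs = [random.uniform(-1, 1) for _ in range(400)]
data = [(x, true_w * x + random.gauss(0, 0.1)) for x in xs]
shards = [data[i::4] for i in range(4)]   # one shard per replica

w = 0.0                                   # parameter held by the server
for step in range(200):
    grads = []
    for shard in shards:                  # each replica: one local gradient
        x, y = random.choice(shard)
        grads.append(2 * (w * x - y) * x) # d/dw of (w*x - y)**2
    w -= 0.1 * sum(grads) / len(grads)    # server: averaged SGD update

print(abs(w - true_w) < 0.5)
```

In a real system the replicas run concurrently and push gradients asynchronously; the server-side aggregation step is the same.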

  10. Probing large scale homogeneity and periodicity in the LRG distribution using Shannon entropy

    Science.gov (United States)

    Pandey, Biswajit; Sarkar, Suman

    2016-08-01

    We quantify the degree of inhomogeneity in the Luminous Red Galaxy (LRG) distribution from the SDSS DR7 as a function of length scale by measuring the Shannon entropy in independent and regular cubic voxels of increasing grid sizes. We also analyse the data by carrying out measurements in overlapping spheres and find that this suppresses inhomogeneities by a factor of 5-10 on different length scales. Despite the differences observed in the degree of inhomogeneity, both methods show a decrease in inhomogeneity with increasing length scale, which eventually settles down to a plateau at ˜150 h-1 Mpc. Considering the minuscule values of inhomogeneity at the plateaus and their expected variations, we conclude that the LRG distribution becomes homogeneous at 150 h-1 Mpc and beyond. We also use the Kullback-Leibler divergence as an alternative measure of inhomogeneity, which reaffirms our findings. We show that the method presented here can effectively capture the inhomogeneity in a truly inhomogeneous distribution at all length scales. We analyse a set of Monte Carlo simulations with certain periodicity in their spatial distributions and find periodic variations in their inhomogeneity, which helps us to identify the underlying regularities present in such distributions and quantify the scale of their periodicity. We do not find any underlying regularities in the LRG distribution within the length scales probed.
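
The entropy measurement described above can be sketched in a few lines: partition points into regular cubic voxels and compute the Shannon entropy of the occupation fractions, which approaches log2(number of voxels) for a perfectly homogeneous distribution. A toy 3D example with invented point sets, not the SDSS analysis:

```python
# Sketch of voxel-based Shannon entropy as a homogeneity measure: uniform
# points fill all voxels (high entropy), clustered points do not (low
# entropy). Box size, grid, and point sets are invented for illustration.
import math
import random

def shannon_entropy(points, box=100.0, ngrid=4):
    """Entropy (bits) of voxel occupation fractions on an ngrid^3 grid."""
    cell = box / ngrid
    counts = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        counts[key] = counts.get(key, 0) + 1
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(7)
uniform = [(random.uniform(0, 100), random.uniform(0, 100),
            random.uniform(0, 100)) for _ in range(5000)]
clustered = [(random.uniform(0, 10), random.uniform(0, 10),
              random.uniform(0, 10)) for _ in range(5000)]

h_uni, h_clu = shannon_entropy(uniform), shannon_entropy(clustered)
print(h_clu < 1.0 < h_uni)  # True: clustering lowers the entropy
```

Repeating the measurement with increasing voxel sizes, as in the paper, traces how the entropy deficit (the inhomogeneity) decays toward zero at the homogeneity scale.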

  11. Large scale genotype comparison of human papillomavirus E2-host interaction networks provides new insights for e2 molecular functions.

    Directory of Open Access Journals (Sweden)

    Mandy Muller

    Full Text Available Human Papillomaviruses (HPV) cause widespread infections in humans, resulting in latent infections or diseases ranging from benign hyperplasia to cancers. HPV-induced pathologies result from complex interplay between viral proteins and the host proteome. Given the major public health concern posed by HPV-associated cancers, most studies have focused on the early proteins expressed by HPV genotypes with high oncogenic potential (designated high-risk HPV or HR-HPV). To advance the global understanding of HPV pathogenesis, we mapped the virus/host interaction networks of the E2 regulatory protein from 12 genotypes representative of the range of HPV pathogenicity. Large-scale identification of E2-interaction partners was performed by yeast two-hybrid screenings of a HaCaT cDNA library. Based on a high-confidence scoring scheme, a subset of these partners was then validated for pair-wise interaction in mammalian cells with the whole range of the 12 E2 proteins, allowing a comparative interaction analysis. Hierarchical clustering of E2-host interaction profiles mostly recapitulated HPV phylogeny and provides clues to the involvement of E2 in HPV infection. A set of cellular proteins could thus be identified discriminating, among the mucosal HPV, E2 proteins of HR-HPV 16 or 18 from the non-oncogenic genital HPV. The study of the interaction networks revealed a preferential hijacking of highly connected cellular proteins and the targeting of several functional families. These include transcription regulation, regulation of apoptosis, RNA processing, ubiquitination and intracellular trafficking. The present work provides an overview of E2 biological functions across multiple HPV genotypes.
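
The comparative interaction analysis described above can be illustrated by treating each genotype's E2 profile as a set of host-protein hits and comparing profiles by set similarity, the kind of signal that drives the hierarchical clustering reported in the paper. The genotype names are real HPV types, but the interaction sets below are invented for illustration:

```python
# Toy comparison of E2-host interaction profiles across genotypes using
# Jaccard similarity (protein hit sets are invented placeholders).
def jaccard(a, b):
    """Similarity of two hit sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

profiles = {
    "HPV16": {"BRD4", "TAX1BP1", "CCHCR1"},
    "HPV18": {"BRD4", "TAX1BP1", "TP53BP1"},
    "HPV11": {"BRD4", "GTF2H2"},
}

pairs = sorted(((jaccard(profiles[a], profiles[b]), a, b)
                for a in profiles for b in profiles if a < b),
               reverse=True)
print(pairs[0][1:])  # ('HPV16', 'HPV18'): most similar profiles
```

A full analysis would feed the pairwise similarity (or distance) matrix into hierarchical clustering; here the two high-risk genotypes already pair up because their invented profiles share the most partners.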

  12. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks.

    Science.gov (United States)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-11

    Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
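
The friendship-paradox bias the method exploits can be demonstrated directly: sampling a random neighbor of a random node reaches nodes with probability proportional to their degree, so high-degree nodes in the heavy tail are observed far more often than under uniform node sampling. A toy hub-and-spoke graph (invented for illustration, not the paper's data) makes the effect stark:

```python
# Demonstration of the friendship-paradox sampling bias: neighbor sampling
# reaches the hub almost every draw, uniform node sampling rarely does.
import random

random.seed(3)
# Star graph: node 0 is a hub connected to 999 degree-1 spokes.
adj = {i: {0} for i in range(1, 1000)}
adj[0] = set(range(1, 1000))

# Uniform node sampling: observe the degree of a random node.
uniform = [len(adj[random.randrange(1000)]) for _ in range(200)]

# Tail-scope-style sampling: observe the degree of a random *neighbor*.
neighbor = []
for _ in range(200):
    v = random.randrange(1000)
    u = random.choice(sorted(adj[v]))   # a random friend of v
    neighbor.append(len(adj[u]))

print(max(uniform) <= max(neighbor))  # True: neighbor sampling finds the hub
```

Because neighbor sampling over-weights high degrees by a known factor, the observed degree counts can be re-weighted to estimate the tail of the true degree distribution, which is the core of the tail-scope estimator.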

  13. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks

    Science.gov (United States)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-01

    Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.

  14. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high-performance computing and data-handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application-oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location-independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location-independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating the routine construction of information-based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
    Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  15. Large-Scale Distributed Bayesian Matrix Factorization using Stochastic Gradient MCMC

    NARCIS (Netherlands)

    Ahn, S.; Korattikara, A.; Liu, N.; Rajan, S.; Welling, M.

    2015-01-01

    Despite having various attractive qualities, such as high prediction accuracy and the ability to quantify uncertainty and avoid overfitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian
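
The record is truncated before its method details. As a hedged, single-machine sketch of the core idea named in the title (stochastic gradient MCMC), the toy below applies stochastic gradient Langevin dynamics to a small factorization; the function name `sgld_mf` and every hyperparameter are our own illustrative assumptions, not the paper's distributed algorithm.

```python
import math
import random

def sgld_mf(ratings, n_users, n_items, rank=2, eps=0.005, lam=0.1,
            n_iters=5000, batch=4, seed=0):
    """Toy stochastic gradient Langevin dynamics (SGLD) for Bayesian matrix
    factorization R ~= U V^T.  Each step adds (eps/2) * minibatch gradient
    of the log-posterior plus N(0, eps) noise; predictions are averaged
    over the second half of the chain as a crude posterior mean."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_items)]
    N = len(ratings)
    pred_sum = [[0.0] * n_items for _ in range(n_users)]
    n_avg = 0
    for it in range(n_iters):
        for i, j, r in rng.sample(ratings, min(batch, N)):
            err = r - sum(U[i][k] * V[j][k] for k in range(rank))
            for k in range(rank):
                gu = (N / batch) * err * V[j][k] - lam * U[i][k]
                gv = (N / batch) * err * U[i][k] - lam * V[j][k]
                U[i][k] += 0.5 * eps * gu + math.sqrt(eps) * rng.gauss(0, 1)
                V[j][k] += 0.5 * eps * gv + math.sqrt(eps) * rng.gauss(0, 1)
        if it >= n_iters // 2:  # discard the first half as burn-in
            n_avg += 1
            for i in range(n_users):
                for j in range(n_items):
                    pred_sum[i][j] += sum(U[i][k] * V[j][k]
                                          for k in range(rank))
    return [[s / n_avg for s in row] for row in pred_sum]

if __name__ == "__main__":
    # Four observed entries of a 2x2 matrix with product structure r = u_i * v_j.
    ratings = [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 2.0), (1, 1, 4.0)]
    P = sgld_mf(ratings, n_users=2, n_items=2)
    mse = sum((r - P[i][j]) ** 2 for i, j, r in ratings) / len(ratings)
    print("posterior-mean MSE:", round(mse, 3))
```

The injected Gaussian noise is what turns plain SGD into a posterior sampler; the paper's contribution is running such updates in parallel across machines, which this single-process sketch does not attempt.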

  16. Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies

    NARCIS (Netherlands)

    Samadi, A.

    2014-01-01

    Long term supporting schemes for photovoltaic (PV) system installation have led to accommodating large numbers of PV systems within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light load

  17. Large-scale GWAS identifies multiple loci for hand grip strength providing biological insights into muscular fitness

    NARCIS (Netherlands)

    Willems, Sara M.; Wright, D.J.; Day, Felix R.; Trajanoska, Katerina; Joshi, P.K.; Morris, John A.; Matteini, Amy M.; Garton, Fleur C.; Grarup, Niels; Oskolkov, Nikolay; Thalamuthu, Anbupalam; Mangino, Massimo; Liu, Jun; Demirkan, Ayse; Lek, Monkol; Xu, Liwen; Wang, Guan; Oldmeadow, Christopher; Gaulton, Kyle J.; Lotta, Luca A.; Miyamoto-Mikami, Eri; Rivas, Manuel A.; White, Tom; Loh, Po Ru; Aadahl, Mette; Amin, Najaf; Attia, John R.; Austin, Krista; Benyamin, Beben; Brage, Søren; Cheng, Yu Ching; Ciȩszczyk, Paweł; Derave, Wim; Eriksson, Karl Fredrik; Eynon, Nir; Linneberg, Allan; Lucia, Alejandro; Massidda, Myosotis; Mitchell, Braxton D.; Miyachi, Motohiko; Murakami, Haruka; Padmanabhan, Sandosh; Pandey, Ashutosh; Papadimitriou, Ioannis; Rajpal, Deepak K.; Sale, Craig; Schnurr, Theresia M.; Sessa, Francesco; Shrine, Nick; Tobin, Martin D.; Varley, Ian; Wain, Louise V.; Wray, Naomi R.; Lindgren, Cecilia M.; MacArthur, Daniel G.; Waterworth, Dawn M.; McCarthy, Mark I.; Pedersen, Oluf; Khaw, Kay Tee; Kiel, Douglas P.; Pitsiladis, Yannis; Fuku, Noriyuki; Franks, Paul W.; North, Kathryn N.; Duijn, Van C.M.; Mather, Karen A.; Hansen, Torben; Hansson, Ola; Spector, Tim D.; Murabito, Joanne M.; Richards, J.B.; Rivadeneira, Fernando; Langenberg, Claudia; Perry, John R.B.; Wareham, Nick J.; Scott, Robert A.; Oei, Ling; Zheng, Hou Feng; Forgetta, Vincenzo; Leong, Aaron; Ahmad, Omar S.; Laurin, Charles; Mokry, Lauren E.; Ross, Stephanie; Elks, Cathy E.; Bowden, Jack; Warrington, Nicole M.; Murray, Anna; Ruth, Katherine S.; Tsilidis, Konstantinos K.; Medina-Gómez, Carolina; Estrada, Karol; Bis, Joshua C.; Chasman, Daniel I.; Demissie, Serkalem; Enneman, Anke W.; Hsu, Yi Hsiang; Ingvarsson, Thorvaldur; Kähönen, Mika; Kammerer, Candace; Lacroix, Andrea Z.; Li, Guo; Liu, Ching Ti; Liu, Yongmei; Lorentzon, Mattias; Mägi, Reedik; Mihailov, Evelin; Milani, Lili; Moayyeri, Alireza; Nielson, Carrie M.; Sham, Pack Chung; Siggeirsdotir, Kristin; Sigurdsson, Gunnar; Stefansson, Kari; 
Trompet, Stella; Thorleifsson, Gudmar; Vandenput, Liesbeth; Velde, Van Der Nathalie; Viikari, Jorma; Xiao, Su Mei; Zhao, Jing Hua; Evans, Daniel S.; Cummings, Steven R.; Cauley, Jane; Duncan, Emma L.; Groot, De Lisette C.P.G.M.; Esko, Tonu; Gudnason, Vilmundar; Harris, Tamara B.; Jackson, Rebecca D.; Jukema, J.W.; Ikram, Arfan M.A.; Karasik, David; Kaptoge, Stephen; Kung, Annie Wai Chee; Lehtimäki, Terho; Lyytikäinen, Leo Pekka; Lips, Paul; Luben, Robert; Metspalu, Andres; Meurs, van Joyce B.; Minster, Ryan L.; Orwoll, Erick; Oei, Edwin; Psaty, Bruce M.; Raitakari, Olli T.; Ralston, Stuart W.; Ridker, Paul M.; Robbins, John A.; Smith, Albert V.; Styrkarsdottir, Unnur; Tranah, Gregory J.; Thorstensdottir, Unnur; Uitterlinden, Andre G.; Zmuda, Joseph; Zillikens, M.C.; Ntzani, Evangelia E.; Evangelou, Evangelos; Ioannidis, John P.A.; Evans, David M.; Ohlsson, Claes

    2017-01-01

    Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of

  18. Large-scale GWAS identifies multiple loci for hand grip strength providing biological insights into muscular fitness

    DEFF Research Database (Denmark)

    Willems, Sara M.; Wright, Daniel J.; Day, Felix R.

    2017-01-01

    Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of 1...

  19. Progress in Large-Scale Femtosecond Timing Distribution and RF-Synchronization

    CERN Document Server

    Kärtner, Franz X; Chen, Jeff; Grawert, Felix J; Ilday, Fatih O; Kim, Jung-Won; Winter, Axel

    2005-01-01

    For future advances in accelerator physics in general, and seeding of free electron lasers (FELs) in particular, precise synchronization between the low-level RF system, photo-injector laser, seed radiation, and potential probe lasers at the FEL output is required. We propose a modular system based on optical pulse trains from mode-locked lasers for timing distribution and for transferring timing information in the optical domain, avoiding the detrimental effects of amplitude-to-phase conversion in photodetectors. Synchronization of various RF and optical sub-systems with femtosecond precision over distances of several hundred meters can be achieved. First experimental results and limitations of the proposed scheme for timing distribution are discussed.

  20. Impact of large-scale installation of LED lamps in a distribution system

    OpenAIRE

    UDDIN, SOHEL; SHAREF, HUSSAIN; KRAUSE, OLAV; Mohamed, Azah; HANNAN, MOHAMMAD ABDUL; ISLAM, NAZ NIAMUL

    2015-01-01

    The aim of this paper was to examine the utilization of a large number of nonlinear light emitting diode (LED) lamps in a distribution system and its impact on system power quality. Initially, harmonic and other electrical characteristics were identified by carrying out an experiment on four types of LED lamps available on the local market. On the basis of the identified characteristics, a new LED lamp model was developed in MATLAB Simulink. This model is called the time-dependent current so...

  1. Large scale patterns in vertical distribution and behaviour of mesopelagic scattering layers

    KAUST Repository

    Klevjer, Thor Aleksander

    2016-01-27

    Recent studies suggest that previous estimates of mesopelagic biomasses are severely biased, with the new, higher estimates underlining the need to unveil behaviourally mediated coupling between shallow and deep ocean habitats. We analysed the vertical distribution and diel vertical migration (DVM) of mesopelagic acoustic scattering layers (SLs) recorded at 38 kHz across the oceanographic regimes encountered during the circumglobal Malaspina expedition. Mesopelagic SLs were observed in all areas covered, but vertical distributions and DVM patterns varied markedly. The distribution of mesopelagic backscatter was deepest in the southern Indian Ocean (weighted mean daytime depth, WMD: 590 m) and shallowest at the oxygen minimum zone in the eastern Pacific (WMD: 350 m). DVM was evident in all areas covered; on average ~50% of mesopelagic backscatter made daily excursions from mesopelagic depths to shallow waters. There were marked differences in migrating proportions between the regions, ranging from ~20% in the Indian Ocean to ~90% in the eastern Pacific. Overall, the data suggest strong spatial gradients in mesopelagic DVM patterns, with implied ecological and biogeochemical consequences. Our results suggest that part of this spatial variability can be explained by horizontal patterns in the physical-chemical properties of water masses, such as oxygen, temperature and turbidity.
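
The weighted mean daytime depths (WMD) quoted above are backscatter-weighted averages over the depth bins of the echogram. A small sketch of that bookkeeping, with entirely made-up backscatter values (the real analysis works on calibrated 38 kHz volume-backscatter profiles):

```python
def weighted_mean_depth(depths_m, backscatter):
    """WMD = sum(s_i * z_i) / sum(s_i) over the depth bins."""
    total = sum(backscatter)
    return sum(z * s for z, s in zip(depths_m, backscatter)) / total

def shallow_fraction(depths_m, backscatter, z_cut=200.0):
    """Share of total backscatter at or above z_cut metres; the day/night
    difference is a crude stand-in for the migrating proportion."""
    total = sum(backscatter)
    return sum(s for z, s in zip(depths_m, backscatter) if z <= z_cut) / total

depths = [100, 300, 500, 700]    # bin centre depths in metres (hypothetical)
day = [0.25, 0.25, 0.75, 0.75]   # made-up relative backscatter per bin
night = [1.0, 0.5, 0.25, 0.25]
print(weighted_mean_depth(depths, day))  # → 500.0
print(shallow_fraction(depths, night) - shallow_fraction(depths, day))  # → 0.375
```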

  2. 'Oorja' in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households.

    Science.gov (United States)

    Thurber, Mark C; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2014-04-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 "Oorja" stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes, using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on these data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by the pre-existing stove mix among households. The highest rate of adoption came from LPG-using households, for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of "agricultural waste" to make

  3. Coupling a distributed hydrological model with detailed forest structural information for large-scale global change impact assessment

    Science.gov (United States)

    Eisner, Stephanie; Huang, Shaochun; Majasalmi, Titta; Bright, Ryan; Astrup, Rasmus; Beldring, Stein

    2017-04-01

    Forests are recognized for their decisive effect on landscape water balance, with structural forest characteristics such as stand density or species composition determining energy partitioning and dominant flow paths. However, spatial and temporal variability in forest structure is often poorly represented in hydrological modeling frameworks, in particular in regional- to large-scale hydrological modeling and impact analysis. As a common practice, prescribed land cover classes (including different generic forest types) are linked to parameter values derived from the literature, or parameters are determined by calibration. While national forest inventory (NFI) data provide comprehensive, detailed information on hydrologically relevant forest characteristics, their potential to inform hydrological simulation over larger spatial domains is rarely exploited. In this study we present a modeling framework that couples the distributed hydrological model HBV with forest structural information derived from the Norwegian NFI and multi-source remote sensing data. The modeling framework, set up for the whole of continental Norway at 1 km spatial resolution, is explicitly designed to study the combined and isolated impacts of climate change, forest management and land use change on hydrological fluxes. We use a forest classification system based on forest structure rather than biomes, which implicitly accounts for the impacts of forest management on forest structural attributes. In the hydrological model, different forest classes are represented by three parameters: leaf area index (LAI), mean tree height and surface albedo. Seasonal cycles of LAI and surface albedo are dynamically simulated to make the framework applicable under climate change conditions. Based on a hindcast for the pilot regions Nord-Trøndelag and Sør-Trøndelag, we show how forest management has affected regional hydrological fluxes during the second half of the 20th century, as contrasted with climate variability.

  4. Generalizations of the Alternating Direction Method of Multipliers for Large-Scale and Distributed Optimization

    Science.gov (United States)

    2014-05-01

    ...closely related to the dual ascent method (or dual subgradient method) [53] as well as the quadratic penalty method. Compared to these methods, the... For a ν-strongly convex function f, f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) − (ν/2)t(1 − t)‖x − y‖² (2.28). For a convex function f, we let the subdifferential (i.e., the set of all subgradients) of f... A simple distributed algorithm for solving (5.1) is dual decomposition [19], which is essentially a dual ascent method or dual subgradient method
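
The record above excerpts a thesis on ADMM and dual decomposition. As a self-contained toy (our own, not the thesis code), the sketch below runs consensus ADMM on min_x Σᵢ (x − aᵢ)²/2, whose solution is the mean of the aᵢ. Each fᵢ could live on a different machine, exchanging only its local xᵢ and dual uᵢ with the averaging step.

```python
def admm_consensus(a, rho=1.0, n_iters=100):
    """Consensus ADMM for min_x sum_i (x - a_i)^2 / 2."""
    n = len(a)
    x = [0.0] * n        # local copies
    u = [0.0] * n        # scaled dual variables
    z = 0.0              # global consensus variable
    for _ in range(n_iters):
        # Local proximal step (closed form for the quadratic f_i).
        x = [(a_i + rho * (z - u_i)) / (1.0 + rho) for a_i, u_i in zip(a, u)]
        # Global averaging step.
        z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / n
        # Dual update enforcing the consensus constraint x_i = z.
        u = [u_i + x_i - z for u_i, x_i in zip(u, x)]
    return z

print(admm_consensus([1.0, 2.0, 6.0]))  # → 3.0 (the mean), up to rounding
```

On this quadratic toy the consensus error halves every iteration, which is why a fixed 100 iterations is ample here; general convex problems converge more slowly.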

  5. Large-scale malaria survey in Cambodia: Novel insights on species distribution and risk factors

    Directory of Open Access Journals (Sweden)

    Doung Socheat

    2007-03-01

    Full Text Available Abstract Background In Cambodia, estimates of the malaria burden rely on a public health information system that does not record cases occurring among remote populations, malaria cases treated in the private sector, or asymptomatic carriers. A global estimate of the current malaria situation and associated risk factors is, therefore, still lacking. Methods A large cross-sectional survey was carried out in three areas of multidrug-resistant malaria in Cambodia, enrolling 11,652 individuals. Fever and splenomegaly were recorded. Malaria prevalence, parasite densities and the spatial distribution of infection were determined to identify parasitological profiles and the associated risk factors useful for improving malaria control programmes in the country. Results Malaria prevalence was 3.0%, 7.0% and 12.3% in the Sampovloun, Koh Kong and Preah Vihear areas, respectively. Prevalences and Plasmodium species were heterogeneously distributed, with higher Plasmodium vivax rates in areas of low transmission. Malaria-attributable fevers accounted for only 10–33% of malaria cases, and 23–33% of parasite carriers were febrile. Multivariate multilevel regression analysis identified adults and males, mostly involved in forest activities, as high-risk groups in Sampovloun, with additional risks for children in forest-fringe villages in the other areas, along with an increased risk with distance from health facilities. Conclusion These observations point to a more complex malaria situation than suspected from official reports. A large asymptomatic reservoir was observed. The rates of P. vivax infections were higher than recorded in several areas. In remote areas, malaria prevalence was high. This indicates that additional health facilities should be implemented in areas at higher risk, such as remote rural and forested parts of the country, which are not adequately served by health services. Precise malaria risk mapping all over the country is needed to assess the

  6. Large-scale distributed deformation controlled topography along the western Africa-Eurasia limit: Tectonic constraints

    Science.gov (United States)

    de Vicente, G.; Vegas, R.

    2009-09-01

    In the interior of the Iberian Peninsula, the main geomorphic features, mountain ranges and basins, seem to be arranged in several directions whose origin can be related to the N-S plate convergence that occurred along the Cantabro-Pyrenean border during the Eocene-Lower Miocene time span. The Iberian Variscan basement accommodated part of this plate convergence in three E-W trending crustal folds as well as in the reactivation of two left-lateral NNE-SSW strike-slip belts. The rest of the convergence was accommodated through the inversion of the Iberian Mesozoic Rift to form the Iberian Chain. This inversion gave rise to a process of oblique crustal shortening involving the development of two right-lateral NW-SE shear zones. Crustal folds, strike-slip corridors and one inverted rift compose a tectonic mechanism of pure shear in which the shortening is resolved vertically by the development of mountain ranges and related sedimentary basins. This model can be extended to NW Africa, up to the Atlasic System, where N-S plate convergence also seems to be accommodated in several basement uplifts, the Anti-Atlas and Meseta, and through the inversion of two Mesozoic rifts, the High and Middle Atlas. In this tectonic situation, the microcontinent Iberia remained firmly attached to Africa during most of the Tertiary, in such a way that N-S compressive stresses could be transmitted from the collision at the Pyrenean boundary. This tectonic scenario implies that most of the Tertiary Eurasia-Africa convergence was accommodated not along the Iberia-Africa interface but at the Pyrenean plate boundary. A broad zone of distributed deformation resulted from the transmission of compressive stresses from the collision at the Pyrenean border. This distributed, intraplate deformation can be readily related to the topographic pattern of the Africa-Eurasia interface at the longitude of the Iberian Peninsula. Shortening in the Rif-Betics external zones - and their related topographic

  7. Large-scale GWAS identifies multiple loci for hand grip strength providing biological insights into muscular fitness

    DEFF Research Database (Denmark)

    Willems, Sara M; Wright, Daniel J.; Day, Felix R

    2017-01-01

    Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of 195...... strength and the causal role of muscular strength in age-related morbidities and mortality....

  8. Large-Scale Timing Distribution and RF-Synchronization for FEL Facilities

    CERN Document Server

    Kim, Jung Won; Kaertner, Franz X; Muecke, Oliver; Perrott, Michael

    2004-01-01

    For future advances in accelerator physics in general and seeding of FELs in particular, precise synchronization between seed radiation, low-level RF-systems and photo-injector laser is required. Typical synchronization methods based on direct photodetection are limited by the detector nonlinearities, which lead to amplitude-to-phase conversion and introduce timing jitter. A new synchronization scheme for extraction of low jitter RF-signals from optical pulse trains distributed by mode-locked lasers is proposed. It is robust against photodetector nonlinearities. The scheme is based on a transfer of timing information into an intensity imbalance between the two output beams from a Sagnac-loop interferometer. As a first experimental demonstration, sub-100 fs timing jitter between the extracted 2 GHz RF-signal and the 100 MHz optical pulse train from a mode-locked Ti:sapphire laser is demonstrated. Numerical simulations show that scaling to sub-femtosecond precision is possible. Together with mode-locked fiber l...

  9. Large scale integration of CVD-graphene based NEMS with narrow distribution of resonance parameters

    Science.gov (United States)

    Arjmandi-Tash, Hadi; Allain, Adrien; Han, Zheng (Vitto); Bouchiat, Vincent

    2017-06-01

    We present a novel method for fabricating arrays of suspended micron-sized membranes based on monolayer pulsed-CVD graphene. Such devices enable efficient integration of graphene nano-electro-mechanical resonators, compatible with production at the wafer scale using standard photolithography and processing tools. As the graphene surface is continuously protected by the same polymer layer during the whole process, the suspended graphene membranes are clean and free of imperfections such as deposits, wrinkles and tears. Batch fabrication of 100 μm-long multi-connected suspended ribbons is presented. At room temperature, the mechanical resonances of electrostatically actuated devices show a narrow distribution of their characteristic parameters, with high quality factors and low effective masses and resonance frequencies, as expected for low-stress, adsorbate-free membranes. Upon cooling, a sharp increase of both resonance frequency and quality factor is observed, enabling extraction of the thermal expansion coefficient of CVD graphene. A comparison with state-of-the-art graphene NEMS is presented.

  10. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research

    Directory of Open Access Journals (Sweden)

    Michael L. Davies

    2016-02-01

    Full Text Available Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07–FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75–79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  11. Large-scale biotic interaction effects - tree cover interacts with shade tolerance to affect distribution patterns of herb and shrub species across the Alps

    DEFF Research Database (Denmark)

    Nieto-Lugilde, Diego; Lenoir, Jonathan; Abdulhak, Sylvain

    2012-01-01

    While often ignored in large-scale studies of species distributions, there is an increasing interest in the role played by biotic interactions. The effect of competition for light in vegetation dynamics and succession has long been recognized. However, its role for large-scale plant species distr...

  12. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel

    2013-10-01

    Matrix multiplication is a very important computation kernel, both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a non-square number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication to the overall execution time more significant. Therefore, state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate a reduction of communication cost of up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
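
SUMMA forms C = AB as a sum of panel outer products: at step k, the owners of the k-th column panel of A and row panel of B broadcast them along processor rows and columns, and every block of C accumulates their product. The serial sketch below reproduces only that panel schedule (no processor grid, and none of the hierarchical broadcasts HSUMMA adds):

```python
def summa_serial(A, B, block=2):
    """Serial walk through the SUMMA panel schedule for C = A @ B."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for k0 in range(0, m, block):
        # In parallel SUMMA these two panels are what gets broadcast.
        panelA = [row[k0:k0 + block] for row in A]   # n x block column panel
        panelB = B[k0:k0 + block]                    # block x p row panel
        for i in range(n):                           # rank-`block` update of C
            for k, a in enumerate(panelA[i]):
                for j in range(p):
                    C[i][j] += a * panelB[k][j]
    return C

print(summa_serial([[1, 2], [3, 4]], [[5, 6], [7, 8]], block=1))
# → [[19.0, 22.0], [43.0, 50.0]]
```

The result is independent of `block`; in the parallel setting the block size only trades broadcast count against message size, which is exactly the communication cost HSUMMA's second hierarchy level reduces further.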

  13. Impact of large-scale distribution and subsequent use of free nicotine patches on primary care physician interaction

    Directory of Open Access Journals (Sweden)

    Vladyslav Kushnir

    2017-07-01

    Full Text Available Abstract Background Large-scale distribution efforts of free nicotine replacement therapy (NRT) have been documented to be cost-effective interventions for increasing smoking quit rates. However, despite nearly a dozen studies evaluating their effectiveness, none have examined whether free NRT provision promotes further primary care help-seeking and the impact that it may have on cessation efforts. Methods In the context of a randomized controlled trial, a secondary analysis was conducted on 1000 adult regular smokers randomized to be mailed a 5-week supply of nicotine patches or to a no-intervention control group. Recipients and users of free nicotine patches at an 8-week follow-up were successfully case-matched to controls based on age, gender, baseline level of nicotine dependence and intent to quit (n = 201 per group). Differences in physician interaction between the two groups were evaluated at both 8-week and 6-month follow-ups. The impact of physician interaction on self-reported smoking abstinence at each follow-up was also examined. Results Although no differences in physician interaction were noted between groups at the 8-week follow-up, at the 6-month follow-up nicotine patch users reported a greater frequency of discussing smoking with their physician (43.9%) as compared to the control group (30.3%) (p = 0.011). Across both groups, over 90% of those that discussed smoking with a physician were encouraged to quit and approximately 70% were provided with additional support. Separate ANOVAs revealed no significant impact of physician interaction on cessation (p > 0.05), regardless of group or follow-up period; however, at the 6-month follow-up, nicotine patch users who discussed cessation with a physician had made serious quit attempts at significantly greater rates (72.6%) compared to controls (49.1%) (p = 0.007). Conclusions Irrespective of group, the majority of smokers in the present study did not discuss cessation with their

  14. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large... ...multi-objective evolutionary algorithms the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  15. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    Science.gov (United States)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
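
As a hedged illustration of the receding-horizon idea behind MPC for residential HVAC (not the dissertation's controller), the toy below brute-forces on/off heating plans over a short horizon against a first-order thermal model and a price signal, returning the cheapest comfort-feasible plan; only its first action would be applied before re-planning at the next step. The model form, the name `mpc_hvac` and all constants are our own assumptions.

```python
from itertools import product

def mpc_hvac(T0, T_out, price, horizon=4, comfort=(20.0, 24.0), a=0.9, b=2.0):
    """Brute-force MPC step: T_{t+1} = a*T_t + (1-a)*T_out_t + b*u_t,
    u_t in {0, 1}; pick the cheapest plan keeping T inside the band.
    Returns None when no plan is feasible."""
    best, best_cost = None, float("inf")
    for plan in product((0, 1), repeat=horizon):
        T, cost, feasible = T0, 0.0, True
        for t, u in enumerate(plan):
            T = a * T + (1 - a) * T_out[t] + b * u
            cost += price[t] * u
            if not (comfort[0] <= T <= comfort[1]):
                feasible = False
                break
        if feasible and cost < best_cost:
            best, best_cost = plan, cost
    return best

# Heating is shifted away from the expensive second hour when feasible.
plan = mpc_hvac(21.0, [0.0] * 4, [1.0, 5.0, 1.0, 1.0], comfort=(18.0, 24.0))
print(plan)  # → (1, 0, 1, 1)
```

A price signal broadcast to thousands of such controllers is one simple way a distribution feeder could shape aggregate demand, which is the coordination question the dissertation studies at scale with real power-flow software.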

  16. Landscape mapping at sub-Antarctic South Georgia provides a protocol for underpinning large-scale marine protected areas

    Science.gov (United States)

    Hogg, Oliver T.; Huvenne, Veerle A. I.; Griffiths, Huw J.; Dorschel, Boris; Linse, Katrin

    2016-10-01

    Global biodiversity is in decline, with the marine environment experiencing significant and increasing anthropogenic pressures. In response, marine protected areas (MPAs) have increasingly been adopted as the flagship approach to marine conservation, many covering enormous areas. At present, however, the lack of biological sampling makes prioritising which regions of the ocean to protect, especially over large spatial scales, particularly problematic. Here we present an interdisciplinary approach to marine landscape mapping at the sub-Antarctic island of South Georgia as an effective protocol for underpinning large-scale (10⁵-10⁶ km²) MPA designations. We have developed a new high-resolution (100 m) digital elevation model (DEM) of the region and integrated this DEM with bathymetry-derived parameters, modelled oceanographic data, and satellite primary productivity data. These interdisciplinary datasets were used to apply an objective statistical approach to hierarchically partition and map the benthic environment into physical habitat types. We assess the potential application of physical habitat classifications as proxies for biological structuring, and the application of the landscape mapping for informing marine spatial planning.

  17. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time warning distances for the safe driving of large-scale systems of multiple vehicle groups in a highway tunnel environment, based on a distributed model predictive control approach. The system comprises two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to compute the information framework of each group. Each group's optimization considers both the local objective and the characteristics of neighbouring subgroups, which ensures global optimization performance. Second, the three time warning distances are derived from the basic principles of the highway intelligent space (HIS), and the information framework concept is applied to the multiple vehicle groups. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple vehicle groups under fog, rain, or snow.
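    A hedged sketch of the elementary kinematics underlying any time/distance warning scheme: the article defines its own three warning distances, so the reaction time and friction coefficients below are purely illustrative.

```python
# Hedged sketch: warning/stopping distance = reaction distance + braking
# distance v^2 / (2 * mu * g). The values of t_react and mu are invented;
# the paper's actual warning-distance definitions differ.

def warning_distance(v_kmh, t_react, mu=0.7, g=9.81):
    """Return the distance (m) covered while reacting plus while braking."""
    v = v_kmh / 3.6                      # convert km/h to m/s
    return v * t_react + v * v / (2 * mu * g)

# Lower friction (fog, rain, snow) lengthens the required inter-group gap:
dry = warning_distance(100.0, 1.0, mu=0.7)
snow = warning_distance(100.0, 1.0, mu=0.25)
```

This is the physical reason the abstract singles out fog, rain and snow: at 100 km/h the required gap roughly doubles when friction drops from 0.7 to 0.25.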

  18. Using large scale surveys to investigate seasonal variations in seabird distribution and abundance. Part II: The Bay of Biscay and the English Channel

    Science.gov (United States)

    Pettex, Emeline; Laran, Sophie; Authier, Matthieu; Blanck, Aurélie; Dorémus, Ghislain; Falchetto, Hélène; Lambert, Charlotte; Monestiez, Pascal; Stéfan, Eric; Van Canneyt, Olivier; Ridoux, Vincent

    2017-07-01

    Seabird distributions and the associated seasonal variations remain challenging to investigate, especially in oceanic areas. Recent advances in telemetry have provided considerable information on seabird ecology, but still exclude small species, non-breeding birds and individuals from inaccessible colonies from any scientific survey. To overcome this issue and investigate seabird distribution and abundance in the eastern North Atlantic (ENA), large-scale aerial surveys were conducted in winter 2011-12 and summer 2012 over a 375,000 km² area encompassing the English Channel (EC) and the Bay of Biscay (BoB). Seabird sightings, from 15 taxonomic groups, added up to 17,506 and 8,263 sightings in winter and summer respectively, along 66,307 km. Using geostatistical methods, density maps were provided for both seasons. Abundance was estimated by strip transect sampling. Most taxa showed marked seasonal variations in their density and distribution. The highest densities were recorded during winter for most groups except shearwaters, storm-petrels, terns and large-sized gulls. Subsequently, the abundance in winter nearly reached one million individuals and was 2.5 times larger than in summer. The continental shelf and the slope in the BoB and the EC were identified as key areas for seabird conservation, especially during winter, as birds from northern Europe migrate southward after breeding. This large-scale study provided a synoptic view of the seabird community in the ENA over two contrasting seasons. Our results highlight that oceanic areas harbour an abundant avifauna. Since most of the existing marine protected areas are restricted to the coastal fringe, the importance of oceanic areas in winter should be considered in future conservation plans. Our work will provide a baseline for the monitoring of seabird distribution at sea, and could inform the EU Marine Strategy Framework Directive.
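    Strip-transect abundance estimation, which the survey uses, reduces to density = count / (strip width × effort), extrapolated to the study area. A minimal sketch with invented counts (only the 375,000 km² study area is taken from the abstract):

```python
# Hedged sketch of strip-transect abundance estimation: individuals are
# counted inside a strip of known width along the flight track. The bird
# count, strip width and effort below are made up, not survey results.

def strip_transect_abundance(n_individuals, strip_width_km, effort_km, area_km2):
    """Density = count / surveyed area; abundance = density * study area."""
    surveyed_km2 = strip_width_km * effort_km
    density = n_individuals / surveyed_km2
    return density * area_km2

# e.g. 1,200 birds seen in a 0.4 km strip over 10,000 km of effort,
# extrapolated to a 375,000 km2 study area:
estimate = strip_transect_abundance(1200, 0.4, 10000.0, 375000.0)
```

The geostatistical density maps mentioned in the abstract refine this flat extrapolation by letting density vary in space.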

  19. Multivariate geostatistical modeling of the spatial sediment distribution in a large scale drainage basin, Upper Rhone, Switzerland

    Science.gov (United States)

    Schoch, Anna; Blöthe, Jan Henrik; Hoffmann, Thomas; Schrott, Lothar

    2018-02-01

    There is a notable discrepancy between detailed sediment budget studies in small headwater catchments and studies in large (>10³ km²) higher-order catchments that apply modelling and/or remote-sensing-based approaches to delineate major sediment storages. To bridge the gap between these scales, we compiled an inventory of sediment and bedrock coverage from field mapping, remote sensing analysis and published data for five key sites in the Upper Rhone Basin (Val d'Illiez, Val de la Liène, Turtmanntal, Lötschental, Goms; 360.3 km², equivalent to 6.7% of the Upper Rhone Basin). This inventory was used as training and testing data for the classification of sediment and bedrock cover. From a digital elevation model (2 × 2 m ground resolution) and Landsat imagery we derived 22 parameters characterizing local morphometry, topography and position, contributing area, and climatic and biotic factors on different spatial scales, which were used as inputs for different statistical models (logistic regression, principal component logistic regression, generalized additive model). The best predictions, with excellent performance (mean AUROC: 0.8721 ± 0.0012) and both high spatial and non-spatial transferability, were achieved with a generalized additive model. Since the model has a high thematic consistency, the independent input variables, chosen for their geomorphic relevance, are suitable to model the spatial distribution of sediment. Our high-resolution classification shows that 53.5 ± 21.7% of the Upper Rhone Basin is covered with sediment. This cover is by no means evenly distributed: sediment stored in small headwaters constitutes a significant component of large-scale sediment budgets that needs to be better included in future analyses.
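    The AUROC reported for the habitat-classification models can be computed directly from classifier scores via the Mann–Whitney formulation. A self-contained sketch with toy data (the scores are invented; the study's models and data are of course much larger):

```python
# Hedged sketch: AUROC as the probability that a randomly chosen positive
# case receives a higher score than a randomly chosen negative one, with
# ties counted as one half (the Mann-Whitney U formulation).

def auroc(scores_pos, scores_neg):
    """Exact pairwise AUROC; O(n*m), fine for an illustration."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives 1.0; identical scores give 0.5:
value = auroc([0.9, 0.8, 0.7], [0.4, 0.3, 0.2])
```

An AUROC of 0.87, as in the study, means a sediment-covered cell outranks a bedrock cell about 87% of the time under the fitted model.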

  20. Suzaku Observation of A1689: Anisotropic Temperature and Entropy Distributions Associated with the Large-scale Structure

    Science.gov (United States)

    Kawaharada, Madoka; Okabe, Nobuhiro; Umetsu, Keiichi; Takizawa, Motokazu; Matsushita, Kyoko; Fukazawa, Yasushi; Hamana, Takashi; Miyazaki, Satoshi; Nakazawa, Kazuhiro; Ohashi, Takaya

    2010-05-01

    We present results of new, deep Suzaku X-ray observations (160 ks) of the intracluster medium (ICM) in A1689 out to its virial radius, combined with complementary data sets of the projected galaxy distribution obtained from the SDSS catalog and the projected mass distribution from our recent comprehensive weak and strong lensing analysis of Subaru/Suprime-Cam and Hubble Space Telescope/Advanced Camera for Surveys observations. Faint X-ray emission from the ICM around the virial radius (r_vir ≈ 15.6 arcmin) is detected at 4.0σ significance, thanks to the low and stable particle background of Suzaku. The Suzaku observations reveal anisotropic gas temperature and entropy distributions in the cluster outskirts outside r_500, connected to an overdense filamentary structure of galaxies outside the cluster. The gas temperature and entropy profiles in the NE direction are in good agreement, out to the virial radius, with that expected from a recent XMM-Newton statistical study and with an accretion shock heating model of the ICM, respectively. On the contrary, the other outskirt regions, in contact with low-density void environments, have low gas temperatures (~1.7 keV) and entropies, deviating from hydrostatic equilibrium. These anisotropic ICM features associated with large-scale structure environments suggest that the thermalization of the ICM occurs faster along overdense filamentary structures than along low-density void regions. We find that the ICM density distribution is fairly isotropic, with a three-dimensional density slope of -2.29 ± 0.18 in the radial range beyond r_2500. … Our lensing analysis shows that the hydrostatic mass is lower than the spherical lensing mass (~60%-90%), but comparable to a triaxial halo mass within errors, at intermediate radii of 0.6 r_2500 … lensing mass, and ~30%-40% around the virial radius. Although these constitute lower limits when one considers the possible halo triaxiality, these small relative contributions of thermal pressure would require additional…

  1. Large-scale association analyses identify new loci influencing glycemic traits and provide insight into the underlying biological pathways

    Science.gov (United States)

    Scott, Robert A; Lagou, Vasiliki; Welch, Ryan P; Wheeler, Eleanor; Montasser, May E; Luan, Jian’an; Mägi, Reedik; Strawbridge, Rona J; Rehnberg, Emil; Gustafsson, Stefan; Kanoni, Stavroula; Rasmussen-Torvik, Laura J; Yengo, Loïc; Lecoeur, Cecile; Shungin, Dmitry; Sanna, Serena; Sidore, Carlo; Johnson, Paul C D; Jukema, J Wouter; Johnson, Toby; Mahajan, Anubha; Verweij, Niek; Thorleifsson, Gudmar; Hottenga, Jouke-Jan; Shah, Sonia; Smith, Albert V; Sennblad, Bengt; Gieger, Christian; Salo, Perttu; Perola, Markus; Timpson, Nicholas J; Evans, David M; Pourcain, Beate St; Wu, Ying; Andrews, Jeanette S; Hui, Jennie; Bielak, Lawrence F; Zhao, Wei; Horikoshi, Momoko; Navarro, Pau; Isaacs, Aaron; O’Connell, Jeffrey R; Stirrups, Kathleen; Vitart, Veronique; Hayward, Caroline; Esko, Tönu; Mihailov, Evelin; Fraser, Ross M; Fall, Tove; Voight, Benjamin F; Raychaudhuri, Soumya; Chen, Han; Lindgren, Cecilia M; Morris, Andrew P; Rayner, Nigel W; Robertson, Neil; Rybin, Denis; Liu, Ching-Ti; Beckmann, Jacques S; Willems, Sara M; Chines, Peter S; Jackson, Anne U; Kang, Hyun Min; Stringham, Heather M; Song, Kijoung; Tanaka, Toshiko; Peden, John F; Goel, Anuj; Hicks, Andrew A; An, Ping; Müller-Nurasyid, Martina; Franco-Cereceda, Anders; Folkersen, Lasse; Marullo, Letizia; Jansen, Hanneke; Oldehinkel, Albertine J; Bruinenberg, Marcel; Pankow, James S; North, Kari E; Forouhi, Nita G; Loos, Ruth J F; Edkins, Sarah; Varga, Tibor V; Hallmans, Göran; Oksa, Heikki; Antonella, Mulas; Nagaraja, Ramaiah; Trompet, Stella; Ford, Ian; Bakker, Stephan J L; Kong, Augustine; Kumari, Meena; Gigante, Bruna; Herder, Christian; Munroe, Patricia B; Caulfield, Mark; Antti, Jula; Mangino, Massimo; Small, Kerrin; Miljkovic, Iva; Liu, Yongmei; Atalay, Mustafa; Kiess, Wieland; James, Alan L; Rivadeneira, Fernando; Uitterlinden, Andre G; Palmer, Colin N A; Doney, Alex S F; Willemsen, Gonneke; Smit, Johannes H; Campbell, Susan; Polasek, Ozren; Bonnycastle, Lori L; Hercberg, Serge; Dimitriou, Maria; Bolton, 
Jennifer L; Fowkes, Gerard R; Kovacs, Peter; Lindström, Jaana; Zemunik, Tatijana; Bandinelli, Stefania; Wild, Sarah H; Basart, Hanneke V; Rathmann, Wolfgang; Grallert, Harald; Maerz, Winfried; Kleber, Marcus E; Boehm, Bernhard O; Peters, Annette; Pramstaller, Peter P; Province, Michael A; Borecki, Ingrid B; Hastie, Nicholas D; Rudan, Igor; Campbell, Harry; Watkins, Hugh; Farrall, Martin; Stumvoll, Michael; Ferrucci, Luigi; Waterworth, Dawn M; Bergman, Richard N; Collins, Francis S; Tuomilehto, Jaakko; Watanabe, Richard M; de Geus, Eco J C; Penninx, Brenda W; Hofman, Albert; Oostra, Ben A; Psaty, Bruce M; Vollenweider, Peter; Wilson, James F; Wright, Alan F; Hovingh, G Kees; Metspalu, Andres; Uusitupa, Matti; Magnusson, Patrik K E; Kyvik, Kirsten O; Kaprio, Jaakko; Price, Jackie F; Dedoussis, George V; Deloukas, Panos; Meneton, Pierre; Lind, Lars; Boehnke, Michael; Shuldiner, Alan R; van Duijn, Cornelia M; Morris, Andrew D; Toenjes, Anke; Peyser, Patricia A; Beilby, John P; Körner, Antje; Kuusisto, Johanna; Laakso, Markku; Bornstein, Stefan R; Schwarz, Peter E H; Lakka, Timo A; Rauramaa, Rainer; Adair, Linda S; Smith, George Davey; Spector, Tim D; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Gudnason, Vilmundur; Kivimaki, Mika; Hingorani, Aroon; Keinanen-Kiukaanniemi, Sirkka M; Saaristo, Timo E; Boomsma, Dorret I; Stefansson, Kari; van der Harst, Pim; Dupuis, Josée; Pedersen, Nancy L; Sattar, Naveed; Harris, Tamara B; Cucca, Francesco; Ripatti, Samuli; Salomaa, Veikko; Mohlke, Karen L; Balkau, Beverley; Froguel, Philippe; Pouta, Anneli; Jarvelin, Marjo-Riitta; Wareham, Nicholas J; Bouatia-Naji, Nabila; McCarthy, Mark I; Franks, Paul W; Meigs, James B; Teslovich, Tanya M; Florez, Jose C; Langenberg, Claudia; Ingelsson, Erik; Prokopenko, Inga; Barroso, Inês

    2012-01-01

    Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have raised the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes risk (q < 0.05). Loci influencing fasting insulin showed association with lipid levels and fat distribution, suggesting impact on insulin resistance. Gene-based analyses identified further biologically plausible loci, suggesting that additional loci beyond those reaching genome-wide significance are likely to represent real associations. This conclusion is supported by an excess of directionally consistent and nominally significant signals between discovery and follow-up studies. Functional follow-up of these newly discovered loci will further improve our understanding of glycemic control. PMID:22885924

  2. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. 
We demonstrate the effectiveness of these two proposed swarm…
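    The Eulerian bin-transition idea behind probabilistic swarm guidance can be illustrated with any Markov chain whose stationary distribution equals the target bin occupancy. The sketch below uses a simple Metropolis construction, not the papers' PSG-IMC or PSG-OT constructions, and the bins and target are invented:

```python
# Hedged sketch: build a row-stochastic transition matrix whose stationary
# distribution is the desired swarm density over bins, then propagate the
# swarm's density until it converges. This is the general Eulerian idea;
# PSG-IMC adapts the chain online from swarm feedback, which we omit.

def metropolis_chain(target):
    """Transition matrix with `target` as stationary distribution."""
    k = len(target)
    M = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i != j:
                # uniform proposal over k bins, Metropolis acceptance
                M[i][j] = min(1.0, target[j] / target[i]) / k
        M[i][i] = 1.0 - sum(M[i])        # stay put with leftover probability
    return M

def propagate(dist, M, steps):
    """Push the swarm density through the chain `steps` times."""
    for _ in range(steps):
        dist = [sum(dist[i] * M[i][j] for i in range(len(dist)))
                for j in range(len(M))]
    return dist

target = [0.5, 0.3, 0.2]                 # desired fraction of agents per bin
swarm = propagate([1.0, 0.0, 0.0], metropolis_chain(target), 50)
```

Each agent samples its next bin independently from its current row of the matrix, so no agent needs to know any other agent's position, which is what makes the scheme distributed and scalable.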

  3. A Large-Scale Initiative Inviting Patients to Share Personal Fitness Tracker Data with Their Providers: Initial Results.

    Science.gov (United States)

    Pevnick, Joshua M; Fuller, Garth; Duncan, Ray; Spiegel, Brennan M R

    2016-01-01

    Personal fitness trackers (PFTs) have substantial potential to improve healthcare. Our objective was to quantify and characterize the early adopters who shared their PFT data with providers. We used bivariate statistics and logistic regression to compare patients who shared any PFT data with patients who did not. A patient portal was used to invite 79,953 registered portal users to share their data. Of the 66,105 users included in our analysis, 499 (0.8%) uploaded data during an initial 37-day study period. Bivariate and regression analyses showed that early adopters were more likely than non-adopters to be younger, male, white, health-system employees, and to have higher BMIs. Neither comorbidities nor utilization predicted adoption. Our results demonstrate that patients had little intrinsic desire to share PFT data with their providers, and suggest that the patients most at risk for poor health outcomes are the least likely to share PFT data. Marketing, incentives, and/or cultural change may be needed to induce such data-sharing.
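    The headline uptake figure, and the kind of bivariate odds ratio used to characterize adopters, can be reproduced in a few lines. Only the 499 / 66,105 counts come from the abstract; the 2×2 male/female split below is hypothetical:

```python
# Hedged sketch: reproduce the reported uptake rate and compute an odds
# ratio of the sort a bivariate analysis reports. The 2x2 counts for the
# odds-ratio example are invented, not the study's data.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a=exposed adopters, b=exposed non-adopters,
    c=unexposed adopters, d=unexposed non-adopters."""
    return (a * d) / (b * c)

uptake = 499 / 66105                     # ~0.8% of analysed portal users
or_male = odds_ratio(300, 25000, 199, 41000)   # hypothetical split by sex
```

An odds ratio above 1 for a hypothetical "male" exposure would match the abstract's direction of effect; the magnitude here is meaningless since the counts are invented.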

  4. Large-scale gene expression study in the ophiuroid Amphiura filiformis provides insights into evolution of gene regulatory networks

    Directory of Open Access Journals (Sweden)

    David Viktor Dylus

    2016-01-01

    Background: The evolutionary mechanisms involved in shaping complex gene regulatory networks (GRNs) that encode morphologically similar structures in distantly related animals remain elusive. In this context, echinoderm larval skeletons found in brittle stars and sea urchins provide an ideal system. Here, we characterize for the first time the development of the larval skeleton in the ophiuroid Amphiura filiformis and compare it systematically with its counterpart in sea urchin. Results: We show that ophiuroids and euechinoids, which split at least 480 million years ago (Mya), have remarkable similarities in the tempo and mode of skeletal development. Despite morphological and ontological similarities, our high-resolution study of the dynamics of genetic regulatory states in A. filiformis highlights numerous differences in the architecture of the underlying GRNs. Importantly, the A. filiformis pplx, the closest gene to the sea urchin double negative gate (DNG) repressor pmar1, fails to drive the skeletogenic program in sea urchin, showing important evolutionary differences in protein function. hesC, the second repressor of the DNG, is co-expressed with most of the genes that are repressed in sea urchin, indicating the absence of direct repression of tbr, ets1/2, and delta in A. filiformis. Furthermore, the absence of expression in later stages of brittle star skeleton development of key regulatory genes, such as foxb and dri, shows significantly different regulatory states. Conclusion: Our data fill an important gap in the picture of larval mesoderm in echinoderms and allow us to explore the evolutionary implications relative to the recently established phylogeny of echinoderm classes. In light of recent studies on other echinoderms, our data highlight a high evolutionary plasticity of the same nodes throughout the evolution of echinoderm skeletogenesis. Finally, gene duplication, protein function diversification, and cis-regulatory element…

  5. PROMET - Large scale distributed hydrological modelling to study the impact of climate change on the water flows of mountain watersheds

    Science.gov (United States)

    Mauser, Wolfram; Bach, Heike

    2009-10-01

    Climate change will change the availability, quality and allocation of regional water resources. Appropriate modelling tools should therefore be available to realistically describe the reactions of watersheds to climate change and to identify efficient and effective adaptation strategies on the regional scale. The paper presents the hydrologic model PROMET (Processes of Radiation, Mass and Energy Transfer), which was developed within the GLOWA-Danube project as part of the decision support system DANUBIA. PROMET covers the coupled water and energy fluxes of large-scale (A ≈ 100,000 km²) watersheds. It is fully spatially distributed, raster-based with raster elements of 1 km² area, runs on an hourly time step, strictly conserves mass and energy, and is not calibrated using measured discharges. Details on the model concept and the individual model components are given. An application case of PROMET is given for the mountainous Upper Danube watershed in Central Europe (A = 77,000 km²). The water resources are intensively utilized for hydropower, agriculture, industry and tourism. The water flows are significantly influenced by man-made structures like reservoirs and water diversions. A 33-year model run covering the period from 1971 to 2003, using the existing meteorological station network as input, is used to validate the performance of PROMET against measured stream flow data. Three aspects of the model performance were validated with good to very good results: the annual variation of the water balance of the whole watershed and selected sub-watersheds, the daily runoff for the whole period at selected gauges, and the annual flood peaks and low flows (minimum 7-day average). PROMET is used to investigate the impact of climate change on the water cycle of the Upper Danube. A stochastic climate generator is fed with two scenarios of climate development until 2060. One assumes no future temperature change, the other uses the temperature trends of the IPCC A1B…
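    A strictly mass-conserving water balance, which PROMET enforces on every raster element, can be illustrated with a single-bucket model. This is an enormous simplification of a physically based model; the capacity, drainage constant and forcing series below are invented:

```python
# Hedged sketch: hourly single-bucket water balance. Every millimetre of
# precipitation either evaporates, spills, drains, or stays in storage,
# so mass is conserved exactly, step by step. All constants are invented.

def bucket_run(precip, pet, capacity=100.0, k=0.05, storage=50.0):
    """Return (runoff, actual evaporation, final storage) per time step."""
    runoff, evap = [], []
    for p, e in zip(precip, pet):
        storage += p                          # rain fills the bucket
        e_act = min(e, storage)               # can't evaporate more than stored
        storage -= e_act
        spill = max(0.0, storage - capacity)  # overflow above capacity
        storage -= spill
        q = k * storage                       # linear-reservoir drainage
        storage -= q
        runoff.append(q + spill)
        evap.append(e_act)
    return runoff, evap, storage

q, e, s = bucket_run([10, 0, 80, 0], [2, 2, 2, 2])
```

The closing check (initial storage + precipitation = evaporation + runoff + final storage) is exactly the balance a model like PROMET must satisfy on each of its million-odd raster elements.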

  6. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  7. Geospatial analysis and distribution patterns of home nursing care in a metropolitan area - a large-scale analysis.

    Science.gov (United States)

    Bauer, Jan; Reinhard, Julia; Boll, Michael; Groneberg, David

    2017-01-01

    This study focuses on the distribution of home nursing care in an urban setting in Germany, where a shortage of the nursing care workforce is present. A geospatial analysis was performed to examine distribution patterns at the district level in Frankfurt, Germany (n = 46 districts), and factors influencing the location choice of home nursing care providers (n = 151) were analysed. Within the analysis we focused on the population aged over 65 years to model the demand for nursing care. The analysis revealed a tendency of home nursing care providers to be located near the city centre (a centripetal distribution pattern). The demand for care showed more inconsistent patterns, although a centripetal distribution pattern of demand could still be stated. Compared with the control groups (e.g. acute hospitals and pharmacies), similar geographical distribution patterns were present; however, the location of home nursing care providers was less influenced by demand than that of the control groups. The supply of nursing care was unevenly distributed in this metropolitan setting, but still matched the demand for nursing care. Due to rapidly changing health care environments, policy regulations must be (re-)evaluated critically to improve the management and delivery of nursing care provision. © 2016 John Wiley & Sons Ltd.
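    One simple way to quantify a centripetal pattern is to compare mean great-circle distances to the city centre across provider groups. A sketch with invented coordinates; only Frankfurt's centre location is approximate, and this is not the study's actual method:

```python
import math

# Hedged sketch: compare mean haversine distance to the city centre for
# two groups of points. Provider coordinates below are invented; the
# study's geospatial analysis is district-based and more sophisticated.

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

CENTRE = (50.110, 8.682)  # Frankfurt city centre (approximate)

def mean_centre_distance(points):
    return sum(haversine_km(lat, lon, *CENTRE) for lat, lon in points) / len(points)

inner = [(50.115, 8.69), (50.105, 8.67)]  # hypothetical inner-city providers
outer = [(50.20, 8.60), (50.02, 8.75)]    # hypothetical peripheral providers
```

A smaller mean distance for providers than for a demand-weighted population sample would be one numerical signature of the centripetal pattern the study describes.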

  8. Large-scale distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory

    NARCIS (Netherlands)

    Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muniz, J.; Batista, R. Alves; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antici'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S.L.C.; Baughman, B.; Baeuml, J.; Baus, C.; Beatty, J. J.; Becker, K.H.; Belletoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Bluemer, H.; Bohacova, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S.H.; Chiavassa, A.; Chinellato, J. A.; Diaz, J. Chirinos; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceicao, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R.M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Rio, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Diaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; San Luis, P. Facal; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. 
C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipcic, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Froehlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; Garcia, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gomez Berisso, M.; Gomez Vitale, P. F.; Goncalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hoerandel, J. R.; Horvath, P.; Hrabovsky, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kegl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D. -H.; Kotera, K.; Krohm, N.; Kroemer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leao, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui De Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopez, R.; Lopez Agueera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martinez, H.; Martinez Bravo, O.; Martraire, D.; Masias Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. 
O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, Rishi; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafa, M.; Moura, C. A.; Muller, M. A.; Mueller, G.; Muenchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nozka, L.; Oehlschlaeger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Selmi-Dei, D. Pakk; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Frias, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouille-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Ruehle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Greus, F. Salesa; Salina, G.; Sanchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovanek, P.; Schroeder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; 'Smiallkowski, A.; Smida, R.; Snow, G. 
R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijaervi, T.; Supanitsky, A. D.; Susa, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Tascau, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tome, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdes Galicia, J. F.; Valino, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, Arjen; Varela, E.; Vargas Cardenas, B.; Vazquez, J. R.; Vazquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villasenor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Martin, L.

    2012-01-01

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10¹⁸ eV, and…
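    Large-scale anisotropy searches of this kind classically use a first-harmonic (Rayleigh) analysis in right ascension. A minimal sketch on synthetic isotropic events; the event sample is generated here, not taken from the Observatory:

```python
import math
import random

# Hedged sketch of a first-harmonic (Rayleigh) analysis in right ascension,
# the classical tool for large-scale anisotropy searches. The "events" are
# synthetic and isotropic by construction.

def first_harmonic(alphas):
    """Return (amplitude r, phase in rad) of the first harmonic in RA."""
    n = len(alphas)
    a = 2.0 / n * sum(math.cos(x) for x in alphas)
    b = 2.0 / n * sum(math.sin(x) for x in alphas)
    return math.hypot(a, b), math.atan2(b, a)

rng = random.Random(0)
events = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(20000)]
r, phase = first_harmonic(events)
# Under isotropy the expected amplitude is ~ sqrt(pi / n), so r stays small.
```

A genuine dipole of amplitude d in right ascension would show up as r ≈ d well above the isotropic expectation, which is what such searches test for in each energy range.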

  9. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Alongside these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational costs increase significantly. A novel science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas. 
Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  10. The typical MSW odorants identification and the spatial odorants distribution in a large-scale transfer station.

    Science.gov (United States)

    Sun, Zhongtao; Cheng, Zhaowen; Wang, Luochun; Lou, Ziyang; Zhu, Nanwen; Zhou, Xuejun; Feng, Lili

    2017-03-01

    Odorants from municipal solid waste (MSW) are complex and variable, and screening for the key offensive odorants is a prerequisite for any odor control process. In this study, spatial odor emissions and environmental impacts were investigated at a large-scale working waste transfer station (LSWTS) using a waste container system, and a comprehensive odor characterization method was developed and applied in terms of the odor concentration (OC), theoretical odor concentration (TOC), total chemical concentration (TCC), and electronic nose (EN) response. The detected odor concentration ranged from 14 to 28 (dimensionless); the MSW container showed the highest OC value of 28, EN of 78, and TCC of 35 ppm due to the accumulation of leachate and residual MSW. Ninety-two odorant species were identified; H2S, NH3, benzene, styrene, ethyl acetate, and dichloromethane were the main contributors in the container, while benzene, m,p,x-xylene, butanone, acetone, isopropanol, and ethyl acetate predominated at the compression surface (CS) and compression plant (CP). The roadsides (SR) and unloading hall (UH) showed low odorous impact. Based on this odorant list, 20 odorous substances were screened for priority control through a synthetic evaluation method, considering odorant concentrations, toxicity, threshold values, detection frequency, saturated vapor pressure, and occurrence frequency.
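A theoretical odor concentration of the kind this record names is commonly formed by summing odor activity values (measured concentration divided by the compound's odor detection threshold); whether the study uses exactly this definition is an assumption, and the species, concentrations, and thresholds below are invented for illustration, not the study's data:

```python
# Theoretical odor concentration (TOC) as a sum of odor activity values
# (OAV = chemical concentration / odor detection threshold).
# All numbers below are illustrative placeholders.
concentrations_ppm = {"H2S": 0.8, "NH3": 5.0, "styrene": 0.3}
thresholds_ppm = {"H2S": 0.00041, "NH3": 1.5, "styrene": 0.035}

oav = {s: concentrations_ppm[s] / thresholds_ppm[s] for s in concentrations_ppm}
toc = sum(oav.values())

# Ranking species by OAV contribution identifies priority odorants:
# compounds with tiny thresholds (like H2S) dominate even at low levels.
for species, value in sorted(oav.items(), key=lambda kv: -kv[1]):
    print(species, round(value, 1))
print("TOC:", round(toc, 1))
```

The ranking step mirrors why a low-concentration, low-threshold compound can outweigh a high-concentration one in a priority list.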

  11. Simulating the large-scale spatial sand-mud distribution in a schematized process-based tidal inlet system model

    NARCIS (Netherlands)

    Scheel, F.; van Ledden, M.; van Prooijen, B.C.; Stive, M.J.F.

    2012-01-01

    Tidal basins, as found in the Dutch Wadden Sea, are characterized by strong spatial variations in bathymetry and sediment distribution. In this contribution, the aim is at simulating the spatial sand-mud distribution of a tidal basin. Predicting this spatial distribution is however complicated, due

  12. Large scale experiments simulating hydrogen distribution in a spent fuel pool building during a hypothetical fuel uncovery accident scenario

    Energy Technology Data Exchange (ETDEWEB)

    Mignot, Guillaume; Paranjape, Sidharth; Paladino, Domenico; Jaeckel, Bernd; Rydl, Adolf [Paul Scherrer Institute, Villigen (Switzerland)

    2016-08-15

    Following the Fukushima accident and its extended station blackout, attention was brought to the importance of the spent fuel pools' (SFPs) behavior in case of a prolonged loss of the cooling system. Since then, many analytical works have been performed to estimate the timing of hypothetical fuel uncovery for various SFP types. Experimentally, however, little was done to investigate issues related to the formation of a flammable gas mixture, its distribution and stratification in the SFP building itself, and to some extent to assess the capability of codes to correctly predict it. This paper presents the main outcomes of the Experiments on Spent Fuel Pool (ESFP) project carried out under the auspices of Swissnuclear (Framework 2012–2013) in the PANDA facility at the Paul Scherrer Institut in Switzerland. It consists of an experimental investigation focused on hydrogen concentration build-up in a SFP building during a predefined scaled scenario for different venting positions. Tests follow a two-phase scenario: initially steam is released to mimic the boiling of the pool, followed by a helium/steam mixture release to simulate the deterioration of the oxidizing spent fuel. Results show that while the SFP building would mainly be inerted by the presence of a high concentration of steam, the volume located below the level of the pool in adjacent rooms would maintain a high air content. The interface of the two gas mixtures presents the highest risk of flammability. Additionally, it was observed that the gas mixture could become stagnant, leading locally to high hydrogen concentrations while steam condenses. Overall, the experiments provide relevant information on the potentially hazardous gas distribution formed in the SFP building and hints on accident management and on possible retrofitting measures to be implemented in the SFP building.

  13. Cosmological density fluctuations and large-scale structure From N-point correlation functions to the probability distribution

    Science.gov (United States)

    Fry, J. N.

    1985-01-01

    Knowledge of the N-point correlation functions for all N allows one to invert and obtain the probability distribution of mass fluctuations in a fixed volume. A hierarchical sequence of higher-order correlations, with dimensionless amplitudes suggested by the BBGKY equations, is applied. The resulting distribution is significantly non-Gaussian, even for quite small mean-square fluctuations. The qualitative and, to some degree, quantitative results are largely independent of the exact sequence of amplitudes. An ensemble of such models compared with N-body simulations fails in detail to account for the low-density frequency distribution.
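The inversion stated in the first sentence can be made explicit with a generating function. In standard notation (ours, not necessarily the paper's), if \bar{N} is the mean count in a cell of volume V and \bar{\xi}_k the volume-averaged k-point correlation function, the count probability follows from

```latex
P(N) \;=\; \frac{1}{N!}\,
\frac{\partial^{N}}{\partial \lambda^{N}}
\exp\!\left[\sum_{k\geq 1}\frac{(\lambda-1)^{k}}{k!}\,\bar{N}^{k}\,\bar{\xi}_{k}\right]
\Bigg|_{\lambda=0},
\qquad \bar{\xi}_{1}\equiv 1 .
```

Setting \bar{\xi}_k = 0 for k \geq 2 recovers the Poisson distribution; hierarchical models close the sequence by taking \bar{\xi}_k = Q_k\,\bar{\xi}_2^{\,k-1} with dimensionless amplitudes Q_k, which is the kind of sequence the abstract refers to.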

  14. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users often need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  15. Large-scale structure

    CERN Document Server

    White, S D M

    1993-01-01

    Abstract. Recent observational surveys have made substantial progress in quantifying the structure of the Universe on large scales. Galaxy density and galaxy velocity fields show deviations from the predictions of a homogeneous and isotropic world model on scales approaching one percent of the current horizon scale. A comparison of the amplitudes in density and in velocity provides the first direct dynamical evidence in favour of a high mean density similar to that required for closure. The fluctuations observed on these scales have the amplitude predicted by the standard Cold Dark Matter (CDM) model when this model is normalised to agree with the microwave background fluctuations measured on much larger scales by the COBE satellite. However, a CDM model with this amplitude appears inconsistent with observational data on smaller scales. In addition it predicts a scale dependence of fluctuation amplitude which disagrees with that observed for galaxies in the APM survey of two million faint galaxi...

  16. Breeding density, fine-scale tracking, and large-scale modeling reveal the regional distribution of four seabird species.

    Science.gov (United States)

    Wakefield, Ewan D; Owen, Ellie; Baer, Julia; Carroll, Matthew J; Daunt, Francis; Dodd, Stephen G; Green, Jonathan A; Guilford, Tim; Mavor, Roddy A; Miller, Peter I; Newell, Mark A; Newton, Stephen F; Robertson, Gail S; Shoji, Akiko; Soanes, Louise M; Votier, Stephen C; Wanless, Sarah; Bolton, Mark

    2017-10-01

    Population-level estimates of species' distributions can reveal fundamental ecological processes and facilitate conservation. However, these may be difficult to obtain for mobile species, especially colonial central-place foragers (CCPFs; e.g., bats, corvids, social insects), because it is often impractical to determine the provenance of individuals observed beyond breeding sites. Moreover, some CCPFs, especially in the marine realm (e.g., pinnipeds, turtles, and seabirds) are difficult to observe because they range tens to ten thousand kilometers from their colonies. It is hypothesized that the distribution of CCPFs depends largely on habitat availability and intraspecific competition. Modeling these effects may therefore allow distributions to be estimated from samples of individual spatial usage. Such data can be obtained for an increasing number of species using tracking technology. However, techniques for estimating population-level distributions using such telemetry data are poorly developed. This is of concern because many marine CCPFs, such as seabirds, are threatened by anthropogenic activities. Here, we aim to estimate the distribution at sea of four seabird species, foraging from approximately 5,500 breeding sites in Britain and Ireland. To do so, we GPS-tracked a sample of 230 European Shags Phalacrocorax aristotelis, 464 Black-legged Kittiwakes Rissa tridactyla, 178 Common Murres Uria aalge, and 281 Razorbills Alca torda from 13, 20, 12, and 14 colonies, respectively. Using Poisson point process habitat use models, we show that distribution at sea is dependent on (1) density-dependent competition among sympatric conspecifics (all species) and parapatric conspecifics (Kittiwakes and Murres); (2) habitat accessibility and coastal geometry, such that birds travel further from colonies with limited access to the sea; and (3) regional habitat availability. Using these models, we predict space use by birds from unobserved colonies and thereby map the
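As a sketch of the modeling approach named in this record: an inhomogeneous Poisson point process fitted to gridded counts is equivalent to a Poisson GLM with a log link. The covariates, coefficients, and data below are synthetic, and this is not the authors' code or model specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grid cells: intensity depends on distance to a colony and a
# habitat covariate (both hypothetical stand-ins for the paper's predictors).
n = 2000
dist = rng.uniform(0, 100, n)            # distance to colony (km)
habitat = rng.normal(0, 1, n)            # habitat suitability index
X = np.column_stack([np.ones(n), dist, habitat])
beta_true = np.array([1.0, -0.03, 0.5])  # usage declines with distance
counts = rng.poisson(np.exp(X @ beta_true))

# Poisson GLM (log link) fitted by Newton-Raphson; on gridded counts this
# maximizes the inhomogeneous Poisson point-process likelihood.
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (counts - mu)            # score vector
    hess = X.T @ (X * mu[:, None])        # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

print(np.round(beta, 3))
```

The recovered negative distance coefficient is what lets such a model extrapolate space use to unobserved colonies.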

  17. Mortality at wind-farms is positively related to large-scale distribution and aggregation in griffon vultures

    OpenAIRE

    Carrete, Martina; Sánchez-Zapata, José A.; Benítez, J. R.; Lobón, M.; Montoya, F.; Donázar, José A.

    2012-01-01

    Wind-farms have negative impacts on the environment, mainly through habitat destruction and bird mortality, making it urgent to design predictive tools to use in landscape planning. A frequent assumption of wind-farm assessment studies is that bird distribution and abundance and bird mortality through collision with turbines are closely related. However, previous results are contradictory and question the usefulness of these variables to select safer wind-farm locations. We focused on a speci...

  18. ‘Oorja’ in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households

    Science.gov (United States)

    Thurber, Mark C.; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2015-01-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 “Oorja” stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on this data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of “agricultural waste” to

  19. Ownership and usage of mosquito nets after four years of large-scale free distribution in Papua New Guinea

    Directory of Open Access Journals (Sweden)

    Hetzel Manuel W

    2012-06-01

    Full Text Available Abstract Background Papua New Guinea (PNG) is a highly malaria endemic country in the South-West Pacific with a population of approximately 6.6 million (2009). In 2004, the country intensified its malaria control activities with support from the Global Fund. With the aim of achieving 80% ownership and usage, a country-wide campaign distributed two million free long-lasting insecticide-treated nets (LLINs). Methods In order to evaluate outcomes of the campaign against programme targets, a country-wide household survey based on stratified multi-stage random sampling was carried out in 17 of the 20 provinces after the campaign in 2008/09. In addition, a before-after assessment was carried out in six purposively selected sentinel sites. A structured questionnaire was administered to the heads of sampled households to elicit net ownership and usage information. Results After the campaign, 64.6% of households owned a LLIN, 80.1% any type of mosquito net. Overall usage by household members amounted to 32.5% for LLINs and 44.3% for nets in general. Amongst children under five years, 39.5% used a LLIN and 51.8% any type of net, whereas 41.3% of pregnant women used a LLIN and 56.1% any net. Accessibility of villages was the key determinant of net ownership, while usage was mainly determined by ownership. Most (99.5%) of the household members who did not sleep under a net did not have access to an (unused) net in their household. In the sentinel sites, LLIN ownership increased from 9.4% to 88.7%, ownership of any net from 52.7% to 94.1%. Usage of LLINs increased from 5.5% to 55.1%, usage of any net from 37.3% to 66.7%. Among children under five years, usage of LLINs and of nets in general increased from 8.2% to 67.0% and from 44.6% to 76.1%, respectively (all p ≤ 0.001). Conclusions While a single round of free distribution of LLINs significantly increased net ownership, an insufficient number of nets coupled with a heterogeneous distribution led to overall

  20. An Integrated Specification and Verification Environment for Component-Based Architectures of Large-Scale Distributed Systems

    Science.gov (United States)

    2009-05-26

    brevity. Intermediate Representation: In Indus, Java programs are represented in Jimple [80], a typed three-address representation provided by... the Soot framework. As a result, every module in Indus operates on the Jimple representation of Java... (Soot, a Java optimization framework, is available at http...) ...leverage features from Indus with very little effort. Also, provided there is a translator from source to Jimple and/or bytecode to Jimple (as built

  1. Characterization of antibiotics in a large-scale river system of China: Occurrence pattern, spatiotemporal distribution and environmental risks.

    Science.gov (United States)

    Chen, Haiyang; Jing, Lijun; Teng, Yanguo; Wang, Jinsheng

    2017-11-10

    Antibiotics and antibiotic resistance genes in the river system have received growing attention in recent years due to their potential threat to aquatic ecosystems and public health. Recognizing the occurrence and distribution of antibiotics in the river environment and assessing their ecological risks are an important precondition for proposing effective strategies to protect basin safety. In this study, a comprehensive investigation was conducted to identify the contamination and risk characteristics of antibiotics in the aquatic environment of the Hai River system (HRS), the largest water system in northern China. To attain this objective, several tools and methods were applied to a data set of water and sediment samples collected over the past ten years. The occurrence pattern, concentration levels and spatiotemporal distribution of antibiotics in the HRS were characterized using statistical and comparative analysis. Risk quotients were employed to assess the adverse ecological effects caused by single antibiotics or their mixtures. A screening tool with a priority factor and an accumulation growth factor was used as an auxiliary step to prioritize antibiotics of high concern. Results indicated that the occurrence frequencies and concentration levels of 16 representative antibiotics in the HRS were generally higher than those reported in global waters. Most antibiotics showed significant seasonal and spatial variations. Individually, sulfamethoxazole, norfloxacin, erythromycin and roxithromycin posed the highest risks to aquatic organisms in the HRS, and the combination of tetracycline and enrofloxacin acted synergistically. Overall, due to their potential risks, considerable levels or quickly increasing trends, 13 antibiotics were identified as priority contaminants in the HRS and warrant special attention and strict regulation in the future.
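Risk quotients of the kind used in this record are conventionally computed as RQ = MEC/PNEC (measured environmental concentration over predicted no-effect concentration), with common bands RQ < 0.1 low, 0.1–1 medium, ≥ 1 high. A minimal sketch with invented concentrations, not the study's data:

```python
# Risk-quotient screening: RQ = MEC / PNEC, banded into low/medium/high.
# Substances are from the abstract; all concentration values are invented.
def risk_quotient(mec_ng_l, pnec_ng_l):
    return mec_ng_l / pnec_ng_l

def risk_band(rq):
    if rq >= 1.0:
        return "high"
    if rq >= 0.1:
        return "medium"
    return "low"

measured = {"sulfamethoxazole": 120.0, "norfloxacin": 85.0, "erythromycin": 40.0}
pnec = {"sulfamethoxazole": 100.0, "norfloxacin": 120.0, "erythromycin": 200.0}

for drug in measured:
    rq = risk_quotient(measured[drug], pnec[drug])
    print(f"{drug}: RQ={rq:.2f} ({risk_band(rq)})")
```

Mixture risk is often approximated by summing the individual RQs under an assumption of concentration addition, which is one way synergistic or additive effects get flagged for follow-up.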

  2. Large-scale upper tropospheric pollution observed by MIPAS HCN and C2H6 global distributions

    Directory of Open Access Journals (Sweden)

    A. Linden

    2009-12-01

    Full Text Available We present global upper tropospheric HCN and C2H6 amounts derived from MIPAS/ENVISAT limb emission spectra. HCN and C2H6 are retrieved in the spectral regions 715.5–782.7 cm−1 and 811.5–835.7 cm−1, respectively. The datasets consist of 54 days between September 2003 and March 2004. This period covers the peak and decline of the southern hemispheric biomass burning period and some months thereafter. HCN is a nearly unambiguous tracer of biomass burning with an assumed tropospheric lifetime of several months. Indeed, the most significant feature in the MIPAS HCN dataset is an upper tropospheric plume of enhanced values caused by southern hemispheric biomass burning, which in September and October 2003 extended from tropical South America over Africa, Australia to the Southern Pacific. The spatial extent of this plume agrees well with the MOPITT CO distribution of September 2003. Further there is good agreement with the shapes and mixing ratios of the southern hemispheric HCN and C2H6 fields measured by the ACE experiment between September and November 2005. The MIPAS HCN plume extended from the lowermost observation height of 8 km up to about 16 km altitude, with maximum values of 500–600 pptv in October 2003. It was still clearly visible in December 2003, but had strongly decreased by March 2004, confirming the assumed tropospheric lifetime. The main sources of C2H6 are production and transmission of fossil fuels, followed by biofuel use and biomass burning. The C2H6 distribution also clearly reflected the southern hemispheric biomass burning plume and its seasonal variation, with maximum amounts of 600–700 pptv. Generally there was good spatial overlap between the southern hemispheric distributions of both pollution tracers, except for the region between Peru and the mid-Pacific. Here C2H6 was considerably enhanced, whereas the HCN amounts were low. Backward trajectory calculations suggested that industrial pollution was responsible

  3. Large-scale upper tropospheric pollution observed by MIPAS HCN and C2H6 global distributions

    Science.gov (United States)

    Glatthor, N.; von Clarmann, T.; Stiller, G. P.; Funke, B.; Koukouli, M. E.; Fischer, H.; Grabowski, U.; Höpfner, M.; Kellmann, S.; Linden, A.

    2009-12-01

    We present global upper tropospheric HCN and C2H6 amounts derived from MIPAS/ENVISAT limb emission spectra. HCN and C2H6 are retrieved in the spectral regions 715.5-782.7 cm-1 and 811.5-835.7 cm-1, respectively. The datasets consist of 54 days between September 2003 and March 2004. This period covers the peak and decline of the southern hemispheric biomass burning period and some months thereafter. HCN is a nearly unambiguous tracer of biomass burning with an assumed tropospheric lifetime of several months. Indeed, the most significant feature in the MIPAS HCN dataset is an upper tropospheric plume of enhanced values caused by southern hemispheric biomass burning, which in September and October 2003 extended from tropical South America over Africa, Australia to the Southern Pacific. The spatial extent of this plume agrees well with the MOPITT CO distribution of September 2003. Further there is good agreement with the shapes and mixing ratios of the southern hemispheric HCN and C2H6 fields measured by the ACE experiment between September and November 2005. The MIPAS HCN plume extended from the lowermost observation height of 8 km up to about 16 km altitude, with maximum values of 500-600 pptv in October 2003. It was still clearly visible in December 2003, but had strongly decreased by March 2004, confirming the assumed tropospheric lifetime. The main sources of C2H6 are production and transmission of fossil fuels, followed by biofuel use and biomass burning. The C2H6 distribution also clearly reflected the southern hemispheric biomass burning plume and its seasonal variation, with maximum amounts of 600-700 pptv. Generally there was good spatial overlap between the southern hemispheric distributions of both pollution tracers, except for the region between Peru and the mid-Pacific. Here C2H6 was considerably enhanced, whereas the HCN amounts were low. Backward trajectory calculations suggested that industrial pollution was responsible for the elevated C2H6

  4. Spatiotemporal distribution of nitrogen dioxide within and around a large-scale wind farm - a numerical case study

    Science.gov (United States)

    Mo, Jingyue; Huang, Tao; Zhang, Xiaodong; Zhao, Yuan; Liu, Xiao; Li, Jixiang; Gao, Hong; Ma, Jianmin

    2017-12-01

    As a renewable and clean energy source, wind power has become the most rapidly growing energy resource worldwide in the past decades. Wind power has been thought not to exert any negative impacts on the environment. However, since a wind farm can alter the local meteorological conditions and increase the surface roughness lengths, it may affect air pollutants passing through and over the wind farm after released from their sources and delivered to the wind farm. In the present study, we simulated the nitrogen dioxide (NO2) air concentration within and around the world's largest wind farm (Jiuquan wind farm in Gansu Province, China) using a coupled meteorology and atmospheric chemistry model WRF-Chem. The results revealed an edge effect, which featured higher NO2 levels at the immediate upwind and border region of the wind farm and lower NO2 concentration within the wind farm and the immediate downwind transition area of the wind farm. A surface roughness length scheme and a wind turbine drag force scheme were employed to parameterize the wind farm in this model investigation. Modeling results show that both parameterization schemes yield higher concentration in the immediate upstream of the wind farm and lower concentration within the wind farm compared to the case without the wind farm. We infer this edge effect and the spatial distribution of air pollutants to be the result of the internal boundary layer induced by the changes in wind speed and turbulence intensity driven by the rotation of the wind turbine rotor blades and the enhancement of surface roughness length over the wind farm. The step change in the roughness length from the smooth to rough surfaces (overshooting) in the upstream of the wind farm decelerates the atmospheric transport of air pollutants, leading to their accumulation. 
The rough to the smooth surface (undershooting) in the downstream of the wind farm accelerates the atmospheric transport of air pollutants, resulting in lower concentration
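The roughness-length scheme described above acts through the neutral logarithmic wind profile: raising z_0 over the farm lowers the near-surface wind speed and so slows pollutant transport. In standard boundary-layer notation (ours, not necessarily the paper's), with friction velocity u_* and von Kármán constant \kappa,

```latex
u(z) \;=\; \frac{u_{*}}{\kappa}\,\ln\!\frac{z}{z_{0}},
\qquad \kappa \approx 0.4 ,
```

so, other things being equal, a larger effective z_0 inside the wind farm gives a smaller u(z) at a given height, consistent with the decelerated transport and upstream accumulation the abstract reports.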

  5. Large-Scale Control of the Probability Distribution Function of Precipitation over the Continental US in Observations and Models, in the Current and Future Climate

    Science.gov (United States)

    Straus, D. M.

    2016-12-01

    The goals of this research are to: (a) identify features of the probability distribution function (pdf) of pentad precipitation over the continental US (CONUS) that are controlled by the configuration of the large-scale fields, including both tails of the pdf, hence droughts and floods, and the overall shape of the pdf, e.g. skewness and kurtosis; (b) estimate the changes in the properties of the pdf controlled by the large-scale in a future climate. We first describe the significant dependence of the observed precipitation pdf conditioned on circulation regimes over CONUS. The regime states, and the number of regimes, are obtained by a method that assures a high degree of significance, and a high degree of pattern correlation between the states in a regime and its average. The regime-conditioned pdfs yield information on time scales from intra-seasonal to inter-annual. We then apply this method to atmospheric simulations run with the EC-Earth version 3 model for historical sea-surface temperatures (SST) and future (RCP8.5 CMIP5 scenario) estimates of SST, at resolutions T255 and T799, to understand what dynamically controlled changes in the precipitation pdf can be expected in a future climate.
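The regime-conditioning step can be illustrated with a toy calculation: cluster the large-scale states, then compare precipitation pdf summaries within each cluster. Everything below (the data, the cluster count, and plain k-means as the clustering method) is an invented stand-in for the significance-tested regime method the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for circulation states: two regimes with different
# precipitation statistics (all numbers invented for illustration).
states = np.concatenate([rng.normal(-1, 0.4, (300, 2)),
                         rng.normal(1, 0.4, (300, 2))])
precip = np.concatenate([rng.gamma(1.5, 2.0, 300),   # drier regime
                         rng.gamma(4.0, 3.0, 300)])  # wetter regime

# Minimal k-means to assign each pentad to a regime.
k = 2
centers = states[rng.choice(len(states), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((states[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([states[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

# Regime-conditioned pdf summaries: the mean and the upper tail
# (a crude flood proxy) differ clearly between the two regimes.
for j in range(k):
    p = precip[labels == j]
    print(j, round(p.mean(), 2), round(np.quantile(p, 0.95), 2))
```

Conditioning the same pdf summaries on regimes from a future-climate simulation is then a direct way to estimate the large-scale-controlled changes the study targets.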

  6. Large-scale distribution and activity of prokaryotes in deep-sea surface sediments of the Mediterranean Sea and the adjacent Atlantic Ocean.

    Directory of Open Access Journals (Sweden)

    Donato Giovannelli

    Full Text Available The deep-sea represents a substantial portion of the biosphere and has a major influence on carbon cycling and global biogeochemistry. Benthic deep-sea prokaryotes have crucial roles in this ecosystem, with their recycling of organic matter from the photic zone. Despite this, little is known about the large-scale distribution of prokaryotes in the surface deep-sea sediments. To assess the influence of environmental and trophic variables on the large-scale distribution of prokaryotes, we investigated the prokaryotic assemblage composition (Bacteria to Archaea and Euryarchaeota to Crenarchaeota ratios) and activity in the surface deep-sea sediments of the Mediterranean Sea and the adjacent North Atlantic Ocean. Prokaryotic abundance and biomass did not vary significantly across the Mediterranean Sea; however, there were depth-related trends in all areas. The abundance of prokaryotes was positively correlated with the sedimentary concentration of protein, an indicator of the quality and bioavailability of organic matter. Moving eastwards, the Bacteria contribution to the total prokaryotes decreased, which appears to be linked to the more oligotrophic conditions of the Eastern Mediterranean basins. Despite the increased importance of Archaea, the contributions of Crenarchaeota Marine Group I to the total pool was relatively constant across the investigated stations, with the exception of Matapan-Vavilov Deep, in which Euryarchaeota Marine Group II dominated. Overall, our data suggest that deeper areas of the Mediterranean Sea share more similar communities with each other than with shallower sites. Freshness and quality of sedimentary organic matter were identified through Generalized Additive Model analysis as the major factors for describing the variation in the prokaryotic community structure and activity in the surface deep-sea sediments. 
Longitude was also important in explaining the observed variability, which suggests that the overlying water

  7. A large-scale distribution of milk-based fortified spreads: evidence for a new approach in regions with high burden of acute malnutrition.

    Directory of Open Access Journals (Sweden)

    Isabelle Defourny

    BACKGROUND: There are 146 million underweight children in the developing world, contributing to up to half of the world's child deaths. In regions with a high burden of malnutrition, the treatment of individual children is limited by available resources. Here, we evaluate a large-scale distribution of a nutritional supplement for the prevention of wasting. METHODS AND FINDINGS: A new ready-to-use food (RUF) was developed as a diet supplement for children under three. The intervention consisted of six monthly distributions of RUF during the 2007 hunger gap in a district of the Maradi region, Niger, for approximately 60,000 children (length: 60-85 cm). At each distribution, all children over 65 cm had their Mid-Upper Arm Circumference (MUAC) recorded. Admission trends for severe wasting (WFH <70% NCHS) in Maradi, 2002-2005, show an increase every year during the hunger gap. In contrast, in 2007, throughout the period of the distribution, the incidence of severe acute malnutrition (MUAC <110 mm) remained at extremely low levels. Comparison of year-over-year admissions to the therapeutic feeding program shows that the 2007 blanket distribution had essentially the same flattening effect on the seasonal rise in admissions as the 2006 individualized treatment of almost 60,000 moderately wasted children. CONCLUSIONS: These results demonstrate the potential of distributions of fortified spreads to reduce the incidence of severe wasting in large populations of children 6-36 months of age. Although further information is needed on the cost-effectiveness of such distributions, these results highlight the importance of re-evaluating current nutritional strategies and international recommendations for high-burden areas of childhood malnutrition.

  8. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out on a simulation model designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for the application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and the barriers to and breakthrough of the technology are briefly discussed.

  9. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this key result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion-set formalism and peak theory, which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
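
    The perturbative bias expansion described above can be sketched in its leading terms as follows (a standard schematic form, not quoted verbatim from the review):

```latex
% Leading terms of the large-scale galaxy bias expansion:
% \delta_g = galaxy overdensity, \delta = matter overdensity,
% s^2 = squared tidal field, \epsilon = stochastic (noise) term.
\delta_g(\mathbf{x}) = b_1\,\delta(\mathbf{x})
  + \frac{b_2}{2}\,\delta^2(\mathbf{x})
  + b_{s^2}\,s^2(\mathbf{x})
  + \epsilon(\mathbf{x}) + \cdots
```

    Here b_1, b_2 and b_{s^2} are bias parameters in the sense of the abstract; on quasi-linear scales the series can be truncated at low order.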

  10. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
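
    In contrast to the combinatorial growth of multi-hypothesis tracking described above, the cheapest baseline is greedy nearest-neighbour data association. A minimal sketch (the function name, gate value, and 2-D point representation are illustrative assumptions, not taken from the report):

```python
import math

def greedy_associate(tracks, detections, gate=5.0):
    """Greedily assign each detection to the nearest unmatched track.

    tracks, detections: lists of (x, y) positions.
    gate: maximum association distance; detections farther than this
          from every track are left unassigned (None).
    Returns a dict mapping detection index -> track index (or None).
    """
    # All track/detection pairs, cheapest first.
    pairs = sorted(
        ((math.dist(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        key=lambda p: p[0],
    )
    used_tracks, assignment = set(), {}
    for dist, ti, di in pairs:
        if dist > gate:
            break  # remaining pairs are even farther apart
        if ti in used_tracks or di in assignment:
            continue
        assignment[di] = ti
        used_tracks.add(ti)
    # Unmatched detections would spawn new tracks in a full tracker.
    for di in range(len(detections)):
        assignment.setdefault(di, None)
    return assignment
```

    Unlike a multi-hypothesis tracker, this runs in O(n log n) per frame but commits irrevocably to each match, which is exactly the trade-off the report's evaluation weighs.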

  11. The large-scale distribution of ammonia oxidizers in paddy soils is driven by soil pH, geographic distance and climatic factors

    Directory of Open Access Journals (Sweden)

    Hangwei Hu

    2015-09-01

    Paddy soils are distributed widely from temperate to tropical regions, and are characterized by intensive nitrogen fertilization practices in China. Mounting evidence has confirmed the functional importance of ammonia-oxidizing archaea (AOA) and bacteria (AOB) in soil nitrification, but little is known about their biogeographic distribution patterns in paddy ecosystems. Here, we used barcoded pyrosequencing to characterize the effects of climatic, geochemical and spatial factors on the distribution of ammonia oxidizers from 11 representative rice-growing regions (75-1945 km apart) of China. Potential nitrification rates varied by more than three orders of magnitude, and were significantly correlated with the abundances of AOA and AOB. The community composition of ammonia oxidizers was affected by multiple factors, but changes in the relative abundances of the major lineages could be best predicted by soil pH. The alpha diversity of AOA and AOB displayed contrasting trends over the gradients of latitude and atmospheric temperature, indicating a possible niche separation between AOA and AOB along latitude. The Bray-Curtis dissimilarities in ammonia-oxidizing community structure increased significantly with increasing geographical distance, indicating that more geographically distant paddy fields tend to harbor more dissimilar ammonia oxidizers. Variation partitioning analysis revealed that spatial, geochemical and climatic factors could jointly explain the majority of the variation in the data, and were important drivers defining the ecological niches of AOA and AOB. Our findings suggest that both AOA and AOB are of functional importance in paddy soil nitrification, and that ammonia oxidizers in paddy ecosystems exhibit large-scale biogeographic patterns shaped by soil pH, geographic distance and climatic factors.
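
    The Bray-Curtis dissimilarity used above to compare ammonia-oxidizing communities has a simple closed form: the summed absolute abundance differences divided by the total abundance. A minimal sketch (function name and example vectors are illustrative):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0.0 means identical composition; 1.0 means no shared taxa.
    x and y must be equal-length sequences of non-negative counts.
    """
    numerator = sum(abs(a - b) for a, b in zip(x, y))
    denominator = sum(a + b for a, b in zip(x, y))
    return numerator / denominator if denominator else 0.0
```

    Computing this for every pair of sites yields the dissimilarity matrix whose increase with geographic distance is reported in the abstract.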

  12. Large-scale microfluidics providing high-resolution and high-throughput screening of Caenorhabditis elegans poly-glutamine aggregation model

    Science.gov (United States)

    Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela

    2016-10-01

    Next-generation drug screening could benefit greatly from in vivo studies using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or at high throughput with low resolution. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations at micron resolution in 16 min, using a large-scale chip. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.

  13. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    Science.gov (United States)

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts, we analyzed large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification (RFID) and barcodes. The subject hospital adopted a closed-loop medication administration system in which all medication administrations in the general wards were automatically recorded in real time using RFID, barcodes, and hand-held point-of-care devices. We analyzed MAE alert logs recorded during the full year of 2012. We evaluated risk factors for MAE alerts, including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices, by logistic regression analysis. A total of 2,874,539 medication dose records from 30,232 patients (882.6 patient-years) were included in 2012. We identified 35,082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at a non-standard time [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency orders (OR 1.527, 95% CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95% CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.
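
    The odds ratios reported above come from logistic regression; for a single binary risk factor, exp(coefficient) reduces to the familiar cross-product ratio of a 2x2 table. A toy illustration with hypothetical counts (not the study's data):

```python
import math

def odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Odds ratio for a binary risk factor from a 2x2 table.

    For a single binary predictor this equals exp(beta) from a
    logistic regression on that predictor alone.
    """
    a = exposed_events                       # exposed, event
    b = exposed_total - exposed_events       # exposed, no event
    c = unexposed_events                     # unexposed, event
    d = unexposed_total - unexposed_events   # unexposed, no event
    return (a * d) / (b * c)

def or_confidence_interval(a, b, c, d, z=1.96):
    """Approximate 95% CI on the odds ratio (Woolf's log method)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)
```

    The study's multivariable ORs additionally adjust for the other covariates, which this single-factor sketch does not.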

  14. Distribution of persistent organic pollutants, polycyclic aromatic hydrocarbons and trace elements in soil and vegetation following a large scale landfill fire in northern Greece.

    Science.gov (United States)

    Chrysikou, Loukia; Gemenetzis, Panagiotis; Kouras, Athanasios; Manoli, Evangelia; Terzi, Eleni; Samara, Constantini

    2008-02-01

    Polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs), including hexachlorocyclohexanes (HCHs) and DDTs, as well as trace elements, were determined in soil and vegetation samples collected from the area surrounding the landfill "Tagarades", the biggest in northern Greece, following a large-scale fire involving approximately 50,000 tons of municipal waste. High concentrations of total PAHs, PCBs and heavy metals were found inside the landfill (1475 microg kg(-1) dw, 399 microg kg(-1) dw and 29.8 mg kg(-1) dw, respectively), whereas concentrations in the surrounding soils were far lower, ranging between 11.2-28.1 microg kg(-1) dw for PAHs, 4.02-11.2 microg kg(-1) dw for PCBs and 575-1207 mg kg(-1) dw for heavy metals. The distributions of HCHs and DDTs were quite different, since certain soils exhibited concentrations equal to or higher than those in the landfill. In vegetation, the concentrations of PAHs, PCBs, HCHs and DDTs ranged from 14.1-34.7, 3.64-25.9, 1.41-32.1 and 0.61-4.03 microg kg(-1) dw, respectively, while those of heavy metals ranged from 81 to 159 mg kg(-1) dw. The results of the study indicated soil and vegetation pollution levels in the surroundings of the landfill comparable to those reported for other Greek locations. The impact of the landfill fire was not evident, partially due to the presence of recent and past inputs from other activities (agriculture, vehicular transport, earlier landfill fires).

  15. Extracting Useful Semantic Information from Large Scale Corpora of Text

    Science.gov (United States)

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
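
    One common combination of the collocation extraction and weighting functions alluded to above is pointwise mutual information (PMI) over windowed co-occurrence counts. A minimal sketch (the window size, tokenization, and function name are illustrative assumptions):

```python
import math
from collections import Counter

def pmi_scores(tokens, window=2):
    """Pointwise mutual information for co-occurring word pairs.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), with probabilities
    estimated from counts within a forward sliding window.
    Returns a dict mapping sorted (word_a, word_b) pairs to PMI.
    """
    word_counts = Counter(tokens)
    pair_counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            pair_counts[tuple(sorted((w, tokens[j])))] += 1
    n_words = len(tokens)
    n_pairs = sum(pair_counts.values())
    return {
        pair: math.log2((count / n_pairs) /
                        ((word_counts[pair[0]] / n_words) *
                         (word_counts[pair[1]] / n_words)))
        for pair, count in pair_counts.items()
    }
```

    High-PMI pairs surface collocations that occur together far more often than their individual frequencies would predict, which is the kind of distributional signal such weighting functions are meant to amplify.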

  16. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang

    2014-01-01

    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.

  17. Taurus II Stage Test Simulations: Using Large-Scale CFD Simulations to Provide Critical Insight into Plume Induced Environments During Design

    Science.gov (United States)

    Struzenberg, L. L.; West, J. S.

    2011-01-01

    This paper describes the use of targeted Loci/CHEM CFD simulations to evaluate the effects of a dual-engine first-stage hot-fire test on an evolving integrated launch pad/test article design. This effort was undertaken as part of the NESC Independent Assessment of the Taurus II Stage Test Series. The underlying conceptual model included development of a series of computational models and simulations to analyze the plume-induced environments on the pad, facility structures and test article. A pathfinder simulation was first developed, capable of providing quick-turnaround evaluation of plume impingement pressures on the flame deflector. Results from this simulation were available in time to feed an ongoing structural assessment of the deflector, and the resulting recommendation was incorporated into the construction schedule for the new launch stand under construction at Wallops Flight Facility. A series of Reynolds-Averaged Navier-Stokes (RANS) quasi-steady simulations representative of key elements of the test profile was performed to identify potential concerns with the test configuration and test profile. As required, unsteady hybrid RANS/LES simulations were performed to provide additional insight into critical aspects of the test sequence. Modifications to the test-specific hardware, to the thermal protection of facility structures, and to the planned hot-fire test profile were implemented based on these simulation results.

  18. Large-Scale Evolutionary Analysis of Genes and Supergene Clusters from Terpenoid Modular Pathways Provides Insights into Metabolic Diversification in Flowering Plants

    Science.gov (United States)

    Hofberger, Johannes A.; Ramirez, Aldana M.; van den Bergh, Erik; Zhu, Xinguang; Bouwmeester, Harro J.; Schuurink, Robert C.; Schranz, M. Eric

    2015-01-01

    An important component of plant evolution is the plethora of pathways producing more than 200,000 biochemically diverse specialized metabolites with pharmacological, nutritional and ecological significance. To unravel the dynamics underlying metabolic diversification, it is critical to determine lineage-specific gene family expansion in a phylogenomics framework. However, robust functional annotation is often only available for core enzymes catalyzing committed reaction steps within a few model systems. In a genome informatics approach, we extracted information from early-draft gene-space assemblies and non-redundant transcriptomes to identify protein families involved in isoprenoid biosynthesis. Isoprenoids comprise terpenoids with various roles in plant-environment interaction, such as pollinator attraction or pathogen defense. Combining lines of evidence provided by synteny, sequence homology and hidden Markov modelling, we screened 17 genomes, including 12 major crops, and found evidence for 1,904 proteins associated with terpenoid biosynthesis. Our terpenoid gene set contains evidence for 840 core terpene synthases and 338 triterpene-specific synthases. We further identified 190 prenyltransferases and 39 isopentenyl-diphosphate isomerases, as well as 278 and 219 proteins involved in the mevalonate and methylerythritol pathways, respectively. Assessing the impact of gene and genome duplication on lineage-specific terpenoid pathway expansion, we illustrate key events underlying terpenoid metabolic diversification within 250 million years of flowering plant radiation. By quantifying Angiosperm-wide versatility and phylogenetic relationships of pleiotropic gene families in terpenoid modular pathways, our analysis offers significant insight into the evolutionary dynamics underlying diversification of plant secondary metabolism. Furthermore, our data provide a blueprint for future efforts to identify and more rapidly clone terpenoid biosynthetic genes from any plant species.

  20. Large-Scale Prediction of Seagrass Distribution Integrating Landscape Metrics and Environmental Factors: The Case of Cymodocea nodosa (Mediterranean–Atlantic)

    KAUST Repository

    Chefaoui, Rosa M.

    2015-05-05

    Understanding the factors that affect seagrass meadows across their entire range of distribution is challenging yet important for their conservation. Here, we predict the realized and potential distribution of the species Cymodocea nodosa by modelling its environmental niche in the Mediterranean and adjacent Atlantic coastlines. We use a combination of environmental variables and landscape metrics in a suite of predictive algorithms to examine the niche and find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SSTs from 5.8 °C to 26.4 °C and salinities ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite carrying weight in the models, was less clear in terms of the ranges that confine the distribution of the species. Landscape metrics able to capture variation in the coastline significantly enhanced the accuracy of the models, despite the limitations imposed by the scale of the study. We found potentially suitable areas not occupied by the seagrass mainly in coastal regions of North Africa and the Adriatic coast of Italy. The present study describes the realized and potential distribution of a seagrass species, providing the first global model of the factors that may shape the environmental niche of C. nodosa throughout its range. We identified the variables constraining its distribution, as well as thresholds delineating its environmental niche. Landscape metrics showed promising prospects for the prediction of coastal species dependent on the shape of the coast. By contrasting predictive approaches, we defined the variables affecting the distributional areas that seem unsuitable for C. nodosa as well as those suitable habitats not
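
    The environmental ranges reported above can be read as a simple habitat envelope. A toy check against those published thresholds (a real species distribution model combines several predictive algorithms, not a box test; the dict layout is an illustrative assumption):

```python
# Environmental envelope reported in the abstract for C. nodosa.
SUITABLE_RANGES = {
    "sst": (5.8, 26.4),         # sea surface temperature, deg C
    "salinity": (17.5, 39.3),
    "wave_height": (0.0, 2.5),  # mean winter wave height, m
}

def within_envelope(site):
    """True if every variable of `site` falls inside the reported range."""
    return all(lo <= site[key] <= hi
               for key, (lo, hi) in SUITABLE_RANGES.items())
```

    Such an envelope marks candidate habitat only; the abstract's point is that adding landscape metrics on coastline shape improves on purely environmental predictors.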

  1. Large-Scale Sequence Comparison.

    Science.gov (United States)

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. Various methods are available for comparing sequences, alignment being first and foremost, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best-known tools either perform global alignment or generate local alignments between two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
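
    The global alignment mentioned above is classically computed with the Needleman-Wunsch dynamic program. A minimal score-only sketch; the match/mismatch/gap values here are illustrative stand-ins for a PAM or BLOSUM scoring matrix:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via dynamic programming.

    score[i][j] holds the best score aligning a[:i] with b[:j];
    each cell takes the max of a diagonal (substitution) move and
    two gap moves.
    """
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        score[i][0] = i * gap          # align a[:i] against gaps
    for j in range(cols):
        score[0][j] = j * gap          # align b[:j] against gaps
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    return score[rows - 1][cols - 1]
```

    Local alignment (the Smith-Waterman variant used by BLAST-style tools as a seed-and-extend heuristic) differs mainly in clamping cells at zero and taking the matrix maximum.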

  2. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach to extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2-terabyte data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6-hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare them to the EOFs computed on the surface temperature field alone. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that lie below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time scale than the ocean, we expect the results to demonstrate an even greater advantage of computing 3D EOFs in lieu of 2D EOFs.
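
    At single-machine scale, the EOF computation described above reduces to an SVD of the mean-removed (time x space) anomaly matrix; the distributed Spark implementation computes the same decomposition on partitioned data. A minimal NumPy sketch (function name and synthetic data are illustrative, and the latitude/depth weighting the abstract mentions is omitted):

```python
import numpy as np

def eofs(data, n_modes=2):
    """Compute EOFs of a (time, space) data matrix via SVD.

    Returns (modes, explained_variance_fraction): each row of `modes`
    is one spatial EOF pattern, and the variance fractions say how
    much of the total anomaly variance each mode captures.
    """
    anomalies = data - data.mean(axis=0)            # remove time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance = s**2 / np.sum(s**2)                  # fraction per mode
    return vt[:n_modes], variance[:n_modes]
```

    For a multi-TB 3D field the anomaly matrix no longer fits in memory, which is what motivates computing the decomposition distributively over row blocks in Spark.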

  3. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large-scale scientific infrastructures, held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  4. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data at a high cadence, it is challenging to search for short-timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real time, and design a distributed shared-nothing database to manage the massive amount of archived data, which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop the HCHDLP based on the column-store DBMS MonetDB, taking advantage of MonetDB's high performance on massive data processing. To realize the real-time functionality of the HCHDLP, we optimize the pipeline in its source-association function, including both time and space complexity, from outside the database (SQL semantics) and inside it (RANGE-JOIN implementation), as well as in its strategy for building complex light curves. The optimized source-association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time-partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture achieves both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance toward a solution for GWAC in real-time data processing and management of massive data.
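
    The source-association step optimized above (as a RANGE-JOIN inside the database) amounts to matching each newly extracted source to catalog positions within a small radius. A minimal in-memory sketch of the same idea, narrowing candidates by one sorted coordinate before the exact distance test (names and tolerance are illustrative assumptions, and a real pipeline works in spherical coordinates):

```python
import bisect
import math

def associate(catalog, detections, radius=0.001):
    """Match each detection to the nearest catalog source within `radius`.

    catalog, detections: lists of (ra, dec) in degrees. Candidates are
    first narrowed to an RA band by binary search (the in-memory analog
    of a database range join), then checked by true distance.
    Returns a dict mapping detection index -> catalog index (or None).
    """
    order = sorted(range(len(catalog)), key=lambda i: catalog[i][0])
    ras = [catalog[i][0] for i in order]
    matches = {}
    for di, (ra, dec) in enumerate(detections):
        lo = bisect.bisect_left(ras, ra - radius)
        hi = bisect.bisect_right(ras, ra + radius)
        best, best_dist = None, radius
        for k in range(lo, hi):
            ci = order[k]
            d = math.dist((ra, dec), catalog[ci])
            if d <= best_dist:
                best, best_dist = ci, d
        matches[di] = best
    return matches
```

    Doing this as a set-oriented join inside the column store, rather than one lookup per source, is what makes the three-orders-of-magnitude speedup plausible at GWAC's cadence.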

  5. Factors influencing the large-scale distribution of Hg° in the Mexico City area and over the North Pacific

    Directory of Open Access Journals (Sweden)

    D. Blake

    2008-04-01

    Gas-phase elemental mercury (Hg°) was measured aboard the NASA DC-8 aircraft during the Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in spring 2006. Flights were conducted around Mexico City and on two subsequent deployments over the North Pacific based out of Honolulu, Hawaii and Anchorage, Alaska. Data obtained from 0.15-12 km altitude showed that Hg° exhibited a relatively constant vertical profile centered around 100 ppqv. Highly concentrated pollution plumes emanating from the Mexico City urban agglomeration revealed that mixing ratios of Hg° as large as 500 ppqv were related to combustion tracers such as CO, but not to SO2, which is presumably released locally from coal burning, refineries, and volcanoes. Our analysis of Mexico City plumes indicated that widespread multi-source urban/industrial emissions may have a more important influence on Hg° than specific point sources. Over the Pacific, correlations with CO, CO2, CH4, and C2Cl4 were diffuse overall, but recognizable on flights out of Anchorage and Honolulu. In distinct plumes originating from the Asian continent, the Hg°-CO relationship yielded an average value of ~0.56 ppqv/ppbv, in good agreement with previous findings. A prominent feature of the INTEX-B dataset was frequent total depletion of Hg° in the upper troposphere when stratospherically influenced air was encountered. Ozone data obtained with the differential absorption lidar (DIAL) showed that the stratospheric impact on the tropospheric column was a common and pervasive feature on all flights out of Honolulu and Anchorage. We propose that this is likely a major factor driving large-scale seasonality in Hg° mixing ratios, especially at mid-latitudes, and an important process that should be incorporated into global chemical transport models.
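
    The Hg°-CO enhancement ratio quoted above (~0.56 ppqv/ppbv) is the least-squares slope of Hg° against CO within a plume. A minimal sketch with synthetic values constructed to have that slope (hypothetical numbers, not the INTEX-B measurements):

```python
def enhancement_ratio(co, hg):
    """Ordinary least-squares slope of Hg° (ppqv) against CO (ppbv).

    slope = sum((x - x_mean)(y - y_mean)) / sum((x - x_mean)^2)
    """
    n = len(co)
    mean_co = sum(co) / n
    mean_hg = sum(hg) / n
    sxx = sum((x - mean_co) ** 2 for x in co)
    sxy = sum((x - mean_co) * (y - mean_hg) for x, y in zip(co, hg))
    return sxy / sxx
```

    In practice the ratio is computed only within identified plume segments, since background air dilutes the correlation.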

  6. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  7. LARGE SCALE IMAGE PROCESSING IN REAL-TIME ENVIRONMENTS WITH KAFKA

    OpenAIRE

    Yoon-Ki Kim; Chang-Sung Jeong

    2017-01-01

    Recently, the real-time image data being generated has been increasing not only in resolution but also in volume. This large-scale imagery originates from large numbers of camera channels. GPUs can be used for high-speed processing of images, but large-scale image processing cannot be done efficiently on a single GPU. In this paper, we provide a new method for constructing a distributed environment using the open-source Apache Kafka for real-time processing of large-scale images. This m...
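
    Kafka scales a stream like this by hashing each message key to a topic partition, so all frames from one camera channel are routed to the same consumer. The stand-in below is a broker-free sketch of that key-to-partition routing (an assumption about the design, not code from the paper):

```python
# Conceptual sketch (assumption, not the paper's code): route frames to
# workers the way Kafka assigns keyed messages to partitions, so each
# camera channel's frames land on exactly one worker, in order.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a camera-channel key to a partition index."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

NUM_PARTITIONS = 4
frames = [(f"cam-{i % 8}", i) for i in range(100)]  # (channel, frame id)
workers = {p: [] for p in range(NUM_PARTITIONS)}
for channel, frame_id in frames:
    workers[partition_for(channel, NUM_PARTITIONS)].append((channel, frame_id))
# Every channel now appears in exactly one worker's queue.
```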

  8. LARGE SCALE DISTRIBUTION OF ULTRA HIGH ENERGY COSMIC RAYS DETECTED AT THE PIERRE AUGER OBSERVATORY WITH ZENITH ANGLES UP TO 80 degrees

    NARCIS (Netherlands)

    Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Batista, R. Alves; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Baeuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S. G.; Blanco, M.; Bleve, C.; Bluemer, H.; Bohacova, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceicao, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Diaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Hasankiadeh, Q. Dorosti; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipcic, A.; Fox, B. D.; Fratu, O.; Freire, M. 
M.; Froehlich, U.; Fuchs, B.; Fujii, T.; Gaior, R.; Garcia, B.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gomez Berisso, M.; Gomez Vitale, P. F.; Goncalves, P.; Gonzalez, J. G.; Gonzalez, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Horandel, J. R.; Horvath, P.; Hrabovsky, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kaeaepae, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kegl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kroemer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leao, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopez, R.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martinez Bravo, O.; Martraire, D.; Masias Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Micanovic, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Ragaigne, D. 
Monnier; Montanet, F.; Morello, C.; Mostafa, M.; Moura, C. A.; Muller, M. A.; Mueller, G.; Mueller, S.; Muenchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P. H.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nozka, L.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Fernandez, G. Rodriguez; Rodriguez Rojo, J.; Rodriguez-Frias, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sanchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovanek, P.; Schroeder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Smialkowski, A.; Smida, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijaervi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. 
J.; Toma, G.; Tomankova, L.; Tome, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Ulrich, R.; Unger, M.; Urban, M.; Valdes Galicia, J. F.; Valino, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cardenas, B.; Varner, G.; Vazquez, J. R.; Vazquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villasenor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.

    2015-01-01

    We present the results of an analysis of the large angular scale distribution of the arrival directions of cosmic rays with energy above 4 EeV detected at the Pierre Auger Observatory including for the first time events with zenith angle between 60 degrees and 80 degrees. We perform two Rayleigh

  9. Large Scale Distribution of Ultra High Energy Cosmic Rays Detected at the Pierre Auger Observatory with Zenith Angles up to 80°

    NARCIS (Netherlands)

    Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S. G.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D’Olivo, J. C.; Dorofeev, A.; Dorosti Hasankiadeh, Q.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Freire, M. 
M.; Fröhlich, U.; Fuchs, B.; Fujii, T.; Gaior, R.; García, B.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; González, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. 
A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Müller, S.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P. H.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pȩkala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. 
A.; Tapia, A.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.

    2015-01-01

    We present the results of an analysis of the large angular scale distribution of the arrival directions of cosmic rays with energy above 4 EeV detected at the Pierre Auger Observatory including for the first time events with zenith angle between 60° and 80°. We perform two Rayleigh analyses, one in

  10. Skewness of citation impact data and covariates of citation distributions: A large-scale empirical analysis based on Web of Science data

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.

    Using percentile shares, one can visualize and analyze the skewness in bibliometric data across disciplines and over time. The resulting figures can be intuitively interpreted and are more suitable for detailed analysis of the effects of independent and control variables on distributions than

  11. Modeling of Carbon Tetrachloride Flow and Transport in the Subsurface of the 200 West Disposal Sites: Large-Scale Model Configuration and Prediction of Future Carbon Tetrachloride Distribution Beneath the 216-Z-9 Disposal Site

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Mart; Thorne, Paul D.; Zhang, Z. F.; Last, George V.; Truex, Michael J.

    2008-12-17

    Three-dimensional simulations considered migration of dense, nonaqueous phase liquid (DNAPL) consisting of CT and co-disposed organics in the subsurface as a function of the properties and distribution of subsurface sediments and of the properties and disposal history of the waste. Simulations of CT migration were conducted using the Water-Oil-Air mode of the Subsurface Transport Over Multiple Phases (STOMP) simulator. A large-scale model was configured to simulate CT and waste-water discharge from the major CT and waste-water disposal sites.

  12. Large Scale Distribution of Ultra High Energy Cosmic Rays Detected at the Pierre Auger Observatory with Zenith Angles up to 80°

    Science.gov (United States)

    Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S. G.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Dorosti Hasankiadeh, Q.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Freire, M. 
M.; Fröhlich, U.; Fuchs, B.; Fujii, T.; Gaior, R.; García, B.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; González, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. 
A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Müller, S.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P. H.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pȩkala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. 
A.; Tapia, A.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.

    2015-04-01

    We present the results of an analysis of the large angular scale distribution of the arrival directions of cosmic rays with energy above 4 EeV detected at the Pierre Auger Observatory including for the first time events with zenith angle between 60° and 80°. We perform two Rayleigh analyses, one in the right ascension and one in the azimuth angle distributions, that are sensitive to modulations in right ascension and declination, respectively. The largest departure from isotropy appears in the E > 8 EeV energy bin, with an amplitude for the first harmonic in right ascension r1^α = (4.4 ± 1.0) × 10^-2, which has a chance probability P(≥ r1^α) = 6.4 × 10^-5, reinforcing the hint previously reported with vertical events alone.
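
    The first-harmonic Rayleigh analysis named in the abstract has a standard form: the amplitude is r1 = (2/N)·sqrt((Σ cos α)² + (Σ sin α)²), and the chance probability of measuring an equal or larger amplitude from an isotropic sky is P(≥ r1) = exp(−N·r1²/4). The sketch below illustrates that textbook formalism, not the Auger pipeline:

```python
# Minimal sketch of the standard first-harmonic Rayleigh analysis used in
# cosmic-ray anisotropy searches (an illustration, not the Auger pipeline).
import math

def rayleigh_first_harmonic(alphas):
    """Return (amplitude r1, chance probability) for angles in radians."""
    n = len(alphas)
    a = sum(math.cos(x) for x in alphas)
    b = sum(math.sin(x) for x in alphas)
    r1 = 2.0 / n * math.hypot(a, b)
    # Probability of an equal or larger r1 from an isotropic distribution:
    return r1, math.exp(-n * r1 ** 2 / 4.0)

# Evenly spread (isotropic) arrival directions give a near-zero amplitude,
# while a fully aligned sample saturates the amplitude at 2.
iso = [2.0 * math.pi * i / 1000 for i in range(1000)]
print(rayleigh_first_harmonic(iso)[0] < 1e-9)  # True
print(rayleigh_first_harmonic([0.0] * 10)[0])  # 2.0
```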

  13. Large-scale gene flow in the barnacle Jehlius cirratus and contrasts with other broadly-distributed taxa along the Chilean coast

    Directory of Open Access Journals (Sweden)

    Baoying Guo

    2017-02-01

    We evaluate the population genetic structure of the intertidal barnacle Jehlius cirratus across a broad portion of its geographic distribution using data from the mitochondrial cytochrome oxidase I (COI) gene region. Despite sampling diversity from over 3,000 km of the linear range of this species, only slight regional structure is indicated, with an overall Φ_CT of 0.036 (p < 0.001), yet no support for isolation by distance. While these results suggest greater structure than previous studies of J. cirratus had indicated, the pattern of diversity is still far more subtle than in other similarly-distributed species with similar larval and life history traits. We compare these data and results with recent findings in four other intertidal species that have planktotrophic larvae. There are no clear patterns among these taxa that can be associated with intertidal depth or other known life history traits.

  14. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  15. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  16. 8th International Symposium on Intelligent Distributed Computing & Workshop on Cyber Security and Resilience of Large-Scale Systems & 6th International Workshop on Multi-Agent Systems Technology and Semantics

    CERN Document Server

    Braubach, Lars; Venticinque, Salvatore; Badica, Costin

    2015-01-01

    This book represents the combined peer-reviewed proceedings of the Eighth International Symposium on Intelligent Distributed Computing - IDC'2014, of the Workshop on Cyber Security and Resilience of Large-Scale Systems - WSRL-2014, and of the Sixth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2014. All the events were held in Madrid, Spain, during September 3-5, 2014. The 47 contributions published in this book address several topics related to the theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, knowledge management, big data and ontologies, social networks, swarm intelligence, and videogames, amongst others.

  17. Two distinct mtDNA lineages of the blue crab reveal large-scale population structure in its native Atlantic distribution

    Science.gov (United States)

    Alaniz Rodrigues, Marcos; Dumont, Luiz Felipe Cestari; dos Santos, Cléverson Rannieri Meira; D'Incao, Fernando; Weiss, Steven; Froufe, Elsa

    2017-10-01

    For the first time, a molecular approach was used to evaluate the phylogenetic structure of the disjunct native American distribution of the blue crab Callinectes sapidus. Population structure was investigated by sequencing 648 bp of the cytochrome oxidase subunit 1 (COI) gene, in a total of 138 sequences stemming from individual samples from both the northern and southern hemispheres of the Western Atlantic distribution of the species. A Bayesian approach was used to construct a phylogenetic tree for all samples, and a 95% confidence parsimony network was created to depict the relationship among haplotypes. Results revealed two highly distinct lineages, one containing all samples from the United States and some from Brazil (lineage 1) and the second restricted to Brazil (lineage 2). In addition, gene flow (at least for females) was detected among estuaries at local scales and there is evidence for shared haplotypes in the south. Furthermore, the findings of this investigation support the contemporary introduction of haplotypes that have apparently spread from the south to the north Atlantic.

  18. Large scale structure statistics: Finite volume effects

    Science.gov (United States)

    Colombi, S.; Bouchet, F. R.; Schaeffer, R.

    1994-01-01

    We study finite volume effects on the count probability distribution function P_N(l) and the averaged Q-body correlations ξ̄_Q (2 ≤ Q ≤ 5). These statistics are computed for cubic cells of size l. We use as an example the matter distribution of a cold dark matter (CDM) universe involving approximately 3 × 10^5 particles. The main effect of the finiteness of the sampled volume is to induce an abrupt cut-off on the function P_N(l) at large N. This clear signature makes an analysis of the consequences easy, and one can envisage a correction procedure. Indeed, we demonstrate how an unfair sample can strongly affect the estimates of the functions ξ̄_Q for Q ≥ 3 (and decrease the measured zero of the two-body correlation function). We propose a method to correct for this artifact, or at least to evaluate the corresponding errors. We show that the correlations are systematically underestimated by direct measurements. We find that, once corrected, the statistical properties of the CDM universe appear compatible with the scaling relation S_Q ≡ ξ̄_Q / ξ̄_2^(Q-1) = constant with respect to scale in the non-linear regime; this was not the case with direct measurements. However, we note a deviation from scaling at scales close to the correlation length, probably due to the transition between the highly non-linear regime and the weakly correlated regime, where the functions S_Q also seem to present a plateau. We apply the same procedure to simulations with hot dark matter (HDM) and white noise initial conditions, with similar results. Our method thus provides the first accurate measurement of the normalized skewness, S_3, and the normalized kurtosis, S_4, for three typical models of large-scale structure formation in an expanding universe.
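
    The quantities in this abstract follow from the standard count-in-cells factorial moments: <N(N-1)> = N̄²(1 + ξ̄_2) and <N(N-1)(N-2)> = N̄³(1 + 3ξ̄_2 + ξ̄_3), with normalized skewness S_3 = ξ̄_3/ξ̄_2². The sketch below illustrates those estimators on a toy count list; it is not the paper's finite-volume correction procedure:

```python
# Sketch of the standard count-in-cells estimators (an illustration of the
# quantities in the abstract, not the paper's correction procedure):
#   <N(N-1)>      = Nbar^2 * (1 + xi2)
#   <N(N-1)(N-2)> = Nbar^3 * (1 + 3*xi2 + xi3)
# and the normalized skewness S3 = xi3 / xi2^2.

def count_in_cells_s3(counts):
    """Return (xi2, xi3, S3) from a list of cell counts."""
    n = len(counts)
    nbar = sum(counts) / n
    f2 = sum(c * (c - 1) for c in counts) / n          # <N(N-1)>
    f3 = sum(c * (c - 1) * (c - 2) for c in counts) / n  # <N(N-1)(N-2)>
    xi2 = f2 / nbar ** 2 - 1.0
    xi3 = f3 / nbar ** 3 - 1.0 - 3.0 * xi2
    return xi2, xi3, xi3 / xi2 ** 2

# Toy example with counts of 1 and 3 particles per cell:
xi2, xi3, s3 = count_in_cells_s3([1, 3])
print(xi2, xi3, s3)  # -0.25 0.125 2.0
```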

  19. Large Scale Structure of the Universe

    Science.gov (United States)

    Kaplinghat, Manoj

    2011-03-01

    These notes are based on 4 lectures given at Theoretical Advanced Study Institute in 2009 on the large scale structure of the universe. They provide a pedagogical introduction to the temporal evolution of linear density perturbations in the universe and a discussion of how density perturbations on small scales depend on the particle properties of dark matter. The notes assume the reader is familiar with the concepts and mathematics required to describe isotropic and homogeneous cosmology.

  20. Age and sex specific timing, frequency, and spatial distribution of horseshoe crab spawning in Delaware Bay: Insights from a large-scale radio telemetry array

    Directory of Open Access Journals (Sweden)

    David R. SMITH, Lorne J. BROUSSEAU, Mary T. MANDT, Michael J. MILLARD

    2010-10-01

    To study horseshoe crab Limulus polyphemus spawning behavior and migration over a large spatial extent (>100 km), we arrayed fixed-station radio receivers throughout Delaware Bay and deployed radio transmitters and archival tags on adult horseshoe crabs prior to their spawning season. We tagged and released 160 females and 60 males in 2004 and 217 females in 2005. The array covered approximately 140 km of shoreline. Recapture rates were >70%, with multi-year recaptures. We categorized adult age by carapace wear. Older females tended to spawn earlier in the season and more frequently than young females, but those tendencies were more apparent in 2004, when spawning overall occurred earlier than in 2005, when spawning was delayed possibly due to decreased water temperatures. Timing of initial spawning within a year was correlated with water temperature. After adjusting for day of first spring tide, the day of first spawning was 4 days earlier for every 1 °C rise in mean daily water temperature in May. Seventy-nine percent of spawning occurred during nighttime high tides. Fifty-five percent of spawning occurred within 3 d of a spring tide, which was slightly higher than the 47% expected if spawning was uniformly distributed regardless of tidal cycle. Within the same spawning season, males and females were observed spawning or intertidally resting at more than one beach separated by >5 km. Between years, most (77%) did not return to spawn at the same beach. Probability of stranding was strongly age-dependent for males and females, with older adults experiencing higher stranding rates. Horseshoe crabs staging in the shallow waters east of the channel spawned exclusively along the eastern (NJ) shoreline, but those staging west of the channel spawned throughout the bay. Overall, several insights emerged from the use of radio telemetry, which advances our understanding of horseshoe crab ecology and will be useful in conserving the Delaware Bay horseshoe crab.

  1. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  2. Large scale nanopatterning of graphene

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, P.L., E-mail: neumann@mfa.kfki.hu [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary); Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Tovari, E.; Csonka, S. [Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Kamaras, K. [Research Institute for Solid State Physics and Optics of HAS, Budapest H-1525, P.O. Box 49 (Hungary); Horvath, Z.E.; Biro, L.P. [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary)

    2012-07-01

    Recently, we have shown that the shaping of atomically perfect zig-zag oriented edges can be performed by exploiting the orientation dependent oxidation in graphene, by annealing the samples in inert atmosphere, where the oxygen source is the SiO{sub 2} substrate itself. In the present study, we showed that the large scale patterning of graphene using a conventional lithography technique can be combined with the control of crystallographic orientation and edge shaping. We applied electron beam lithography (EBL) followed by low energy O{sup +}/Ar{sup +} plasma etching for patterning mechanically exfoliated graphene flakes. As AFM imaging of the samples revealed, the controlled oxidation transformed the originally circular holes to polygonal shape with edges parallel with the zig-zag direction, showing the possibility of atomically precise, large area patterning of graphene.

  3. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  4. Very Large Scale Distributed Information Processing Systems

    Science.gov (United States)

    1991-09-27

    Modeling Environment, in Proceedings OOPSLA '89, New Orleans, LA, 1989. [PGHP91] Thomas W. Page Jr., R. Guy, J. Heidemann, G...Data Engineering, February 1986. [PPR83] D. Stott Parker, Jr., Gerald Popek, Gerard Rudisin, Allen Stoughton, Bruce J. Walker, Evelyn Walton, Johanna M

  5. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  6. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  7. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  8. A link between neuroscience and informatics: large-scale modeling of memory processes.

    Science.gov (United States)

    Horwitz, Barry; Smith, Jason F

    2008-04-01

    Utilizing advances in functional neuroimaging and computational neural modeling, neuroscientists have increasingly sought to investigate how distributed networks, composed of functionally defined subregions, combine to produce cognition. Large-scale, biologically realistic neural models, which integrate data from cellular, regional, whole brain, and behavioral sources, delineate specific hypotheses about how these interacting neural populations might carry out high-level cognitive tasks. In this review, we discuss neuroimaging, neural modeling, and the utility of large-scale biologically realistic models using modeling of short-term memory as an example. We present a sketch of the data regarding the neural basis of short-term memory from non-human electrophysiological, computational and neuroimaging perspectives, highlighting the multiple interacting brain regions believed to be involved. Through a review of several efforts, including our own, to combine neural modeling and neuroimaging data, we argue that large scale neural models provide specific advantages in understanding the distributed networks underlying cognition and behavior.

  9. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  10. Large-Scale Residential Demolition

    Science.gov (United States)

    The EPA provides resources for handling residential demolitions or renovations. This includes planning, handling harmful materials, recycling, funding, compliance assistance, good practices and regulations.

  11. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  12. Large scale DNA microsequencing device

    Energy Technology Data Exchange (ETDEWEB)

    Foote, Robert S. (Oak Ridge, TN)

    1999-01-01

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means.

  13. Large scale DNA microsequencing device

    Energy Technology Data Exchange (ETDEWEB)

    Foote, R.S.

    1997-08-26

    A microminiature sequencing apparatus and method provide a means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus consists of a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means. 17 figs.

  14. Large scale DNA microsequencing device

    Energy Technology Data Exchange (ETDEWEB)

    Foote, R.S.

    1999-08-31

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means. 11 figs.

  15. Large scale DNA microsequencing device

    Energy Technology Data Exchange (ETDEWEB)

    Foote, Robert S. (Oak Ridge, TN)

    1997-01-01

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means.

  16. Large-Scale Reform Comes of Age

    Science.gov (United States)

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. Sample-Starved Large Scale Network Analysis

    Science.gov (United States)

    2016-05-05

    Applications to materials science. 2.1 Foundational principles for large-scale inference on structure of covariance: We developed general principles for...concise but accessible format. These principles are applicable to large-scale complex network applications arising in genomics, connectomics, and eco-informatics...available to estimate or detect patterns in the matrix. Subject terms: multivariate dependency structure, multivariate spatio-temporal prediction.

  19. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme, which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high-scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  20. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong

    2015-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of their regular patterns and mechanisms is very challenging and still rare. Self-organized online collaborative activities with a precise record of event timing provide an unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to an increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...
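    The three modules of the model can be simulated directly. The sketch below is our own toy implementation with illustrative parameters (rates, cascade sizes, and the waiting-time exponent are not fitted to the Wikipedia data): Poissonian initiations, power-law-distributed response delays, and a linearly growing editor population generate a stream of update events whose inter-event intervals can then be inspected.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 10_000.0       # observation window (arbitrary units)
lambda0 = 0.01     # per-editor initiation rate, assumed
n_editors = 20     # initial population, assumed
alpha = 1.0        # waiting-time tail exponent, p(tau) ~ tau^(-1-alpha)

# (i) + (iii): Poissonian initiations with a linearly growing population,
# drawn step by step with the current effective rate.
inits, t = [], 0.0
while True:
    rate = lambda0 * n_editors * (1 + t / T)  # population growth
    t += rng.exponential(1 / rate)
    if t > T:
        break
    inits.append(t)

# (ii): each initiation triggers a cascade of responses after
# power-law-distributed waits (Pareto-like, via inverse transform).
responses = []
for t0 in inits:
    for _ in range(rng.poisson(2)):
        tau = (1 - rng.random()) ** (-1 / alpha) - 1
        if t0 + tau < T:
            responses.append(t0 + tau)

events = np.sort(np.array(inits + responses))
intervals = np.diff(events)
print(len(events), float(intervals.mean()))
```

    With a heavy-tailed response module (alpha near 1) the interval distribution mixes the short Poissonian spacings with rare long waits, which is the qualitative origin of the double power-law reported in the abstract.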

  1. Large-scale structure non-Gaussianities with modal methods

    Science.gov (United States)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).

  2. SDI Large-Scale System Technology Study

    National Research Council Canada - National Science Library

    1986-01-01

    .... This coordination is addressed by the Battle Management function. The algorithms and technologies required to support Battle Management are the subject of the SDC Large Scale Systems Technology Study...

  3. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet for the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  4. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  5. Random walk models of large-scale structure

    Indian Academy of Sciences (India)

    Abstract. This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is ...

  6. Logistic service providers and sustainable physical distribution

    Directory of Open Access Journals (Sweden)

    Stef Weijers

    2012-06-01

    Background: Logistic service providers' main concern has been to ensure reliability at a low price (Christopher, 2005). Dutch logistic service providers still have these two aspects at the top of their list, but now must also take a new aspect into account: sustainability. 88% of the investigated logistic service providers have included sustainability in the company's goals. These logistic service providers have developed different strategies to achieve a higher level of sustainability. This paper presents the results of a study into what logistic service providers say they are doing, or intend to do, to improve the sustainability of their transport services. In this way insight is given into the attitude of Dutch logistic service providers towards sustainability and how they intend to translate this into business practice: internal solutions or new methods incorporating external partners. Methods: Various methods of investigation were used, among which the analysis of statements about sustainability on the websites of various companies as well as an Internet questionnaire. The research covered the 50 largest logistics companies operating in the Netherlands and 60 companies that competed for the "Lean and Green" award advertised in the Netherlands. In addition, the Internet survey was answered by 41 companies that belong to the network of our university. Results: The investigation has shown that sustainability is handled by the logistics companies as an integral part of corporate strategy. In contrast, shippers base the choice of logistics services primarily on such classical aspects as reliability or price, and sustainability plays a minor role. Conclusions: In trying to find methods to improve sustainability, Dutch logistics service providers look, in the first place, for solutions that increase efficiency and therefore the cost-reduction potential. Solutions which require the involvement of clients were less often

  7. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (public key infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI facilitates the trust issues that arise in a large-scale healthcare network comprising multi-domain PKIs.
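    The multi-domain trust issue mentioned above boils down to finding a chain of cross-certifications from a relying party's trust anchor to the target CA. The sketch below is our own minimal model (the domain names and topology are hypothetical, not the paper's architecture): CAs are graph nodes, cross-certifications are directed edges, and a breadth-first search recovers the shortest certification path.

```python
from collections import deque

# Hypothetical multi-domain healthcare PKI: each CA lists the CAs it has
# cross-certified. Names are illustrative only.
cross_certs = {
    "RegionalRootCA": ["HospitalCA", "ClinicCA"],
    "HospitalCA": ["WardCA"],
    "ClinicCA": [],
    "WardCA": [],
}

def trust_path(anchor, target, edges):
    """BFS from a trust anchor to a target CA along cross-certifications."""
    queue, seen = deque([[anchor]]), {anchor}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path  # chain of CAs the relying party can validate
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of cross-certifications => untrusted

print(trust_path("RegionalRootCA", "WardCA", cross_certs))
# prints ['RegionalRootCA', 'HospitalCA', 'WardCA']
```

    A real deployment would additionally validate each certificate in the path (signatures, validity periods, revocation status), but the path-discovery step is the part that grows hard as the number of healthcare domains increases.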

  8. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
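    The accounting behind a "people fed" figure of this kind is simple to sketch. All numbers below are illustrative placeholders chosen by us (not the paper's dataset): acquired area times yield gives tonnes of crop, tonnes convert to food calories, and dividing by an annual per-capita calorie requirement gives the supportable population.

```python
# Back-of-envelope "people fed" accounting; every input is an assumption.
area_ha = 40e6               # acquired cropland (ha), illustrative
yield_t_per_ha = 2.5         # closed-gap cereal yield (t/ha/yr), assumed
kcal_per_t = 3.0e6           # food calories per tonne of cereal, assumed
food_fraction = 0.8          # share of harvest going to food, assumed
kcal_per_person_yr = 2500 * 365  # daily requirement x days

kcal_total = area_ha * yield_t_per_ha * kcal_per_t * food_fraction
people_fed = kcal_total / kcal_per_person_yr
print(f"{people_fed/1e6:.0f} million people")
# a few hundred million, comparable in order of magnitude to the paper's range
```

    The sensitivity is linear in every factor, which is why closing the yield gap (raising yield_t_per_ha) moves the estimate from the lower to the upper range quoted in the abstract.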

  9. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
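    The core display-aware idea (making work proportional to what is visible on screen rather than to the full data size) can be reduced to a resolution-selection rule. The sketch below is our own minimal model, not code from the course: given how many texels of the full-resolution data fall inside the view and how many screen pixels they must fill, pick the mip level whose texel count roughly matches the pixel budget.

```python
import math

def lod_level(visible_texels: int, screen_pixels: int, max_level: int) -> int:
    """Pick a mip level so on-screen texels ~ screen pixels (display-aware)."""
    if visible_texels <= screen_pixels:
        return 0  # native resolution already fits the screen budget
    # Each mip level halves both dimensions -> 4x fewer texels per level.
    level = math.ceil(0.5 * math.log2(visible_texels / screen_pixels))
    return min(level, max_level)

# Gigapixel image (2^30 texels) fully visible in a ~1-megapixel viewport:
print(lod_level(2**30, 2**20, max_level=12))  # prints 5
```

    At level 5 the 2^30-texel image is reduced by a factor of 4^5 to about 2^20 texels, i.e. the working set matches the viewport regardless of the full data size, which is exactly the decoupling the course material describes.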

  10. Thermal activation of dislocations in large scale obstacle bypass

    Science.gov (United States)

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique

    2017-08-01

    Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy on these interactions is possible with a framework provided by harmonic transition state theory (HTST) enabling direct access to thermally activated reaction rates using the Arrhenius equation, including rates of dislocation-obstacle bypass processes. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects plays an important role in the activation energies for bypass. A phenomenological model for activation energy stress dependence is shown to describe well the effect of a distribution of activation energies, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
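    The HTST step described in the abstract reduces to the Arrhenius equation once an activation energy is known. A minimal sketch, with a generic 10^13 Hz attempt frequency and an invented 0.5 eV barrier (both placeholders, not values from the paper):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def htst_rate(delta_e_ev, temperature_k, attempt_freq_hz=1e13):
    """Harmonic TST rate via the Arrhenius equation: k = nu * exp(-dE / (kB*T))."""
    return attempt_freq_hz * math.exp(-delta_e_ev / (K_B * temperature_k))

# A hypothetical 0.5 eV bypass barrier at room temperature vs. 600 K:
print(htst_rate(0.5, 300))  # on the order of 1e4 events/s
print(htst_rate(0.5, 600))  # many orders of magnitude faster
```

    The exponential temperature dependence is the point: halving the effective barrier (or doubling the temperature) changes the bypass rate by many orders of magnitude, which is why coarse-graining a distribution of barriers into one effective barrier is delicate.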

  12. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of the technology. The demonstration programme developed from it was judged favourably by the reviewers but was not immediately (1996) accepted for funding. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. By mid-1997 a smaller project had already been approved, which had been applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  12. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources...... on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces...

  13. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively long turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically causes PDES to fail to scale (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
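    The discrete event simulation core that PDES parallelizes can be sketched sequentially: a priority queue of timestamped events, popped in time order, with each handled event free to schedule future ones. The `ping` workload and handler below are invented for illustration; production tools such as ns-3 are far richer:

```python
import heapq

def run_des(initial_events, handler, until=float("inf")):
    """Minimal discrete event simulation loop: pop the earliest event,
    let the handler schedule follow-up events, repeat until the queue
    is empty or the time horizon is exceeded."""
    queue = list(initial_events)  # (time, event) tuples
    heapq.heapify(queue)
    processed = []
    while queue:
        time, event = heapq.heappop(queue)
        if time > until:
            break
        processed.append((time, event))
        for new_time, new_event in handler(time, event):
            heapq.heappush(queue, (new_time, new_event))
    return processed

# Toy workload: each 'ping' schedules one follow-up ping until t = 5.
def handler(t, ev):
    return [(t + 1.0, "ping")] if t + 1.0 <= 5.0 else []

log = run_des([(0.0, "ping")], handler)
print(len(log))  # 6 events, at t = 0..5
```

    PDES partitions this single queue across processors; the difficulty the abstract describes is that mobile, proximity-based connectivity makes events on one partition depend on many others, forcing frequent synchronization.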

  14. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  16. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  17. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising with high wind power penetration. This paper presents a review of the electricity storage technologies relevant for large power systems. The paper also presents an estimation of the economic feasibility of electricity storage using the west Danish power market area as a case.

  18. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  19. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analysed; load simulation of the flange fatigue working condition is carried out with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; finally, fatigue analysis of the top flange is performed with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
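    The Palmgren-Miner rule mentioned in the abstract is simple enough to sketch: damage fractions n_i/N_i from each bin of the rain-flow-counted load spectrum are summed, and failure is predicted when the total reaches 1. The cycle counts and S-N lives below are hypothetical, not values from the paper:

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
    Failure is predicted when D reaches 1."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# Hypothetical load spectrum from rain-flow counting: applied cycles n_i
# against S-N curve life N_i at each stress range bin.
n = [1.0e5, 2.0e4, 5.0e2]
N = [1.0e7, 1.0e6, 1.0e4]
D = miner_damage(n, N)
print(D)  # 0.08 -> well below the failure threshold of 1
```

    Note the few high-stress cycles (the last bin) dominate the damage sum, which is why extracting a realistic load spectrum with rain-flow counting matters.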

  20. Large-scale approaches for glycobiology

    OpenAIRE

    Christopher T. Campbell; Yarema, Kevin J.

    2005-01-01

    Glycosylation, the attachment of carbohydrates to proteins and lipids, influences many biological processes. Despite detailed characterization of the cellular components that carry out glycosylation, a complete picture of a cell's glycoconjugates remains elusive because of the challenges inherent in characterizing complex carbohydrates. This article reviews large-scale techniques for accelerating progress in glycobiology.

  1. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  2. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  3. Management of large-scale technology

    Science.gov (United States)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  4. A large-scale biomass bulk terminal

    NARCIS (Netherlands)

    Wu, M.R.

    2012-01-01

    This research explores the possibility of a large-scale bulk terminal in West Europe dedicated to handle solid and liquid biomass materials. Various issues regarding the conceptual design of such a terminal have been investigated and demonstrated in this research: the potential biomass materials

  5. Code generation for large scale applications

    NARCIS (Netherlands)

    Mark, Paul Johannes van der

    2006-01-01

    Efficient execution of large-scale application codes is a primary requirement in many cases. High efficiency can only be achieved by utilizing architecture-independent efficient algorithms and exploiting specific architecture-dependent characteristics of a given computer architecture. However,

  6. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  7. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  8. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    The application of Topological Routing to large-scale networks is discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  9. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    The application of Topological Routing to large-scale networks is discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  10. Editorial: Bioprocess beyond the large scale production.

    Science.gov (United States)

    Koo, Yoon-Mo; Srinophakun, Penjit

    2015-12-01

    Advances in bioprocess engineering continue to step forward in its own field of large scale production in mid- and down-stream processes of biotechnology. Some of the recent researches are shown in this AFOB (Asian Federation of Biotechnology) Special issue of Advances in Bioprocess Engineering. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square...
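    A minimal version of such a width/height analysis, with made-up dimensions standing in for the image databases described above:

```python
import statistics

def proportion_stats(dimensions):
    """Width/height proportions for a collection of images: returns the
    median ratio and the number of (near-)square items."""
    ratios = [w / h for w, h in dimensions]
    near_square = sum(1 for r in ratios if abs(r - 1.0) < 0.01)
    return statistics.median(ratios), near_square

# Hypothetical (width, height) pairs standing in for a painting database.
dims = [(400, 300), (500, 400), (300, 400), (610, 377)]
median_ratio, near_square = proportion_stats(dims)
print(round(median_ratio, 3), near_square)
```

    Run over hundreds of thousands of items, the median and the count of exactly square frames are the kind of summary statistics the study compares against proposed constants such as the golden ratio.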

  12. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and equally important, the work was focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first

  13. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: the evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analysis of the similarities and differences between the evaluations of the two reforms and discuss their pros and cons.

  14. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Full Text Available Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is ‘important’? How can multiple researchers explore a new graph dataset collectively and “help” each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through a case study on real, large-scale networks.
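    As a toy illustration of the kind of statistic Perseus-Hub visualizes, a degree distribution can be computed from an edge list in a few lines. The edge list here is invented for the example; Perseus-Hub itself computes such statistics on a Spark cluster and overlays user annotations:

```python
from collections import Counter

def degree_distribution(edges):
    """Degree histogram of an undirected graph: {degree: number of nodes}."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())

# A small star-plus-ring graph: node 0 is a hub connected to nodes 1..5,
# and nodes 1..5 form a path. Hub-like spikes in this histogram are the
# sort of pattern users would mark for each other.
edges = [(0, i) for i in range(1, 6)] + [(i, i + 1) for i in range(1, 5)]
dist = degree_distribution(edges)
print(dict(dist))
```

    In a real power-law graph the histogram spans many orders of magnitude, so it is usually plotted on log-log axes, where anomalous spikes (e.g., from bot accounts with identical degree) stand out.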

  15. Large-scale implementation of disease control programmes: a cost-effectiveness analysis of long-lasting insecticide-treated bed net distribution channels in a malaria-endemic area of western Kenya-a study protocol.

    Science.gov (United States)

    Gama, Elvis; Were, Vincent; Ouma, Peter; Desai, Meghna; Niessen, Louis; Buff, Ann M; Kariuki, Simon

    2016-11-21

    Historically, Kenya has used various distribution models for long-lasting insecticide-treated bed nets (LLINs) with variable results in population coverage. The models presently vary widely in scale, target population and strategy. There is limited information to determine the best combination of distribution models, which will lead to sustained high coverage and are operationally efficient and cost-effective. Standardised cost information is needed in combination with programme effectiveness estimates to judge the efficiency of LLIN distribution models and options for improvement in implementing malaria control programmes. The study aims to address the information gap, estimating distribution cost and the effectiveness of different LLIN distribution models, and comparing them in an economic evaluation. Evaluation of cost and coverage will be determined for 5 different distribution models in Busia County, an area of perennial malaria transmission in western Kenya. Cost data will be collected retrospectively from health facilities, the Ministry of Health, donors and distributors. Programme-effectiveness data, defined as the number of people with access to an LLIN per 1000 population, will be collected through triangulation of data from a nationally representative, cross-sectional malaria survey, a cross-sectional survey administered to a subsample of beneficiaries in Busia County and LLIN distributors' records. Descriptive statistics and regression analysis will be used for the evaluation. A cost-effectiveness analysis will be performed from a health-systems perspective, and cost-effectiveness ratios will be calculated using bootstrapping techniques. The study has been evaluated and approved by Kenya Medical Research Institute, Scientific and Ethical Review Unit (SERU number 2997). All participants will provide written informed consent. The findings of this economic evaluation will be disseminated through peer-reviewed publications. 
Published by the BMJ Publishing
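    The bootstrapped cost-effectiveness ratios described in the protocol can be sketched as percentile intervals over resampled (cost, effect) pairs. All numbers below are hypothetical placeholders, not study data:

```python
import random

def bootstrap_cer(costs, effects, n_boot=2000, seed=42):
    """Bootstrap a cost-effectiveness ratio (mean cost per unit of effect):
    resample matched (cost, effect) pairs with replacement and recompute
    the ratio of means each time; return a 95% percentile interval."""
    rng = random.Random(seed)
    pairs = list(zip(costs, effects))
    ratios = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        mean_cost = sum(p[0] for p in sample) / len(sample)
        mean_effect = sum(p[1] for p in sample) / len(sample)
        ratios.append(mean_cost / mean_effect)
    ratios.sort()
    return ratios[int(0.025 * n_boot)], ratios[int(0.975 * n_boot)]

# Hypothetical per-distribution-model data: cost in USD, effect = people
# with LLIN access per 1000 population.
costs = [5200, 4800, 5500, 5100, 4900]
effects = [620, 580, 640, 610, 590]
low, high = bootstrap_cer(costs, effects)
print(low < high)  # True: a 95% percentile interval for the ratio
```

    Bootstrapping is used because a ratio of two sampled quantities has no simple closed-form confidence interval; the percentile interval makes the uncertainty in cost per unit of coverage explicit when comparing distribution models.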

  16. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    Science.gov (United States)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
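    The twist-and-linkage measure discussed above has a compact standard definition: writing the magnetic field in terms of a vector potential, the magnetic helicity of a volume V is

```latex
H \;=\; \int_V \mathbf{A} \cdot \mathbf{B} \,\mathrm{d}V,
\qquad \mathbf{B} = \nabla \times \mathbf{A},
```

    which is gauge-invariant when no field lines cross the boundary of V. Property (1) in the abstract reflects that H decays only resistively, far more slowly than magnetic energy in a highly conducting plasma; property (2) corresponds to the relaxed, minimum-energy state at fixed H being a helical field on the largest available scale.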

  17. Large Scale Evolution of Convolutional Neural Networks Using Volunteer Computing

    OpenAIRE

    Desell, Travis

    2017-01-01

    This work presents a new algorithm called evolutionary exploration of augmenting convolutional topologies (EXACT), which is capable of evolving the structure of convolutional neural networks (CNNs). EXACT is in part modeled after the neuroevolution of augmenting topologies (NEAT) algorithm, with notable exceptions to allow it to scale to large scale distributed computing environments and evolve networks with convolutional filters. In addition to multithreaded and MPI versions, EXACT has been ...

  18. Large-Scale Selective Functionalization of Alkanes.

    Science.gov (United States)

    Goldberg, Karen I; Goldman, Alan S

    2017-03-21

    Great progress has been made in the past several decades concerning C-H bond functionalization. But despite many significant advances, a commercially viable large-scale process for selective alkane functionalization remains an unreached goal. Such conversions will require highly active, selective, and long-lived catalysts. In addition, essentially complete atom-economy will be required. Thus, any reagents used in transforming the alkanes must be almost free (e.g., O2, H2O, N2), or they should be incorporated into the desired large-scale product. Any side-products should be completely benign or have value as fuels (e.g., H2 or other alkanes). Progress and promising leads toward the development of such systems involving primarily molecular transition metal catalysts are described.

  19. Large-scale Heterogeneous Network Data Analysis

    Science.gov (United States)

    2012-07-31

    “Information Diffusion over Crowds with Social Network.” ACM SIGGRAPH 2012 (poster). Wan-Yu Lin, Nanyun Peng, Chun-Chao Yen, Shou-De Lin. “Online Plagiarism ...” Abstract: A large-scale network is a powerful data structure allowing the depiction of relationship information between entities. Recent... we propose an unsupervised tensor-based mechanism, considering higher-order relational information, to model the complex semantics of nodes. The

  20. A large-scale biomass bulk terminal

    OpenAIRE

    Wu, M.R.

    2012-01-01

    This research explores the possibility of a large-scale bulk terminal in West Europe dedicated to handle solid and liquid biomass materials. Various issues regarding the conceptual design of such a terminal have been investigated and demonstrated in this research: the potential biomass materials that will be the major international trade flows in the future, the characteristics of these potential biomass materials, the interaction between the material properties and terminal equipment, the pe...

  1. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  2. Large-scale film structures in space

    Science.gov (United States)

    Simon, Kirill

Modern space technology calls not only for taking account of, but also for exploiting, specific attributes of the outer space environment such as weightlessness, centrifugal forces, hard vacuum and powerful solar radiation. These specific characteristics of outer space allow the use of various structures in space whose development and operation would be impossible or impractical on Earth. Currently, interest in large-scale space structures is growing; there are various projects on such multi-body space structures, and experiments are being conducted for their development. Such designs are represented by spacecraft with solar sails, orbiting solar reflectors, solar energy concentrators, low-frequency antennas and others. This paper examines a large-scale flexible space structure made from thin reflective film used as the working surface of a sunlight reflector or sailcraft. Specifically, this paper deals with techniques for modeling large-scale space structure attitude motion, numerical calculation of the vibrations which occur in the system after a spatial slew is performed, as well as optimal trajectory computations. Various methods of film structure attitude control and stabilization, including optimal slewing programs, are discussed.

  3. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  4. Primordial Non-Gaussianity and Bispectrum Measurements in the Cosmic Microwave Background and Large-Scale Structure

    Directory of Open Access Journals (Sweden)

    Michele Liguori

    2010-01-01

    Full Text Available The most direct probe of non-Gaussian initial conditions has come from bispectrum measurements of temperature fluctuations in the Cosmic Microwave Background and of the matter and galaxy distribution at large scales. Such bispectrum estimators are expected to continue to provide the best constraints on the non-Gaussian parameters in future observations. We review and compare the theoretical and observational problems, current results, and future prospects for the detection of a nonvanishing primordial component in the bispectrum of the Cosmic Microwave Background and large-scale structure, and the relation to specific predictions from different inflationary models.

  5. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  6. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

…limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic both for the feasibility of automatic management and for manpower resource management. In the fourth step the scope is extended to include the present society, with the DDN project as its main focus. Here the general perception of the nature and role in society of large-scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers. A summary of the most pressing growth limits for the coming three decades is given.

  7. Colloquium: Large scale simulations on GPU clusters

    Science.gov (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together but the efficiency obtained with cluster of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  8. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some… …the L-curve. This heuristic is implemented as part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New…

  9. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and subregions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use…
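The Monte Carlo pattern the abstract describes — run many realizations of a raster model with uncertain parameters, then summarize the ensemble per cell — can be sketched in plain NumPy. All names and the toy infiltration rule below are illustrative assumptions, not PCRaster's actual API:

```python
import numpy as np

def run_realization(rain, ksat, n_steps):
    """One spatio-temporal realization: water accumulates in each cell,
    with per-step infiltration capped by a spatially variable capacity."""
    storage = np.zeros_like(ksat)
    for _ in range(n_steps):
        infiltration = np.minimum(rain, ksat)  # map-algebra style local operation
        storage += infiltration
    return storage

def monte_carlo(shape=(50, 50), n_real=100, n_steps=10, seed=42):
    """Monte Carlo loop: draw an uncertain parameter map per realization,
    run the model, and summarize the ensemble cell by cell."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_real):
        ksat = rng.lognormal(mean=0.0, sigma=0.5, size=shape)  # uncertain parameter
        runs.append(run_realization(rain=1.0, ksat=ksat, n_steps=n_steps))
    stack = np.stack(runs)
    return stack.mean(axis=0), stack.std(axis=0)
```

A framework such as PCRaster factors exactly this control flow (time loop, realization loop, ensemble statistics) out of the model code, which is why independent realizations and subregions are natural units for parallel execution.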

  10. Tracing the origin of green macroalgal blooms based on the large scale spatio-temporal distribution of Ulva microscopic propagules and settled mature Ulva vegetative thalli in coastal regions of the Yellow Sea, China.

    Science.gov (United States)

    Huo, Yuanzi; Han, Hongbin; Hua, Liang; Wei, Zhangliang; Yu, Kefeng; Shi, Honghua; Kim, Jang Kyun; Yarish, Charles; He, Peimin

    2016-11-01

From 2008 to 2016, massive floating green macroalgal blooms occurred annually during the summer months in the Yellow Sea. The original source of these blooms was traced based on the spatio-temporal distribution and species composition of Ulva microscopic propagules and settled Ulva vegetative thalli, sampled monthly from December 2012 to May 2013 in the Yellow Sea. High quantities of Ulva microscopic propagules in both the water column and sediments were found in the Pyropia aquaculture area along the Jiangsu coast before a green macroalgal bloom appeared in the Yellow Sea. The abundance of Ulva microscopic propagules was significantly lower in outer areas than in Pyropia aquaculture areas. A molecular phylogenetic analysis suggested that Ulva prolifera microscopic propagules were the dominant microscopic propagules present during the study period. The extremely low biomass of settled Ulva vegetative thalli along the coast indicated that somatic cells of settled Ulva vegetative thalli did not provide a propagule bank for the green macroalgal blooms in the Yellow Sea. The results of this study provide further supporting evidence that the floating green macroalgal blooms originate from green macroalgae attached to Pyropia aquaculture rafts along the Jiangsu coastline of the southern Yellow Sea. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Macroecological factors explain large-scale spatial population patterns of ancient agriculturalists

    NARCIS (Netherlands)

    Xu, C.; Chen, B.; Abades, S.; Reino, L.; Teng, S.; Ljungqvist, F.C.; Huang, Z.Y.X.; Liu, X.

    2015-01-01

    Aim: It has been well demonstrated that the large-scale distribution patterns of numerous species are driven by similar macroecological factors. However, understanding of this topic remains limited when applied to our own species. Here we take a large-scale look at ancient agriculturalist

  12. Large-scale carbon nanotube synthesis.

    Science.gov (United States)

    MacKenzie, Kiern J; Dunens, Oscar M; See, Chee H; Harris, Andrew T

    2008-01-01

    Carbon nanotubes (CNTs) are a form of crystalline carbon with extraordinary chemical, physical, electrical and mechanical properties, making them potentially valuable in a broad range of applications. These properties have resulted in an unprecedented level of interest in the development of techniques to manufacture CNTs, and consequently a raft of competing patents have been issued, with universities and commercial entities alike looking to obtain patent protection for their inventions. In this paper we review relevant aspects of international patent law, summarize CNT definitions and discuss patent irregularities, and discuss the implications of the widening gap between nanotechnology practice and the underlying patent law. This is followed by a review of the chemical vapour deposition technique of CNT synthesis, in particular using a fluidised bed, identified as the most promising method to date for the large-scale, low cost production of CNTs. We further examine the carbon nanotube patent space, focusing primarily on patents for CNTs produced via CVD and FBCVD techniques. This patent space is both convoluted and uncertain, and it appears likely that some form of litigation will ensue in future to ultimately determine intellectual property ownership in various regions. We also discuss the likely effect of this 'patent thicket' on the commercialisation of large-scale CNT synthesis processes.

  13. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth

    2015-09-01

    Full Text Available Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  14. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and retain the brightest heads for your project.

  15. A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services

    OpenAIRE

    Putnam, Andrew; Caulfield, Adrian M.; Chung, Eric S.; Chiou, Derek; Constantinides, Kypros; Demme, John; Esmaeilzadeh, Hadi; Fowers, Jeremy; Gopal, Gopi Prashanth; Gray, Jan; Haselman, Michael; Hauck, Scott; Heil, Stephen; Hormati, Amir; Kim, Joo-Young

    2014-01-01

    Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we have designed and built a composable, reconfigurable fabric to accelerate portions of large-scale software services. Each instantiation of the fabric consists of a 6×8 2-D torus of high-end Stratix V FPGAs embedded into a half-rack of 48...

  16. Computational tools for large-scale biological network analysis

    OpenAIRE

    Pinto, José Pedro Basto Gouveia Pereira

    2012-01-01

    Tese de doutoramento em Informática The surge of the field of Bioinformatics, among other contributions, provided biological researchers with powerful computational methods for processing and analysing the large amount of data coming from recent biological experimental techniques such as genome sequencing and other omics. Naturally, this led to the opening of new avenues of biological research among which is included the analysis of large-scale biological networks. The an...

  17. Large-Scale Integrated Carbon Nanotube Gas Sensors

    OpenAIRE

    Joondong Kim

    2012-01-01

    Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical d...

  18. Large-Scale Integrated Carbon Nanotube Gas Sensors

    Directory of Open Access Journals (Sweden)

    Joondong Kim

    2012-01-01

    Full Text Available Carbon nanotube (CNT is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical demonstration.

  19. Large-scale simulation of karst processes - parameter estimation, model evaluation and quantification of uncertainty

    Science.gov (United States)

    Hartmann, A. J.

    2016-12-01

Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes karst hydrological processes into account by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (~25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, comparison to other large-scale models without karst processes included, and independent recharge observations. Using historic data (2002-2012) I show that recharge rates vary strongly over Europe and the Mediterranean. In regions with little information for parameter estimation there is a larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large-scale models underestimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity in the model, as the distribution shape factor is the most sensitive parameter for…

  20. Detecting the Spatio-temporal Distribution of Soil Salinity and Its Relationship to Crop Growth in a Large-scale Arid Irrigation District Based on Sampling Experiment and Remote Sensing

    Science.gov (United States)

    Ren, D.; Huang, G., Sr.; Xu, X.; Huang, Q., Sr.; Xiong, Y.

    2016-12-01

Soil salinity analysis on a regional scale is of great significance for protecting agricultural production and maintaining eco-environmental health in arid and semi-arid irrigated areas. In this study, the Hetao Irrigation District (Hetao) in the Inner Mongolia Autonomous Region, which has long suffered from soil salinization, was selected as the case study area. Field sampling experiments and investigations related to soil salt contents, crop growth and yields were carried out across the whole area from April to August 2015. Soil salinity characteristics in space and time were systematically analyzed for Hetao, as well as the corresponding impacts on crops. A remotely sensed map of the soil salinity distribution for surface soil was also derived from Landsat OLI data at 30 m resolution. The results elaborated the temporal and spatial dynamics of soil salinity and their relationships with irrigation, groundwater depth and crop water consumption in Hetao. In addition, the strong spatial variability of salinization was clearly presented by the remotely sensed map of soil salinity. Further, the relationship between soil salinity and crop growth was analyzed, and the impacts of soil salinization on cropping pattern, leaf area index, plant height and crop yield were preliminarily revealed. Overall, this study can provide very useful information for salinization control and guide future agricultural production and soil-water management for arid irrigation districts analogous to Hetao.

  1. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.

  2. Practical Security of Large-Scale Elections: An Exploratory Case Study of Internet Voting in Estonia

    Science.gov (United States)

    Schryen, Guido

    The Estonian parliamentary election in 2007 is regarded as a success story of large-scale Internet elections. I use this election in a single case study on practical security to show that low quality of security and its management does not necessarily prevent large-scale Internet elections from being conducted. I also provide research propositions with regard to future challenges for large-scale Internet elections.

  3. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
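As a small, concrete illustration of the SQP family (an off-the-shelf solver, not Eldersveld's reduced-Hessian implementation), SciPy's SLSQP method solves exactly the class of problem described: a nonlinear objective under a mixture of inequality constraints and bounds. The toy problem below has its constrained minimizer at approximately (1.4, 1.7):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x0 - 1)^2 + (x1 - 2.5)^2 subject to three linear
# inequality constraints g(x) >= 0 and nonnegativity bounds.
fun = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = (
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
)
res = minimize(fun, x0=(2.0, 0.0), method="SLSQP",
               bounds=((0, None), (0, None)), constraints=cons)
print(res.x)  # approximately [1.4, 1.7]
```

At each iteration such a solver builds a quadratic model of the Lagrangian subject to linearized constraints, which is the QP subproblem that the abstract's incomplete-solution and reduced-Hessian techniques are designed to make tractable at large scale.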

  4. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
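The per-pixel list idea can be prototyped in a few lines: bucket pathline segments by the screen pixel they project to, then filter per view without re-reading the raw flow data. The segment layout `(px, py, attr)` and the function names are assumptions for illustration, not the thesis's GPU data structure:

```python
from collections import defaultdict

def build_pixel_lists(segments, width, height):
    """Bucket pathline segments by the screen pixel they project to.
    Each segment is (px, py, attr): an assumed precomputed pixel
    coordinate plus one scalar attribute used for later filtering."""
    buckets = defaultdict(list)
    for seg_id, (px, py, attr) in enumerate(segments):
        if 0 <= px < width and 0 <= py < height:
            buckets[(px, py)].append((seg_id, attr))
    return buckets

def filter_view(buckets, attr_min):
    """View-time filtering: restrict every per-pixel list by attribute
    without ever touching the original flow-field data again."""
    return {pix: [s for s in segs if s[1] >= attr_min]
            for pix, segs in buckets.items()}
```

On the GPU the same structure is typically a flat node buffer plus a per-pixel head pointer, but the access pattern — walk the list behind each pixel — is the same.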

  5. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
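The core mechanism — downweighting observations with large residuals so a handful of corrupted subjects no longer drives the fit — can be sketched from scratch as Huber-weighted iteratively reweighted least squares (an illustrative sketch, not the authors' analysis pipeline):

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber-type robust regression via iteratively reweighted least
    squares: residuals beyond `delta` robust-scale units get weight
    delta*s/|r| instead of 1, shrinking the influence of outliers."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        # Weighted least-squares solve with the current weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / (s * delta)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)  # Huber weights
    return beta
```

With a few gross outliers injected into a linear dataset, the recovered slope stays close to the true value, whereas ordinary least squares would be pulled towards the outliers.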

  6. Dutch logistics service providers and sustainable physical distribution

    NARCIS (Netherlands)

    Onno Omta; Hans-Heinrich Glöckner; Reinder Pieters; Stef Weijers

    2013-01-01

    As environmental concerns becoming increasingly important to logistics service providers, the question arises as to how they can achieve sustainable physical distribution practices while surviving the severe competition in freight transport. This issue is further complicated by the pressures from

  7. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
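Giraph's vertex-centric ("think like a vertex") abstraction can be mimicked in plain Python: in each superstep every vertex combines the labels of its neighbours and all updates are applied synchronously. The minimal label-propagation connected-components sketch below illustrates the programming model only; it is not Giraph's actual API:

```python
def connected_components(adj, max_supersteps=30):
    """Toy vertex-centric BSP sketch: in each superstep every vertex
    adopts the smallest component label among itself and its neighbours;
    updates are applied synchronously, as in a Pregel-style superstep.
    `adj` maps each vertex to its (undirected) neighbour list."""
    labels = {v: v for v in adj}
    for _ in range(max_supersteps):
        new_labels = {v: min([labels[v]] + [labels[u] for u in nbrs])
                      for v, nbrs in adj.items()}
        if new_labels == labels:  # no vertex changed: all vertices halt
            return new_labels
        labels = new_labels
    return labels
```

In Giraph the same logic is expressed as a per-vertex `compute` method exchanging messages, with the framework handling partitioning, message delivery and superstep barriers across the cluster.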

  8. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

…locally. We propose a method based on large-scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated… …a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation…
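The thresholding idea reads naturally as code: estimate the background distribution robustly, convert each pixel intensity to a one-sided p-value, and keep the pixels that are unlikely under the background. This is a hedged sketch of the general approach with a simple median/MAD Gaussian fit, not the authors' exact estimator or threshold-selection rule:

```python
import numpy as np
from scipy.stats import norm

def segment_outliers(img, alpha=1e-3):
    """Segment bright structures as outliers under an estimated
    background distribution: fit the background robustly (median/MAD),
    convert pixels to one-sided p-values, keep those below alpha."""
    mu = np.median(img)
    sigma = np.median(np.abs(img - mu)) / 0.6745  # MAD -> Gaussian scale
    p_values = norm.sf((img - mu) / sigma)        # P(background >= pixel)
    return p_values < alpha
```

Because the background statistics are estimated from the image itself, the threshold adapts to each dataset, which is the "consistent method for selecting an appropriate threshold" the abstract refers to.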

  9. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  10. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi–site data by using within–year recaptures to provide a covariate of between–year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re–introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap–happiness” effects. As the data are based on resightings, such trap–happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  11. Distributed Smart Grid Asset Control Strategies for Providing Ancillary Services

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Wei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lian, Jianming [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Marinovici, Laurentiu D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Moya, Christian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dagle, Jeffery E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-30

    With large-scale plans to integrate renewable generation, driven mainly by state-level renewable portfolio requirements, more resources will be needed to compensate for the uncertainty and variability associated with intermittent generation resources. Distributed assets can be used to mitigate the concerns associated with renewable energy resources and to keep costs down. Under such conditions, performing primary frequency control using only supply-side resources becomes not only prohibitively expensive but also technically difficult. It is therefore important to explore how a sufficient proportion of the loads could assume a routine role in primary frequency control to maintain the stability of the system at an acceptable cost. The main objective of this project is to develop a novel hierarchical distributed framework for frequency-based load control. The framework involves two decision layers. The top decision layer determines the optimal gain for aggregated loads at each load bus. The gains are computed using decentralized robust control methods, and are broadcast to the corresponding participating loads every control period. The second layer consists of a large number of heterogeneous devices, which switch probabilistically during contingencies so that the aggregated power change matches the desired amount according to the most recently received gains. The simulation results show great potential to enable systematic design of demand-side primary frequency control with stability guarantees on the overall power system. The proposed design systematically accounts for the interactions between the total load response and bulk power system frequency dynamics. It also guarantees frequency stability under a wide range of time-varying operating conditions.
The local device-level load response rules fully respect the device constraints (such as temperature setpoint, compressor time delays of HVACs, or arrival and departure of the deferrable loads), which are crucial for
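The probabilistic device-level switching rule can be illustrated with a toy model: each participating device curtails with a probability chosen so that the expected aggregate reduction matches the amount requested by the top layer. This is a simplified illustration only; the project's actual controller also enforces device constraints and uses broadcast robust-control gains:

```python
import numpy as np

def switch_probability(target_kw, device_kw):
    """Probability with which each device curtails so that the expected
    aggregate reduction equals the requested target (toy homogeneous rule)."""
    total = float(np.sum(device_kw))
    return min(1.0, target_kw / total)

rng = np.random.default_rng(2)
device_kw = np.ones(10_000)          # 10k identical 1-kW participating loads
target = 3000.0                      # reduction requested during a contingency
p = switch_probability(target, device_kw)
curtailed = rng.random(device_kw.size) < p   # independent coin flips
reduction = float(device_kw[curtailed].sum())
```

With many small devices the realized aggregate response concentrates tightly around the target, which is what makes probabilistic switching attractive for matching a broadcast gain without central per-device dispatch.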

  12. Large-scale implementation of disease control programmes: a cost-effectiveness analysis of long-lasting insecticide-treated bed net distribution channels in a malaria-endemic area of western Kenya?a study protocol

    OpenAIRE

    Gama, Elvis; Were, Vincent; Ouma, Peter; Desai, Meghna; Niessen, Louis; Ann M Buff; Kariuki, Simon

    2016-01-01

    Introduction: Historically, Kenya has used various distribution models for long-lasting insecticide-treated bed nets (LLINs) with variable results in population coverage. The models presently vary widely in scale, target population and strategy. There is limited information to determine the best combination of distribution models which will lead to sustained high coverage and be operationally efficient and cost-effective. Standardised cost information is needed in combination with progra...

  13. Accelerating large-scale phase-field simulations with GPU

    Science.gov (United States)

    Shi, Xiaoming; Huang, Houbing; Cao, Guoping; Ma, Xingqiao

    2017-10-01

    A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using an algorithm tailored to the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were solved with the GPU implementation to test the performance of the package. A comparison of results between the single-CPU solver and the GPU solver showed that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
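The semi-implicit Fourier scheme can be sketched on the CPU with NumPy for the Allen-Cahn equation: the stiff gradient (Laplacian) term is treated implicitly in Fourier space while the nonlinear term stays explicit. The parameters below are illustrative, and the GPU/CUDA porting that is the paper's contribution is omitted:

```python
import numpy as np

def allen_cahn_semi_implicit(phi, n_steps=200, dt=0.1, M=1.0, kappa=1.0):
    """Semi-implicit Fourier update for the Allen-Cahn equation
    d(phi)/dt = -M * (phi^3 - phi - kappa * Laplacian(phi)).
    The Laplacian is inverted implicitly in Fourier space each step."""
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)          # wavenumbers (unit spacing)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    denom = 1.0 + dt * M * kappa * k2            # implicit gradient term
    for _ in range(n_steps):
        nonlinear = phi ** 3 - phi               # explicit bulk free energy
        phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(nonlinear)) / denom
        phi = np.real(np.fft.ifft2(phi_hat))
    return phi

rng = np.random.default_rng(3)
phi0 = 0.1 * rng.standard_normal((64, 64))       # small random initial field
phi = allen_cahn_semi_implicit(phi0)
```

Starting from small random noise, the field saturates toward the two equilibrium phases (phi near ±1) and coarsens into domains, and the implicit treatment of the Laplacian keeps the scheme stable at time steps where a fully explicit update would blow up.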

  14. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  15. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using an algorithm tailored to the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were solved with the GPU implementation to test the performance of the package. A comparison of results between the single-CPU solver and the GPU solver showed that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.

  16. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds, which aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  17. Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.

    Science.gov (United States)

    Gaertner, Jean-Claude; Maiorano, Porza; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro

    2013-01-01

    Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results run counter both to the traditional view based on the hump-shaped theory for bathymetric patterns and to the commonly admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial area, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using an approach widespread in ecology, namely the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
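The "pattern inconsistency between indices" can be made concrete with three standard diversity indices; the two hypothetical hauls below are invented to show that richness and evenness-sensitive indices can rank the same communities in opposite orders:

```python
import numpy as np

def richness(counts):
    """Number of species present."""
    return int(np.sum(np.asarray(counts) > 0))

def shannon(counts):
    """Shannon entropy H = -sum(p * ln p); sensitive to evenness."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Gini-Simpson index 1 - sum(p^2); also evenness-sensitive."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - (p ** 2).sum())

# hypothetical hauls: A has many species but one dominates,
# B has only two species, evenly represented
haul_a = [100, 1, 1, 1, 1]
haul_b = [50, 50]
```

Here haul A is "more diverse" by richness but "less diverse" by Shannon and Simpson, which is exactly why monitoring based on a single index can suggest misleading spatial patterns.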

  18. Large-scale carbon fiber tests

    Science.gov (United States)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  19. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  20. Large-scale PV grid integration

    Energy Technology Data Exchange (ETDEWEB)

    Martins Souza, Hellen; Jose do Carmo, Marlon; Rocha de Oliveira, Angelo [CEFET-MG, Leopoldina (Brazil). Dept. of Control and Automation; Willer de Oliveira, Leonard [Universidade Federal de Juiz de Fora (UFJF) (Brazil). Power Systems; Ribeiro, Paulo Fernando [Technische Univ. Eindhoven (Netherlands). Electrical Energy Systems

    2012-07-01

    This paper aims to review the development of solar energy as a renewable source of electricity, which is vital to the development of a more sustainable world, pointing out challenges and technology trends. First, there will be a review of the development of photovoltaic panels, focusing on countries where the technology is in its most advanced stage, such as Germany, Netherlands, the United States and Spain as well as showing trends of this type of power generation source. Important aspects such as efficiency and production costs of photovoltaic power plants will be covered. In addition, the integration of this generation sources to the electric power system will be considered concerning their impact on system parameters as power quality, stability and protection. The existing rules and interconnection standards for large-scale integration / implementation of photovoltaic (PV) generation are reviewed and discussed. Finally, a special application case of PV in the country of Brazil is briefly mentioned considering its potential for generation and implementation in the upcoming years. (orig.)

  1. Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization

    Science.gov (United States)

    2017-06-01

    Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization. Master's thesis by Katherine H. Guthrie, June 2017. Thesis Advisor: Robert F. Dell. Approved for public release; distribution is unlimited. Abstract (truncated): A cascade heuristic appeals when we are faced with a

  2. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to scale up dramatically nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
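One classical way to scale up kernel methods in this spirit is Nystrom-style subsampling: fit kernel ridge regression on a small set of random landmark points, trading a controlled approximation for large computational savings. The sketch below is a generic textbook construction with illustrative parameters, not the talk's specific algorithm:

```python
import numpy as np

def rbf(a, b, gamma=10.0):
    """Gaussian (RBF) kernel matrix between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-gamma * d2)

def nystrom_krr(x, y, m=50, lam=1e-6, gamma=10.0, seed=0):
    """Kernel ridge regression restricted to m random landmarks:
    solve (K_nm^T K_nm + lam * K_mm) alpha = K_nm^T y, cost O(n m^2)
    instead of the O(n^3) of exact kernel ridge regression."""
    rng = np.random.default_rng(seed)
    landmarks = rng.choice(x, size=m, replace=False)
    K_nm = rbf(x, landmarks, gamma)
    K_mm = rbf(landmarks, landmarks, gamma)
    alpha = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm + 1e-10 * np.eye(m),
                            K_nm.T @ y)
    return lambda q: rbf(q, landmarks, gamma) @ alpha

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=2000)
predict = nystrom_krr(x, y)
xq = np.linspace(0.05, 0.95, 50)
err = np.mean((predict(xq) - np.sin(2 * np.pi * xq)) ** 2)
```

With 50 landmarks instead of a 2000-point kernel matrix the fit remains accurate, illustrating the "less is more" point: the subsampling acts as a regularizer as well as a computational shortcut.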

  3. Infrastructure and interfaces for large-scale numerical software.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.; Gropp, W. D.; Hovland, P. D.; McInnes, L. C.; Smith, B. F.

    1999-06-10

    The complexity of large-scale scientific simulations often necessitates the combined use of multiple software packages developed by different groups in areas such as adaptive mesh manipulations, scalable algebraic solvers, and optimization. Historically, these packages have been combined by using custom code. This practice inhibits experimentation with and comparison of multiple tools that provide similar functionality through different implementations. The ALICE project, a collaborative effort among researchers at Argonne National Laboratory, is exploring the use of component-based software engineering to provide better interoperability among numerical toolkits. They discuss some initial experiences in developing an infrastructure and interfaces for high-performance numerical computing.

  4. Large-Scale CFD Parallel Computing Dealing with Massive Mesh

    Directory of Open Access Journals (Sweden)

    Zhi Shang

    2013-01-01

    Full Text Available In order to run CFD codes more efficiently at large scales, parallel computing has to be employed. For example, at industrial scales, tens of thousands of mesh cells are typically used to capture the details of complex geometries. How to distribute these mesh cells among the multiprocessors to obtain good parallel computing performance (HPC) is a real challenge. Because of the massive number of mesh cells, it is difficult for CFD codes without parallel optimizations to handle this kind of large-scale computing. Some open source mesh partitioning software packages, such as Metis, ParMetis, Scotch, PT-Scotch, and Zoltan, are able to deal with the distribution of large numbers of mesh cells. Therefore, they were employed as parallel optimization tools ported into Code_Saturne, an open source CFD code, to test whether they can solve the issue of dealing with massive mesh cells for CFD codes. Through the studies, it was found that the mesh partitioning optimization software packages can help CFD codes not only deal with massive mesh cells but also achieve good HPC.
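The core idea behind partitioners like Metis and Scotch can be illustrated with spectral bisection: split the mesh's connectivity graph by the sign of the Fiedler vector (the second eigenvector of the graph Laplacian), balancing the parts while keeping the cut small. This is a dense, toy-sized sketch of the principle, not the multilevel algorithms those packages actually use:

```python
import numpy as np

def spectral_bisection(adj):
    """Split a graph into two balanced parts using the Fiedler vector
    (second-smallest eigenvector of the graph Laplacian L = D - A)."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    _, vecs = np.linalg.eigh(lap)          # eigenvalues ascending
    fiedler = vecs[:, 1]
    return fiedler > np.median(fiedler)    # median split balances the parts

# a 1-D "mesh": path graph with 100 cells
n = 100
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
part = spectral_bisection(adj)
cut = sum(1 for i in range(n - 1) if part[i] != part[i + 1])
```

For the path graph the method recovers the ideal partition: two contiguous halves with a single cut edge, i.e. balanced load with minimal inter-processor communication.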

  5. Ecogrid EU: a large scale smart grids demonstration of real time market-based integration of numerous small der and DR

    NARCIS (Netherlands)

    Ding, Y.; Nyeng, P.; Ostergaard, J.; Trong, M.D.; Pineda, S.; Kok, K.; Huitema, G.B.; Grande, O.S.

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, a large-scale demonstration project on the Danish island of Bornholm. It offers Europe a fast-track evolution towards smart grid dissemination and deployment in the distribution network. The objective of Ecogrid EU is to illustrate that

  6. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    Science.gov (United States)

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important; it relies on the ability to retrieve complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a major challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model ensures the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model enhance retrieval quality. The template-based visualization method enhances availability and universality, with medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful for improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
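The retrieval core described here, an inverted index queried through a multi-factor ranking model, can be sketched in miniature. The two factors below (TF-IDF relevance plus a recency boost) and all record names are illustrative assumptions, not the paper's actual model:

```python
from collections import defaultdict
import math

class TinyIndex:
    """A minimal inverted index with a two-factor ranking:
    TF-IDF relevance plus a recency factor favoring newer records."""
    def __init__(self):
        self.postings = defaultdict(dict)   # term -> {doc_id: term frequency}
        self.timestamps = {}

    def add(self, doc_id, text, timestamp):
        for term in text.lower().split():
            self.postings[term][doc_id] = self.postings[term].get(doc_id, 0) + 1
        self.timestamps[doc_id] = timestamp

    def search(self, query, now, recency_weight=0.1):
        n_docs = len(self.timestamps)
        scores = defaultdict(float)
        for term in query.lower().split():
            docs = self.postings.get(term, {})
            if not docs:
                continue
            idf = math.log(1 + n_docs / len(docs))
            for doc_id, tf in docs.items():
                scores[doc_id] += tf * idf          # relevance factor
        for doc_id in scores:
            age = now - self.timestamps[doc_id]
            scores[doc_id] += recency_weight / (1.0 + age)  # recency factor
        return sorted(scores, key=scores.get, reverse=True)

idx = TinyIndex()
idx.add("r1", "ginseng decoction for fatigue", timestamp=1)
idx.add("r2", "ginseng and astragalus decoction", timestamp=9)
idx.add("r3", "ephedra decoction for cough", timestamp=5)
hits = idx.search("ginseng decoction", now=10)
```

Records r1 and r2 tie on term relevance, so the recency factor decides their order, which is the kind of tie-breaking a multi-factor model contributes over pure keyword matching.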

  7. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  8. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  9. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator; it includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  10. Experimental Investigation of Large-Scale Bubbly Plumes

    Energy Technology Data Exchange (ETDEWEB)

    Zboray, R.; Simiano, M.; De Cachard, F

    2004-03-01

    Carefully planned and instrumented experiments under well-defined boundary conditions have been carried out on large-scale, isothermal, bubbly plumes. The data obtained is meant to validate newly developed, high-resolution numerical tools for 3D transient, two-phase flow modelling. Several measurement techniques have been utilised to collect data from the experiments: particle image velocimetry, optical probes, electromagnetic probes, and visualisation. Bubble and liquid velocity fields, void-fraction distributions, bubble size and interfacial-area-concentration distributions have all been measured in the plume region, as well as recirculation velocities in the surrounding pool. The results obtained from the different measurement techniques have been compared. In general, the two-phase flow data obtained from the different techniques are found to be consistent, and of high enough quality for validating numerical simulation tools for 3D bubbly flows. (author)

  11. Large-scale seismic waveform quality metric calculation using Hadoop

    Science.gov (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
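The MapReduce computation pattern used for the waveform metrics can be sketched with a toy per-station statistic. The mapper emits partial sums per waveform segment, the shuffle groups them by station, and the reducer combines them; the dispatch to a real Hadoop/Spark cluster is omitted and the metric (mean and variance of samples) is an illustrative stand-in for the study's quality metrics:

```python
from collections import defaultdict

def mapper(record):
    """Emit (station, (sum, sum_of_squares, count)) for one segment."""
    station, samples = record
    return station, (sum(samples), sum(x * x for x in samples), len(samples))

def reducer(a, b):
    """Combine two partial aggregates; associative, so reduction can
    happen in any order across the cluster."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def run_mapreduce(records):
    grouped = defaultdict(list)            # the "shuffle" phase
    for rec in records:
        key, val = mapper(rec)
        grouped[key].append(val)
    out = {}
    for key, vals in grouped.items():      # per-key reduction
        acc = vals[0]
        for v in vals[1:]:
            acc = reducer(acc, v)
        s, ss, n = acc
        mean = s / n
        out[key] = (mean, ss / n - mean * mean)   # station mean, variance
    return out

records = [("ANMO", [1.0, 2.0, 3.0]), ("ANMO", [3.0, 4.0, 5.0]),
           ("TUC", [0.0, 0.0, 0.0])]
stats = run_mapreduce(records)
```

Because the reducer is associative, partial aggregates can be combined on any node in any order, which is what lets the metric calculation scale out across the 18-node cluster described above.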

  12. Solving Large-scale Eigenvalue Problems in SciDACApplications

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of recent developments in eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
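
    The Krylov subspace methods the paper builds on can be illustrated with a bare-bones Lanczos iteration for a symmetric matrix; this is a minimal sketch, without the reorthogonalization, restarting, or preconditioning a production SciDAC solver would need:

```python
import numpy as np

def lanczos_extreme_eig(A, k=60, seed=0):
    """Estimate the largest eigenvalue of symmetric A with k Lanczos steps.
    Minimal sketch: no reorthogonalization, restarting, or preconditioning."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # invariant subspace found, stop early
            break
        q_prev, q = q, w / beta
    # Ritz values of the small tridiagonal matrix approximate extreme eigenvalues.
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)[-1]

# Test matrix with a known spectrum; note only products A @ v are ever needed,
# which is what makes the method attractive for large sparse problems.
A = np.diag(np.arange(1.0, 101.0))
A[0, -1] = A[-1, 0] = 0.5
estimate = lanczos_extreme_eig(A)
exact = np.linalg.eigvalsh(A)[-1]
print(abs(estimate - exact) < 1e-4)
```

    The interior eigenvalues converge far more slowly than the extreme ones, which is one of the limitations that motivates the preconditioned alternatives discussed in the paper.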

  13. Large-scale File System Design and Architecture

    Directory of Open Access Journals (Sweden)

    V. Dynda

    2002-01-01

    Full Text Available This paper deals with design issues of a global file system, aiming to provide transparent data availability, security against loss and disclosure, and support for mobile and disconnected clients. First, the paper surveys general challenges and requirements for large-scale file systems; then the design of particular elementary parts of the proposed file system is presented. This includes the design of the raw system architecture, of dynamic file replication with appropriate data consistency, and of file location and data security. Our proposed system is called Gaston, and will be referred to further in the text by this name or its abbreviation GFS (Gaston File System).
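
    File location in a distributed file system of this kind can be sketched with consistent hashing, which lets any client compute replica placement from the path alone and keeps the mapping stable as servers join or leave. This is an illustrative stand-in, not Gaston's actual replication or location scheme:

```python
import hashlib
from bisect import bisect_right

def _h(key):
    # Stable 64-bit hash of a string key.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class ReplicaLocator:
    """Toy consistent-hashing locator: maps a file path to `replicas` servers
    so that adding or removing a server relocates only a small fraction of
    files. Hypothetical server names, for illustration only."""

    def __init__(self, servers, vnodes=64):
        # Each server gets `vnodes` points on the hash ring for load balance.
        self.ring = sorted((_h(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, path, replicas=3):
        # Walk clockwise from the file's hash, collecting distinct servers.
        i = bisect_right(self.keys, _h(path))
        found = []
        while len(found) < replicas:
            server = self.ring[i % len(self.ring)][1]
            if server not in found:
                found.append(server)
            i += 1
        return found

loc = ReplicaLocator(["s1", "s2", "s3", "s4", "s5"])
print(loc.locate("/home/alice/report.tex"))
```

    Deterministic placement of this sort avoids a central location service, one of the classic bottlenecks in global file system designs.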

  14. Large-Scale Query and XMatch, Entering the Parallel Zone

    Science.gov (United States)

    Nieto-Santisteban, M. A.; Thakar, A. R.; Szalay, A. S.; Gray, J.

    2006-07-01

    Current and future astronomical surveys are producing catalogs with millions and billions of objects. On-line access to such big datasets for data mining and cross-correlation is usually as highly desired as it is unfeasible. Providing these capabilities is becoming critical for the Virtual Observatory framework. In this paper we present various performance tests that show how, by using Relational Database Management Systems (RDBMS) and a zoning algorithm to partition and parallelize the computation, we can facilitate large-scale queries and cross-matches.
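
    The core of the zoning idea can be sketched in a few lines: bucket one catalog into declination zones of height comparable to the match radius, so each source is compared only against candidates in its own and the two adjacent zones (in an RDBMS the zone id becomes a join key that partitions and parallelizes the work). The flat-sky small-angle distance and toy catalogs below are simplifications:

```python
import math

def zone_crossmatch(cat1, cat2, radius_deg):
    """Toy declination-zone cross-match. Points are (ra, dec) in degrees;
    small-angle flat-sky distances are used for brevity."""
    h = radius_deg                      # zone height = match radius
    zones = {}
    for ra, dec in cat2:
        zones.setdefault(math.floor((dec + 90.0) / h), []).append((ra, dec))
    matches = []
    for ra, dec in cat1:
        z = math.floor((dec + 90.0) / h)
        for zz in (z - 1, z, z + 1):    # neighbouring zones only
            for ra2, dec2 in zones.get(zz, []):
                dra = (ra - ra2) * math.cos(math.radians(dec))
                if dra * dra + (dec - dec2) ** 2 <= radius_deg ** 2:
                    matches.append(((ra, dec), (ra2, dec2)))
    return matches

pairs = zone_crossmatch([(10.0, 20.0)],
                        [(10.0005, 20.0003), (50.0, -30.0)], 0.01)
print(len(pairs))
```

    Because zone membership is a simple arithmetic function of declination, the bucketing can be expressed as an indexed SQL join, which is what makes the approach effective inside a database engine.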

  15. Cosmology from large-scale structure observations: a subjective review

    Science.gov (United States)

    Bilicki, Maciej; Hellwing, Wojciech A.

    2017-08-01

    In these lecture notes we give a brief overview of cosmological inference from the observed large-scale structure (LSS) of the Universe. After a general introduction, we briefly summarize the current status of the standard cosmological model, ΛCDM, and then discuss a few general puzzles related to this otherwise successful model. Next, after a concise presentation of LSS properties, we describe various observational cosmological probes, such as baryon acoustic oscillations, redshift space distortions and weak gravitational lensing. We also provide examples of how the rapidly developing technique of cross-correlations of cosmological datasets is applied. Finally, we briefly mention the promise brought to cosmology by gravitational wave detections.

  16. Cosmological constraints from large-scale structure growth rate measurements

    Science.gov (United States)

    Pavlov, Anatoly; Farooq, Omer; Ratra, Bharat

    2014-07-01

    We compile a list of 14 independent measurements of the large-scale structure growth rate over the redshift range 0.067 ≤ z ≤ 0.8 and use them to place constraints on model parameters of constant and time-evolving general-relativistic dark energy cosmologies. Under the assumption that gravity is well modeled by general relativity, we find that growth-rate data provide restrictive cosmological parameter constraints. In combination with type Ia supernova apparent magnitude versus redshift data and Hubble parameter measurements, the growth-rate data are consistent with the standard spatially flat ΛCDM model, as well as with mildly evolving dark energy density cosmological models.
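
    For a flat ΛCDM background of the kind these data are tested against, the growth rate is well approximated by f(z) ≈ Ω_m(z)^γ with growth index γ ≈ 0.55 in general relativity; a minimal sketch, with an assumed, illustrative Ω_m0 = 0.3:

```python
# Flat LCDM growth rate via the standard approximation f(z) ~ Omega_m(z)**0.55,
# where gamma ~ 0.55 is the general-relativistic growth index.
Om0 = 0.3   # assumed present-day matter density parameter (illustrative)

def omega_m(z, Om0=Om0):
    e2 = Om0 * (1.0 + z) ** 3 + (1.0 - Om0)   # H(z)^2 / H0^2, flat LCDM
    return Om0 * (1.0 + z) ** 3 / e2

def growth_rate(z, gamma=0.55):
    return omega_m(z) ** gamma

for z in (0.067, 0.4, 0.8):      # endpoints and midpoint of the data range
    print(round(growth_rate(z), 3))
```

    Deviations of the measured growth index from γ ≈ 0.55 are exactly the kind of signal such compilations are used to constrain.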

  17. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences for planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere. 
This model is the first to compute non-ideal effects from

  18. Application of visible/near infrared spectroscopy to quality control of fresh fruits and vegetables in large-scale mass distribution channels: a preliminary test on carrots and tomatoes.

    Science.gov (United States)

    Beghi, Roberto; Giovenzana, Valentina; Tugnolo, Alessio; Guidetti, Riccardo

    2017-11-02

    The market for fruits and vegetables is mainly controlled by the mass distribution channel (MDC). MDC buyers do not have useful instruments to rapidly evaluate the quality of the products. Decisions by the buyers are driven primarily by pricing strategies rather than product quality. Simple, rapid and easy-to-use methods for objectively evaluating the quality of postharvest products are needed. The present study aimed to use visible and near-infrared (vis/NIR) spectroscopy to estimate some qualitative parameters of two low-price products (carrots and tomatoes) of various brands, as well as to evaluate the applicability of this technique for use in stores. A non-destructive optical system (vis/NIR spectrophotometer with a reflection probe, spectral range 450-1650 nm) was tested. The differences in quality among carrots and tomatoes purchased from 13 stores on various dates were examined. The reference quality parameters (firmness, water content, soluble solids content, pH and colour) were correlated with the spectral readings. The models derived from the optical data gave positive results, in particular for the prediction of the soluble solids content and the colour, with better results for tomatoes than for carrots. The application of optical techniques may help MDC buyers to monitor the quality of postharvest products, leading to an effective optimization of the entire supply chain. © 2017 Society of Chemical Industry.
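
    A spectral calibration of this kind boils down to regressing a reference quality parameter on the measured reflectance spectra. The sketch below uses ridge-regularized least squares on synthetic spectra as a stand-in; real chemometric pipelines typically use PLS regression with cross-validation:

```python
import numpy as np

# Synthetic stand-in for a calibration set: 60 samples x 120 spectral bands
# (e.g. 450-1650 nm resampled), with one informative band window driving the
# response (say, soluble solids content). All values are simulated.
rng = np.random.default_rng(0)
n_samples, n_bands = 60, 120
true_w = np.zeros(n_bands)
true_w[30:35] = 1.0
X = rng.standard_normal((n_samples, n_bands))
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)

# Ridge-regularized least squares: w = (X'X + lam I)^{-1} X'y.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_bands), X.T @ y)

pred = X @ w
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.9)   # in-sample fit; a real study would cross-validate
```

    The regularization matters because spectra have many more bands than there are calibration samples, which makes unregularized least squares ill-posed.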

  19. The combustion behavior of large scale lithium titanate battery

    Science.gov (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems remain a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they ignited. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze it in terms of the underlying reactions. Based on these observations, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden ejections of smoke. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112–121°C on the anode tab and 139–147°C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are identified as the main causes of these differences. PMID:25586064

  20. Large Scale Land Acquisition as a driver of slope instability

    Science.gov (United States)

    Danilo Chiarelli, Davide; Rulli, Maria Cristina; Davis, Kyle F.; D'Odorico, Paolo

    2017-04-01

    Forests play a key role in preventing shallow landslides, and deforestation has been identified as one of the main causes of increased mass wasting on hillslopes undergoing land cover change. In the last few years, vast tracts of land have been acquired by foreign investors to satisfy an increasing demand for agricultural products. Large Scale Land Acquisitions (LSLA) often entail the conversion of forested landscapes into agricultural fields. Mozambique has been a major target of LSLAs, and there is evidence that much of the acquired land has recently undergone forest clearing. The Zambezia Province in Mozambique lost more than 500,000 ha of forest from 2000 to 2014; 25.4% of this loss occurred in areas acquired by large-scale land investors. According to Land Matrix, an open-source database of reported land deals, there are currently 123 intended and confirmed deals in Mozambique; collectively, they account for 2.34 million ha, the majority of which are located in forested areas. This study analyses the relationship between deforestation taking place inside LSLA areas (usually for agricultural purposes) and the likelihood of landslide occurrence in the Zambezia Province of Mozambique. To this aim, we use a spatially distributed and physically based model that couples slope stability analysis with a hillslope-scale hydrological model, and we compare the change in slope stability associated with the forest loss documented by satellite imagery.
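
    The role of forest cover in a slope stability analysis can be illustrated with the classical infinite-slope factor of safety, where root cohesion enters the resisting term; this is a textbook formulation with illustrative parameter values, not the coupled hydrological-stability model used in the study:

```python
import math

def factor_of_safety(slope_deg, z_m, cohesion_kpa, root_cohesion_kpa,
                     friction_deg, sat_ratio, gamma_s=18.0, gamma_w=9.81):
    """Infinite-slope factor of safety (FS < 1 means unstable).
    z_m: soil depth; sat_ratio: saturated fraction of the soil column;
    gamma_s, gamma_w: soil and water unit weights (kN/m^3)."""
    th = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    u = sat_ratio * gamma_w * z_m * math.cos(th) ** 2          # pore pressure
    resisting = (cohesion_kpa + root_cohesion_kpa) \
        + (gamma_s * z_m * math.cos(th) ** 2 - u) * math.tan(phi)
    driving = gamma_s * z_m * math.sin(th) * math.cos(th)
    return resisting / driving

# Same hillslope with and without the root cohesion provided by forest cover:
fs_forest = factor_of_safety(35.0, 1.5, 2.0, 8.0, 30.0, 0.8)
fs_cleared = factor_of_safety(35.0, 1.5, 2.0, 0.0, 30.0, 0.8)
print(fs_forest > 1.0 > fs_cleared)
```

    In this illustrative setting, removing the root cohesion contributed by the forest drops the factor of safety below unity, i.e. the cleared hillslope becomes unstable, which is the mechanism linking LSLA-driven deforestation to landslide likelihood.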

  1. Extending large-scale forest inventories to assess urban forests.

    Science.gov (United States)

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.

  2. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  3. Autonomic Computing Paradigm For Large Scale Scientific And Engineering Applications

    Science.gov (United States)

    Hariri, S.; Yang, J.; Zhang, Y.

    2005-12-01

    Large-scale distributed scientific applications are highly adaptive and heterogeneous in terms of their computational requirements. The computational complexity associated with each computational region or domain varies continuously and dramatically, both in space and time, throughout the whole life cycle of the application execution. Furthermore, the underlying distributed computing environment is similarly complex and dynamic in the availability and capacity of its computing resources. Together, these challenges make the current paradigms, which are based on passive components and static compositions, ineffectual. The Autonomic Computing paradigm is an approach that efficiently addresses the complexity and dynamism of large-scale scientific and engineering applications and realizes the self-management of these applications. In this presentation, we present an Autonomic Runtime Manager (ARM) that supports the development of autonomic applications. The ARM includes two modules: an online monitoring and analysis module and an autonomic planning and scheduling module. The ARM behaves as a closed-loop control system that dynamically controls and manages the execution of the applications at runtime. It regularly senses the state changes of both the applications and the underlying computing resources. It then uses this runtime information and prior knowledge about the application behavior and its physics to identify the appropriate solution methods as well as the required computing and storage resources. Consequently this approach enables us to develop autonomic applications, which are capable of self-management and self-optimization. We have developed and implemented the autonomic computing paradigms for several large scale applications such as wildfire simulations, simulations of flow through variably saturated geologic formations, and life sciences. The distributed wildfire simulation models the wildfire spread behavior by considering such factors as fuel

  4. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays, inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  5. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation, on the other hand, is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solving this problem. One of them assumes the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and of the speed of light (among others), provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (like the varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is governed by general relativity) is demonstrated as well. 
Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  6. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    Energy Technology Data Exchange (ETDEWEB)

    Hamid Sarv

    1999-03-01

    Technical and economic feasibility of large-scale CO2 transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO2. In one case, CO2 was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure, was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO2 to deep ocean floor depressions. For shorter distances, CO2 delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO2, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO2 transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO2 effluent plumes have been identified as areas that require further attention. Our planned activities in the next phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO2 discharge and dispersion, and a conceptual economic and engineering evaluation of large-scale implementation.

  7. KeystoneML: Optimizing Pipelines for Large-Scale Advanced Analytics

    OpenAIRE

    Sparks, Evan R.; Venkataraman, Shivaram; Kaftan, Tomer; Franklin, Michael J.; Recht, Benjamin

    2016-01-01

    Modern advanced analytics applications make use of machine learning techniques and contain multiple steps of domain-specific and general-purpose processing with high resource requirements. We present KeystoneML, a system that captures and optimizes end-to-end large-scale machine learning applications for high-throughput training in a distributed environment with a high-level API. This approach offers increased ease of use and higher performance over existing systems for large scale learning...

  8. Statistics of Caustics in Large-Scale Structure Formation

    Science.gov (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher-order Lagrangian approximations and Lagrangian effective field theory.
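
    In the 1D Zel'dovich setting used in the analysis, particles move ballistically as x(q, D) = q + D ψ(q), with D the linear growth factor; a caustic forms where the Lagrangian map folds, 1 + D ψ'(q) = 0, so the first caustic appears at D = −1 / min ψ'(q). A toy displacement field makes this concrete:

```python
import numpy as np

# Particles move as x(q, D) = q + D * psi(q); the map folds (caustic) where
# 1 + D * psi'(q) = 0, so the first caustic forms at D = -1 / min psi'.
q = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
psi = 0.5 * np.sin(q)              # toy displacement field, not a Gaussian draw
dpsi = np.gradient(psi, q)         # psi'(q), analytically 0.5 * cos(q)

D_caustic = -1.0 / dpsi.min()      # first shell-crossing growth factor
print(round(D_caustic, 3))
```

    For a Gaussian random ψ, min ψ' becomes a random variable, and its distribution is precisely what governs the caustic number densities computed in the paper.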

  9. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with nested optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first uses decomposition techniques, while the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions of the uncertain variables. Moreover, these ...

  10. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Marcello [Univ. of Toronto, ON (Canada); Baldauf, T. [Inst. of Advanced Studies, Princeton, NJ (United States); Bond, J. Richard [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Dalal, N. [Univ. of Illinois, Urbana-Champaign, IL (United States); Putter, R. D. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Dore, O. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Green, Daniel [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Hirata, Chris [The Ohio State Univ., Columbus, OH (United States); Huang, Zhiqi [Univ. of Toronto, ON (Canada); Huterer, Dragan [Univ. of Michigan, Ann Arbor, MI (United States); Jeong, Donghui [Pennsylvania State Univ., University Park, PA (United States); Johnson, Matthew C. [York Univ., Toronto, ON (Canada); Perimeter Inst., Waterloo, ON (Canada); Krause, Elisabeth [Stanford Univ., CA (United States); Loverde, Marilena [Univ. of Chicago, IL (United States); Meyers, Joel [Univ. of Toronto, ON (Canada); Meeburg, Daniel [Univ. of Toronto, ON (Canada); Senatore, Leonardo [Stanford Univ., CA (United States); Shandera, Sarah [Pennsylvania State Univ., University Park, PA (United States); Silverstein, Eva [Stanford Univ., CA (United States); Slosar, Anze [Brookhaven National Lab. (BNL), Upton, NY (United States); Smith, Kendrick [Perimeter Inst., Waterloo, Toronto, ON (Canada); Zaldarriaga, Matias [Univ. of Toronto, ON (Canada); Assassi, Valentin [Cambridge Univ. (United Kingdom); Braden, Jonathan [Univ. of Toronto, ON (Canada); Hajian, Amir [Univ. of Toronto, ON (Canada); Kobayashi, Takeshi [Perimeter Inst., Waterloo, Toronto, ON (Canada); Univ. of Toronto, ON (Canada); Stein, George [Univ. of Toronto, ON (Canada); Engelen, Alexander van [Univ. of Toronto, ON (Canada)

    2014-12-15

    The statistics of primordial curvature fluctuations are our window into the period of inflation, when these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure is, however, where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc (f_NL^eq), natural target levels of sensitivity are Δf_NL^{loc,eq} ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  11. Large-scale stabilization control of input-constrained quadrotor

    Directory of Open Access Journals (Sweden)

    Jun Jiang

    2016-10-01

    Full Text Available The quadrotor has been the most popular aircraft in the last decade due to its excellent dynamics, and it continues to attract ever-increasing research interest. Delivering a quadrotor from a large fixed-wing aircraft is a promising application of quadrotors. In such an application, the quadrotor needs to switch from a highly unstable status, characterized by large initial states, to a safe and stable flight status. This is the so-called large-scale stability control problem. In such an extreme scenario, the quadrotor is at risk of actuator saturation. This can cause the controller to update incorrectly and lead the quadrotor to spiral and crash. In this article, to safely control the quadrotor in such scenarios, the control input constraint is analyzed. The key states of a quadrotor dynamic model are selected, and a two-dimensional dynamic model is extracted based on a symmetrical body configuration. A generalized point-wise min-norm nonlinear control method is proposed based on the Lyapunov function, and large-scale stability control is hence achieved. An enhanced point-wise min-norm control is further provided to improve the attitude control performance, with altitude performance degrading slightly. Simulation results showed that the proposed control methods can stabilize the input-constrained quadrotor and that the enhanced method can improve the performance of the quadrotor in critical states.
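
    The point-wise min-norm idea can be sketched on a scalar system: at each state, apply the smallest control that makes a Lyapunov function decay at a prescribed rate. The toy dynamics below are a stand-in, not the quadrotor model from the paper:

```python
# Scalar system xdot = f(x) + g(x) * u with control Lyapunov function
# V(x) = x^2 / 2. The point-wise min-norm control takes the smallest |u|
# enforcing Vdot = LfV + LgV * u <= -sigma(x) at every state.
def f(x):
    return x            # unstable drift

def g(x):
    return 1.0

def sigma(x):
    return x * x        # prescribed decay rate

def min_norm_u(x):
    LfV, LgV = x * f(x), x * g(x)
    if LfV + sigma(x) <= 0.0 or LgV == 0.0:
        return 0.0      # decay condition already holds: zero control effort
    return -(LfV + sigma(x)) / LgV

# Closed-loop Euler simulation from a large initial state.
x, dt = 2.0, 0.01
for _ in range(1000):
    x += dt * (f(x) + g(x) * min_norm_u(x))
print(abs(x) < 0.1)
```

    Because the control is the minimum-norm solution of a point-wise constraint, it expends no effort where the system already decays, which is what makes the approach attractive under input saturation.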

  12. Remote Sensing Image Classification With Large-Scale Gaussian Processes

    Science.gov (United States)

    Morales-Alvarez, Pablo; Perez-Suay, Adrian; Molina, Rafael; Camps-Valls, Gustau

    2018-02-01

    Current remote sensing image classification problems have to deal with an unprecedented amount of heterogeneous and complex data sources. Upcoming missions will soon provide large data streams that will make land cover/use classification difficult. Machine learning classifiers can help here, and many methods are currently available. A popular kernel classifier is the Gaussian process classifier (GPC), since it approaches the classification problem with a solid probabilistic treatment, thus yielding confidence intervals for the predictions as well as results very competitive with state-of-the-art neural networks and support vector machines. However, its computational cost is prohibitive for large-scale applications and constitutes the main obstacle precluding wide adoption. This paper tackles this problem by introducing two novel efficient methodologies for Gaussian Process (GP) classification. We first include the standard random Fourier features approximation into GPC, which largely decreases its computational cost and permits large-scale remote sensing image classification. In addition, we propose a model which avoids randomly sampling a number of Fourier frequencies, and alternatively learns the optimal ones within a variational Bayes approach. The performance of the proposed methods is illustrated in complex problems of cloud detection from multispectral imagery and infrared sounding data. Excellent empirical results support the proposal in both computational cost and accuracy.
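
    The random Fourier features approximation mentioned above replaces the RBF kernel with an explicit finite-dimensional feature map z(x) such that z(x)·z(y) ≈ k(x, y), reducing kernel/GP classification to linear-model cost; a minimal sketch:

```python
import numpy as np

def rff(X, D=2000, lengthscale=1.0, seed=0):
    """Random Fourier feature map: z(x).z(y) ~ exp(-||x-y||^2 / (2 l^2))."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], D)) / lengthscale   # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, D)                     # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Z = rff(X)

# Compare the exact RBF kernel matrix with its feature-map approximation.
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
K_approx = Z @ Z.T
print(np.max(np.abs(K_exact - K_approx)) < 0.1)
```

    The approximation error shrinks as 1/√D, so D trades accuracy for speed; learning the frequencies instead of sampling them, as the paper's second method does, improves this trade-off.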

  13. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    Directory of Open Access Journals (Sweden)

    Laurynas Riliskis

    2014-03-01

    Full Text Available Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  14. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    Science.gov (United States)

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  15. Large scale floodplain mapping using a hydrogeomorphic method

    Science.gov (United States)

    Nardi, F.; Yan, K.; Di Baldassarre, G.; Grimaldi, S.

    2013-12-01

    Floodplain landforms are clearly distinguishable from adjacent hillslopes, being the trace of severe floods that shaped the terrain; digital topography therefore intrinsically contains floodplain information. This work presents the results of applying a DEM-based, large-scale hydrogeomorphic floodplain delineation method. The proposed approach, based on the integration of terrain analysis algorithms in a GIS framework, automatically identifies the potentially frequently saturated zones of riparian areas by analysing the maximum flood flow heights associated with stream network nodes relative to the surrounding uplands. Flow heights are estimated by imposing a Leopold's law that scales with the contributing area. The presented case studies include floodplain maps of large river basins covering the entire Italian territory, which are also used for calibrating the Leopold scaling parameters, as well as additional large international river basins with different climatic and geomorphic characteristics, laying the basis for using this approach for global floodplain mapping. The proposed tool could be useful for detecting hydrological change, since it can easily provide maps to verify the impact of floods on human activities and, vice versa, how human activities have changed floodplain areas at large scale.
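
    The hydrogeomorphic criterion can be sketched as follows. The coefficients a and b of the Leopold-type power law are hypothetical placeholders here, since the paper calibrates them on the Italian basins:

```python
import numpy as np

def leopold_flow_height(contributing_area_km2, a=0.5, b=0.3):
    """Leopold-type scaling: flood flow height grows as a power of the
    upstream contributing area (a, b are illustrative, uncalibrated)."""
    return a * contributing_area_km2 ** b

def floodplain_mask(dem, channel_elev, contributing_area_km2):
    """Flag cells whose elevation above the nearest stream node is below
    the scaled maximum flood flow height as potential floodplain."""
    h = leopold_flow_height(contributing_area_km2)
    return (dem - channel_elev) <= h

dem = np.array([10.0, 10.5, 11.2, 13.0, 16.0])  # terrain elevations (m)
channel = np.full(5, 10.0)                      # nearest stream-node elevation (m)
area = np.full(5, 250.0)                        # contributing area (km^2)
mask = floodplain_mask(dem, channel, area)      # only low-lying cells flagged
```

    In the full method this comparison runs over every DEM cell, with flow heights propagated from the stream network nodes.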

  16. Maestro: an orchestration framework for large-scale WSN simulations.

    Science.gov (United States)

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  17. Large-scale assessment of activity landscape feature probabilities of bioactive compounds.

    Science.gov (United States)

    Kayastha, Shilva; Dimova, Dilyana; Iyer, Preeti; Vogt, Martin; Bajorath, Jürgen

    2014-02-24

    Activity landscape representations integrate pairwise compound similarity and potency relationships and provide direct access to characteristic structure-activity relationship features in compound data sets. Because pairwise compound comparisons provide the foundation of activity landscape design, the assessment of specific landscape features such as activity cliffs has generally been confined to the level of compound pairs. A conditional probability-based approach has been applied herein to assign most probable activity landscape features to individual compounds. For example, for a given data set compound, it was determined whether it would preferentially engage in the formation of activity cliffs or other landscape features. In a large-scale effort, we have determined conditional activity landscape feature probabilities for more than 160,000 compounds with well-defined activity annotations contained in 427 different target-based data sets. These landscape feature probabilities provide a detailed view of how different activity landscape features are distributed over currently available bioactive compounds.

  18. Large-Scale Structures of Planetary Systems

    Science.gov (United States)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  19. Multimodal Solutions for Large Scale Evacuation

    Science.gov (United States)

    2009-12-30

    In this research, a multimodal transportation model was developed attending to the needs of emergency situations, and the solutions provided by the model could be used to moderate congestion during such events. The model incorporated features such as la...

  20. Multimodal solutions for large scale evacuations.

    Science.gov (United States)

    2009-12-30

    In this research, a multimodal transportation model was developed attending to the needs of emergency situations, and the solutions provided by the model could be used to moderate congestion during such events. The model incorporated features such a...

  1. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    Science.gov (United States)

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide a 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to a small fraction of that time; the reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
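
    The replicated-reconstruction-object idea can be sketched as follows: each worker accumulates its partial updates into a private replica of the reconstruction array, and the replicas are combined in a final reduction, trading memory for lock-free accumulation. This is a simplified stand-in for illustration, not Trace's actual kernels:

```python
import numpy as np

def parallel_backprojection(sinogram_chunks, n_rows):
    """Accumulate per-worker partial updates into private replicas, then
    reduce. Each chunk is a list of (row_index, weight) updates; in the
    real engine these would be projection contributions per worker."""
    replicas = []
    for chunk in sinogram_chunks:       # one replica per worker
        r = np.zeros(n_rows)
        for row, weight in chunk:       # worker-local, lock-free accumulation
            r[row] += weight
        replicas.append(r)
    return np.sum(replicas, axis=0)     # final reduction across replicas

chunks = [[(0, 1.0), (1, 2.0)], [(1, 3.0), (2, 4.0)]]
result = parallel_backprojection(chunks, n_rows=3)
```

    The replication avoids synchronizing workers on every update; the cost is one extra array per worker plus the reduction pass.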

  2. Synchronization control for large-scale network systems

    CERN Document Server

    Wu, Yuanqing; Su, Hongye; Shi, Peng; Wu, Zheng-Guang

    2017-01-01

    This book provides recent advances in the analysis and synthesis of large-scale network systems (LSNSs) with sampled-data communication and non-identical nodes. The first chapter presents an introduction to synchronization of LSNSs and algebraic graph theory, as well as an overview of recent developments in LSNSs with sampled-data control or output regulation control. The main text of the book is organized into two parts - Part I: LSNSs with sampled-data communication and Part II: LSNSs with non-identical nodes. The monograph describes the construction of the adaptive reference generators in the first stage and the robust regulators in the second stage. Examples are presented to show the effectiveness of the proposed design techniques.

  3. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate the inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical characterization of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
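
    A minimal sketch of an SLNR-style regularized precoder computed from estimated channels. The dimensions, noise variance, and regularization value are illustrative assumptions; the paper instead sets the regularization via its random matrix theory analysis:

```python
import numpy as np

def slnr_precoder(H_hat, noise_var, alpha):
    """SLNR-style regularized precoding from estimated channels.

    H_hat: (K, M) estimated channels for K users and M antennas.
    alpha: extra regularization for resilience to estimation error
           (illustrative; not the paper's derived value).
    """
    K, M = H_hat.shape
    G = H_hat.conj().T @ H_hat                      # sum of h_k h_k^H
    W = np.linalg.solve(G + (K * noise_var + alpha) * np.eye(M), H_hat.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-power beams

rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(2)
W = slnr_precoder(H, noise_var=0.1, alpha=0.5)
# Desired gain per user vs. leakage caused toward the other users.
desired = np.abs(np.einsum('km,mk->k', H, W)) ** 2
leak = np.array([sum(np.abs(H[j] @ W[:, k]) ** 2 for j in range(4) if j != k)
                 for k in range(4)])
```

    Increasing alpha trades interference suppression for robustness to channel estimation errors, which is the tension the paper's analysis resolves.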

  4. Probes of large-scale structure in the universe

    Science.gov (United States)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  5. Development of large-scale structure in the Universe

    CERN Document Server

    Ostriker, J P

    1991-01-01

    This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.

  6. Double inflation: A possible resolution of the large-scale structure problem

    Energy Technology Data Exchange (ETDEWEB)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of approximately 100 Mpc, while the small-scale structure over ≤10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs.

  7. Preventing large-scale controlled substance diversion from within the pharmacy.

    Science.gov (United States)

    Martin, Emory S; Dzierba, Steven H; Jones, David M

    2013-05-01

    Large-scale diversion of controlled substances (CS) from within a hospital or health system pharmacy is a rare but growing problem. It is the responsibility of pharmacy leadership to scrutinize control processes to expose weaknesses. This article reviews examples of large-scale diversion incidents and diversion techniques and provides practical strategies to stimulate enhanced CS security within the pharmacy staff. Large-scale diversion from within a pharmacy department can be averted by a pharmacist-in-charge who is informed and proactive in taking effective countermeasures.

  8. The Large Scale Structure: Polarization Aspects

    Indian Academy of Sciences (India)

    Polarized radio emission is detected at various scales in the Universe. In this document, I will briefly review our knowledge on polarized radio sources in galaxy clusters and at their outskirts, emphasizing the crucial information provided by the polarized signal on the origin and evolution of such sources. Successively, I will ...

  9. Stouffer's test in a large scale simultaneous hypothesis testing.

    Directory of Open Access Journals (Sweden)

    Sang Cheol Kim

    Full Text Available In microarray data analysis, we are often required to combine several dependent partial test results. Many suggestions for doing so have been made in previous literature; Tippett's test and Fisher's omnibus test are the most popular. Both tests have known null distributions when the partial tests are independent. However, for dependent tests, their null distributions (even asymptotic ones) are unknown and additional numerical procedures are required. In this paper, we revisit Stouffer's test based on z-scores and show its advantage over the two aforementioned methods in the analysis of large-scale microarray data. The combined statistic in Stouffer's test has a normal distribution with mean 0, owing to the normality of the z-scores. Its variance can be estimated from the scores of genes in the experiment without an additional numerical procedure. We numerically compared the errors of Stouffer's test and the two p-value based methods, Tippett's test and Fisher's omnibus test. We also analyzed our microarray data to find genes differentially expressed by non-genotoxic and genotoxic carcinogen compounds. Both the numerical study and the real application showed that Stouffer's test performed better than Tippett's method and Fisher's omnibus method, which require additional permutation steps.
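
    The combination rule can be sketched as follows. This is a minimal illustration: the empirical variance estimation from all genes that the paper describes for dependent tests is reduced here to a single parameter:

```python
from math import erfc, sqrt
import numpy as np

def stouffer_combined(z_scores, est_var=None):
    """Stouffer's combined statistic T = sum(z_i), normal under the null.

    For independent partial tests Var(T) = k; for dependent tests the
    variance would be estimated empirically (e.g. from the scores of all
    genes in the experiment) and passed as est_var."""
    z = np.asarray(z_scores, dtype=float)
    var = float(len(z)) if est_var is None else float(est_var)
    t = z.sum() / sqrt(var)
    p = erfc(abs(t) / sqrt(2.0))  # two-sided p-value under N(0, 1)
    return t, p

t, p = stouffer_combined([1.2, 0.8, 1.5])  # three independent partial tests
```

    Unlike Fisher's or Tippett's statistics, the sum of z-scores stays normal under dependence; only its variance changes, which is why a plug-in variance estimate suffices.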

  10. Episodic memory in aspects of large-scale brain networks

    Directory of Open Access Journals (Sweden)

    Woorim eJeong

    2015-08-01

    Full Text Available Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network. Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network. Altered patterns of functional connectivity among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment.

  11. Episodic memory in aspects of large-scale brain networks

    Science.gov (United States)

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  12. Large-Scale Road Network Vulnerability Analysis

    OpenAIRE

    Jenelius, Erik

    2010-01-01

    Disruptions in the transport system can have severe impacts for affected individuals, businesses and the society as a whole. In this research, vulnerability is seen as the risk of unplanned system disruptions, with a focus on large, rare events. Vulnerability analysis aims to provide decision support regarding preventive and restorative actions, ideally as an integrated part of the planning process. The thesis specifically develops the methodology for vulnerability analysis of road networks an...

  13. Large-scale tattoo image retrieval

    OpenAIRE

    Manger, Daniel

    2012-01-01

    In current biometric-based identification systems, tattoos and other body modifications have shown to provide a useful source of information. Besides manual category label assignment, approaches utilizing state-of-the-art content-based image retrieval (CBIR) techniques have become increasingly popular. While local feature-based similarities of tattoo images achieve excellent retrieval accuracy, scalability to large image databases can be addressed with the popular bag-of-word model. In this p...

  14. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of challenges that affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission lines expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate methods for expanding transmission line capacity to ensure optimal electric grid operation. The expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complication and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission lines expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding a new transmission capacity will help the system to relieve the transmission system congestion, create

  15. Large Scale Implementations for Twitter Sentiment Classification

    Directory of Open Access Journals (Sweden)

    Andreas Kanavos

    2017-03-01

    Full Text Available Sentiment Analysis on Twitter Data is indeed a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to gauge the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision-making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments, so they can only process at most a few thousand tweets. Such a sample is not representative for defining the sentiment polarity towards a topic, given the massive number of tweets published daily. In this work, we develop two systems: the first in the MapReduce and the second in the Apache Spark framework for programming with Big Data. The algorithm exploits all hashtags and emoticons inside a tweet as sentiment labels, and proceeds to a classification method of diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques and utilizes Apache Spark’s machine learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in Sentiment Analysis as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. Finally, the proposed system was trained and validated with real data crawled by Twitter, and, through an extensive experimental evaluation, we prove that our solution is efficient, robust and scalable while confirming the quality of our sentiment identification.
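
    The Bloom filter mentioned above can be sketched in a few lines. The bit-array size, hash count, and hash construction below are illustrative choices, not the paper's configuration:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a compact set with no false negatives and a
    tunable false-positive rate, used to shrink intermediate data."""
    def __init__(self, n_bits=1024, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
for label in ["#happy", "#sad", ":)"]:  # hashtag/emoticon sentiment labels
    bf.add(label)
```

    Membership queries never miss an inserted item; occasional false positives are the price of the compact storage.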

  16. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Co...

  17. Large Scale Experiments on Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due......-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame...

  18. Health Terrain: Visualizing Large Scale Health Data

    Science.gov (United States)

    2015-12-01

    Natural language processing (NLP) provides a means to... NLP techniques were carried out to process 325,791 clinical notes that contain patient discharge... format and sentence splitting. Advanced-level NLP was applied in the form of named entity...

  19. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula

    2008-01-01

    The rapidly increasing amount of publicly available knowledge in biology and chemistry enables scientists to revisit many open problems by the systematic integration and analysis of heterogeneous novel data. The integration of relevant data does not only allow analyses at the network level, but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins to bind the same ligand as a function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action.

  20. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four-year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2... is required to understand the structures. In practice, however, the applicability and costs of the methods are also crucial. The SkyTEM method is very cost-effective in providing dense data sets, and it is therefore recommendable to use this method initially in mapping campaigns. For more detailed structural information, seismic data can profitably be acquired in certain areas of interest, preferably selected on the basis of the SkyTEM data. In areas where extremely detailed information about the near-surface is required, geoelectrical data (resistivity information) and ground penetrating radar data (structural...

  1. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  2. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)]

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
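
    As a minimal illustration of detecting a performance fault against a healthy baseline (a simple z-score test with made-up iteration timings, not the PHM framework's actual diagnostics):

    ```python
    import statistics

    def detect_anomalies(baseline, samples, z_thresh=3.0):
        """Flag sample indices whose z-score vs. a healthy baseline exceeds z_thresh."""
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        return [i for i, x in enumerate(samples)
                if abs(x - mu) > z_thresh * sigma]

    # Per-iteration times (s) from a healthy run, then a run where one
    # subsystem degrades (e.g. contention for a shared resource):
    baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97]
    observed = [1.01, 0.99, 1.74, 1.00]
    print(detect_anomalies(baseline, observed))  # [2]
    ```

    A real monitoring pipeline would also have to isolate which subsystem caused the flagged sample, which is where the framework's fault-isolation capabilities come in.
    
    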

  3. Large scale in vivo recordings to study neuronal biophysics.

    Science.gov (United States)

    Giocomo, Lisa M

    2015-06-01

    Over the last several years, technological advances have enabled researchers to more readily observe single-cell membrane biophysics in awake, behaving animals. Studies utilizing these technologies have provided important insights into the mechanisms generating functional neural codes in both sensory and non-sensory cortical circuits. Crucial for a deeper understanding of how membrane biophysics control circuit dynamics, however, is a continued effort to move toward large-scale studies of membrane biophysics, in terms of the numbers of neurons and ion channels examined. Future work faces a number of theoretical and technical challenges on this front, but recent technological developments hold great promise for a larger-scale understanding of how membrane biophysics contribute to circuit coding and computation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. USE OF RFID AT LARGE-SCALE EVENTS

    Directory of Open Access Journals (Sweden)

    Yuusuke KAWAKITA

    2005-01-01

    Full Text Available Radio Frequency Identification (RFID) devices and related technologies have received a great deal of attention for their ability to perform non-contact object identification. Systems incorporating RFID have been evaluated from a variety of perspectives. The authors constructed a networked RFID system to support event management at NetWorld+Interop 2004 Tokyo, an event that received 150,000 visitors. The system used multiple RFID readers installed at the venue and RFID tags carried by each visitor to provide a platform for running various management and visitor support applications. This paper presents the RFID readability rates measured in this field trial. It further addresses the applicability of RFID systems to visitor management, a problematic aspect of large-scale events.

  5. Measuring large-scale social networks with high resolution.

    Science.gov (United States)

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure, in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  6. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

    Full Text Available This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure, in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  7. Tabulation Hashing for Large-Scale Data Processing

    DEFF Research Database (Denmark)

    Dahlgaard, Søren

    The past decade has brought with it an immense amount of data, from large volumes of text to network traffic data. Working with such large-scale data has become an increasingly important topic, giving rise to many important problems and influential solutions. One common denominator between many popular algorithms and data structures for tackling these problems is randomization implemented with hash functions. A common practice in the analysis of such randomized algorithms is to work under the abstract assumption that truly random unit-cost hash functions are freely available without concern … employed as an “inner-loop” operation, and evaluation time is thus of utmost importance. This thesis seeks to bridge this gap in the theory by providing efficient families of hash functions with strong theoretical guarantees for several influential problems in the large-scale data regime. This is done …
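
    A minimal sketch of simple tabulation hashing, the kind of practically efficient family studied in this line of work: the key is split into bytes, each byte indexes its own table of random words, and the entries are XORed. The 32-bit key width and table sizes here are illustrative.

    ```python
    import random

    random.seed(0)
    # One lookup table of random 32-bit words per key byte (simple tabulation).
    TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

    def tab_hash(key):
        """Simple tabulation hash of a 32-bit key: XOR one table entry per byte."""
        h = 0
        for i in range(4):
            h ^= TABLES[i][(key >> (8 * i)) & 0xFF]
        return h

    print(tab_hash(42) == tab_hash(42))             # True: deterministic per table choice
    print(len({tab_hash(k) for k in range(1000)}))  # distinct values; collisions are rare
    ```

    Evaluation is just four table lookups and XORs, which is why such families are attractive as "inner-loop" operations, while still admitting strong theoretical guarantees.
    
    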

  8. Large-scale cryopumping for controlled fusion

    Energy Technology Data Exchange (ETDEWEB)

    Pittenger, L.C.

    1977-07-25

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed.

  9. Capabilities of Large-scale Particle Image Velocimetry to characterize shallow free-surface flows

    Science.gov (United States)

    Muste, M.; Hauet, A.; Fujita, I.; Legout, C.; Ho, H.-C.

    2014-08-01

    Irrespective of their spatial extent, free-surface shallow flows are challenging measurement environments for most instruments due to the relatively small depths and velocities typically associated with these flows. A promising candidate for enabling measurements in such conditions is Large-scale Particle Image Velocimetry (LSPIV). This technique uses a non-intrusive approach to measure two-dimensional surface velocity fields with high spatial and temporal resolutions. Although there are many publications documenting the successful use of LSPIV in various laboratory and field open-channel flow situations, its performance has not been equally substantiated for measurement in shallow flows. This paper aims at filling this gap by demonstrating the capabilities of LSPIV to: (a) accurately evaluate complex flow patterns in shallow channel flows; and (b) estimate depth in shallow flows using exclusively LSPIV measurements. The demonstration is provided by LSPIV measurements in three shallow-flow laboratory situations with flow depths ranging from 0.05 to 0.31 m. The obtained measurements illustrate the flexibility and reliability of LSPIV in measuring velocities in shallow and low-velocity (near-zero) flows. Moreover, the technique is capable of evaluating and mapping velocity-derived quantities that are difficult to document with alternative measurement techniques (e.g. vorticity and shear stress distributions and mapping of large-scale structures in the body of water).
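
    PIV-family techniques recover velocity by finding the displacement that maximizes the cross-correlation of image patches between successive frames. A one-dimensional toy analogue (the intensity profiles are made up; real LSPIV correlates 2-D interrogation windows) can be sketched as:

    ```python
    def best_shift(a, b, max_shift):
        """Displacement (in samples) that maximizes correlation of b against a."""
        def corr(shift):
            return sum(a[i] * b[i + shift]
                       for i in range(len(a))
                       if 0 <= i + shift < len(b))
        return max(range(-max_shift, max_shift + 1), key=corr)

    # A surface tracer pattern advected 3 samples downstream between "frames":
    frame1 = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]
    frame2 = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0]
    dx = best_shift(frame1, frame2, 4)
    print(dx)  # 3
    ```

    Dividing the recovered displacement by the inter-frame interval and the pixel-to-metre scale yields the surface velocity for that interrogation window.
    
    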

  10. Inflation Physics from the Cosmic Microwave Background and Large Scale Structure

    Science.gov (United States)

    Abazajian, K.N.; Arnold,K.; Austermann, J.; Benson, B.A.; Bischoff, C.; Bock, J.; Bond, J.R.; Borrill, J.; Buder, I.; Burke, D.L.; hide

    2013-01-01

    Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments---the theory of cosmic inflation---and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5-sigma measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.

  11. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is greater than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic evacuation decision making at large-scale activities.
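
    A minimal standard PSO (without the paper's electromagnetism-like mechanism, and with a toy quadratic objective standing in for the lower-level evacuation-time model) shows the basic velocity/position update the improved algorithm builds on:

    ```python
    import random

    random.seed(1)

    def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        """Minimal particle swarm optimization of f over [lo, hi]^dim."""
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=f)[:]
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pos[i]) < f(gbest):
                        gbest = pos[i][:]
        return gbest

    def sphere(x):  # toy stand-in for the lower-level objective
        return sum(v * v for v in x)

    best = pso(sphere, dim=2)
    print(sphere(best) < 1e-3)  # True: swarm converges near the optimum
    ```

    In the bilevel setting, each particle would encode an upper-level decision, and `f` would invoke the lower-level shortest-evacuation-time model rather than a closed-form function.
    
    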

  12. Inflation physics from the cosmic microwave background and large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Buder, I.; Burke, D. L.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Crill, B. P.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Doré, O.; Dunkley, J.; Feng, J. L.; Fraisse, A.; Gallicchio, J.; Giddings, S. B.; Green, D.; Halverson, N. W.; Hanany, S.; Hanson, D.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Horowitz, G.; Hu, W.; Hubmayr, J.; Irwin, K.; Jackson, M.; Jones, W. C.; Kallosh, R.; Kamionkowski, M.; Keating, B.; Keisler, R.; Kinney, W.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C. -L.; Kusaka, A.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linde, A.; Linder, E.; Lubin, P.; Maldacena, J.; Martinec, E.; McMahon, J.; Miller, A.; Mukhanov, V.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Senatore, L.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Vieregg, A. G.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L. K.; Yoon, K. W.; Zahn, O.; Zaldarriaga, M.

    2015-03-01

    Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments—the theory of cosmic inflation—and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B -mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.

  13. Large Scale Observatories for Changing Cold Regions - Recent Progress and Future Vision

    Science.gov (United States)

    Wheater, H. S.; Pomeroy, J. W.; Carey, S. K.; DeBeer, C. M.

    2016-12-01

    Observatories are at the core of hydrological science and a critical resource for the detection and analysis of environmental change. The combination of multiple pressures on the water environment and new scientific opportunities provides a context where a broader vision is urgently needed. Human activities are increasingly affecting land and water management at multiple scales, so our observatories now need to more fully include the human dimensions of water, including their integration across jurisdictional boundaries and at large basin scales. Large scales are also needed to diagnose and predict impacts of climate change at regional and continental scales, and to address land-water-atmosphere interactions and feedbacks. We argue for the need to build on the notable past successes of the World Climate Research Programme and move forward to a new era of globally distributed large-scale observatories. This paper introduces two such observatories in rapidly warming western Canada: the 405,000 km2 Saskatchewan and the 1.8 million km2 Mackenzie river basins. We review progress in these multi-scale observatories, including the use of point- and small-basin-scale observatory sites to observe and diagnose complex regional patterns of hydrological change. Building on new opportunities for observational systems and data assimilation, we present a vision for a pan-Canadian observing system to support the science needed for the management of future societal risk from extreme events and environmental change.

  14. Ultra-large-scale electronic structure theory and numerical algorithm

    OpenAIRE

    Hoshi, Takeo

    2008-01-01

    This article is composed of two parts. In the first part (Sec. 1), the ultra-large-scale electronic structure theory is reviewed with respect to (i) its fundamental numerical algorithm and (ii) its role in nano-material science. The second part (Sec. 2) is devoted to the mathematical foundation of the large-scale electronic structure theory and its numerical aspects.

  15. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    were the main reasons for the failure of large-scale jatropha plantations in Ethiopia. The findings in Chapter 3 show that when the use of family labour is combined with easy access to credit and technology, outgrowers on average achieve higher productivity than that obtained on large-scale plantations...

  16. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    Attention to climate change impacts on the world's coastlines has established large-scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large-scale coastal evolution, are discussed.

  17. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Large-scale Agricultural Land Acquisitions in West Africa. As agriculture becomes more strategically important in the world, large-scale land acquisition - or "land grabbing" - is a growing concern for West Africa. Recent increases in commodity prices have led some governments and private investors to purchase or lease ...

  18. Large scale Brownian dynamics of confined suspensions of rigid particles.

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A; Donev, Aleksandar

    2017-12-28

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
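
    The random-finite-difference idea behind the drift term can be sketched in one dimension: averaged over random directions, a centered difference of the mobility recovers the stochastic drift kT M'(x) without analytic derivatives or inverse square roots. The scalar mobility function below is a hypothetical stand-in for the RPY mobility matrix.

    ```python
    import random

    random.seed(2)
    kT, delta = 1.0, 1e-4

    def mobility(x):
        """Toy position-dependent mobility (hypothetical stand-in for RPY)."""
        return x / (1.0 + x)

    def rfd_drift(x):
        """Random-finite-difference estimate of the stochastic drift kT * M'(x)."""
        w = random.gauss(0.0, 1.0)
        return (kT / delta) * (mobility(x + 0.5 * delta * w)
                               - mobility(x - 0.5 * delta * w)) * w

    # Averaged over many random directions, the RFD term recovers kT * M'(x):
    x = 1.0
    est = sum(rfd_drift(x) for _ in range(20000)) / 20000
    print(abs(est - 0.25) < 0.02)  # M'(1) = 1/4; True
    ```

    In the actual schemes, one such RFD evaluation per time step replaces the extra mobility solves that a Fixman midpoint correction would require.
    
    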

  19. Large scale Brownian dynamics of confined suspensions of rigid particles

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose

  20. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

    Full Text Available The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
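
    The dominant singular value that quantifies optimal perturbation growth can be computed by power iteration on A^T A, where A is the linearized propagator. A pure-Python sketch with an illustrative 2x2 shearing propagator (not a flow model from the paper):

    ```python
    import math

    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    def dominant_singular_value(A, iters=200):
        """Top singular value of A via power iteration on A^T A."""
        At = [list(col) for col in zip(*A)]
        v = [1.0] * len(A[0])
        for _ in range(iters):
            w = matvec(At, matvec(A, v))
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
        w = matvec(A, v)               # v is now the optimal perturbation
        return math.sqrt(sum(x * x for x in w))

    # A non-normal (shearing) propagator amplifies the optimal perturbation
    # even though both of its eigenvalues equal 1:
    A = [[1.0, 2.0],
         [0.0, 1.0]]
    print(round(dominant_singular_value(A), 3))  # 2.414, i.e. 1 + sqrt(2)
    ```

    Large dominant singular values correspond to rapidly diverging nearby trajectories and hence low predictability, which is the diagnostic tracked across the flow regimes in the study.
    
    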

  1. Superconductivity for Large Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    R. Fair; W. Stautner; M. Douglass; R. Rajput-Ghoshal; M. Moscinski; P. Riley; D. Wagner; J. Kim; S. Hou; F. Lopez; K. Haran; J. Bray; T. Laskaris; J. Rochford; R. Duckworth

    2012-10-12

    A conceptual design has been completed for a 10MW superconducting direct drive wind turbine generator employing low temperature superconductors for the field winding. Key technology building blocks from the GE Wind and GE Healthcare businesses have been transferred across to the design of this concept machine. Wherever possible, conventional technology and production techniques have been used in order to support the case for commercialization of such a machine. Appendices A and B provide further details of the layout of the machine and the complete specification table for the concept design. Phase 1 of the program has allowed us to understand the trade-offs between the various sub-systems of such a generator and its integration with a wind turbine. A Failure Modes and Effects Analysis (FMEA) and a Technology Readiness Level (TRL) analysis have been completed resulting in the identification of high risk components within the design. The design has been analyzed from a commercial and economic point of view and Cost of Energy (COE) calculations have been carried out with the potential to reduce COE by up to 18% when compared with a permanent magnet direct drive 5MW baseline machine, resulting in a potential COE of 0.075 $/kWh. Finally, a top-level commercialization plan has been proposed to enable this technology to be transitioned to full volume production. The main body of this report will present the design processes employed and the main findings and conclusions.
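
    Under the assumption that the quoted 18% reduction is measured relative to the permanent-magnet baseline machine, the implied baseline COE can be checked with simple arithmetic:

    ```python
    # COE arithmetic check (assumes the 18% reduction is relative to baseline).
    coe_super = 0.075                  # $/kWh, concept-design estimate
    baseline = coe_super / (1 - 0.18)  # back out the implied baseline COE
    print(round(baseline, 4))  # 0.0915 $/kWh for the 5 MW PM baseline
    ```
    
    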

  2. Large-Scale Pattern Discovery in Music

    Science.gov (United States)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split into two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition, which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
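
    One reason a 2-D Fourier magnitude of a chroma patch is attractive for cover songs is that it is invariant to circular shifts, so transposing a song to another key leaves the feature unchanged. A toy verification with an illustrative 4x4 "chroma" patch (real chromagrams have 12 pitch rows; the pure-Python DFT here is O(n^4) and only for demonstration):

    ```python
    import cmath

    def dft2_mag(patch):
        """Magnitude of the 2-D DFT of a small matrix."""
        R, C = len(patch), len(patch[0])
        return [[abs(sum(patch[r][c]
                         * cmath.exp(-2j * cmath.pi * (u * r / R + v * c / C))
                         for r in range(R) for c in range(C)))
                 for v in range(C)]
                for u in range(R)]

    def transpose_up(patch, k):
        """Circularly shift rows, i.e. transpose the patch by k pitch steps."""
        return patch[-k:] + patch[:-k]

    patch = [[1, 0, 0, 2], [0, 3, 0, 0], [0, 0, 1, 0], [2, 0, 0, 1]]
    a, b = dft2_mag(patch), dft2_mag(transpose_up(patch, 2))
    same = all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(4) for j in range(4))
    print(same)  # True: the magnitude feature is transposition-invariant
    ```

    By the shift theorem, a circular shift only multiplies each DFT coefficient by a unit-modulus phase, so the magnitudes match exactly.
    
    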

  3. Learning from large-scale quality improvement through comparisons.

    Science.gov (United States)

    Ovretveit, John; Klazinga, Niek

    2012-10-01

    To discover lessons from 10 national health and social care quality programmes in the Netherlands. A mixed-methods comparison using a 'quantitative summarization of evidence for systematic comparison'. Each research team assessed whether there was evidence from their evaluation to support or refute 17 hypotheses about successful implementation of quality programmes. The programme managers carried out a similar assessment. Their assessments were represented as scores, which made it possible to carry out a cross-case analysis to assess factors affecting the success of large-scale quality programmes. The participants were the researchers who evaluated each of the programmes and the leaders who organized each programme. The settings were health and social care service organizations and the national organizations which led the quality improvement programmes. This study did not make an intervention but compared experiences and evaluations of interventions carried out by national organizations with health and social care service organizations to help these organizations to improve their services. The outcomes studied were the success of the national programmes and the learning achieved by the programme organizations and care service delivery organizations. The method provided a way to summarize and compare complex information. Common factors which appeared to influence success in implementation included understanding of political processes and leaders' influencing skills, as well as technical skills to manage projects and apply improvement and change methods. Others could use a similar method to make a fast, broad-level, but systematic comparison across reports of improvements or programmes. Descriptions, and then comparisons, of the programmes reveal common factors which appeared to influence success in implementation. There were groups of factors which appeared to be more important for the success of certain types of programmes. It is possible that these factors may also be important for the success of large-scale improvement programmes in

  4. An effective method of large scale ontology matching.

    Science.gov (United States)

    Diallo, Gayo

    2014-01-01

    We are currently facing a proliferation of heterogeneous biomedical data sources accessible through various knowledge-based applications. These data are annotated by increasingly extensive and widely disseminated knowledge organisation systems, ranging from simple terminologies and structured vocabularies to formal ontologies. In order to solve the interoperability issue, which arises due to the heterogeneity of these ontologies, an alignment task is usually performed. However, while significant effort has been made to provide tools that automatically align small ontologies containing hundreds or thousands of entities, little attention has been paid to the matching of large ontologies in the life sciences domain. We have designed and implemented ServOMap, an effective method for large-scale ontology matching. It is a fast and efficient high-precision system able to perform matching of input ontologies containing hundreds of thousands of entities. The system, which was included in the 2012 and 2013 editions of the Ontology Alignment Evaluation Initiative campaign, performed very well and was ranked among the top systems for large-ontology matching. Our approach to large-scale ontology matching relies on Information Retrieval (IR) techniques and the combination of lexical and machine-learning contextual similarity computing for the generation of candidate mappings. It is particularly well adapted to the life sciences domain, as many ontologies in this domain benefit from synonym terms taken from the Unified Medical Language System that can be used by our IR strategy. The ServOMap system is able to deal with hundreds of thousands of entities with efficient computation time.
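
    The flavour of lexical candidate-mapping generation can be sketched with two hypothetical mini-ontologies and a string-similarity threshold; this is an illustrative stand-in, not the ServOMap implementation, and the entity IDs, labels, and threshold are invented.

    ```python
    from difflib import SequenceMatcher

    def normalize(label, synonyms=None):
        """Lowercase a label and expand it with its (e.g. UMLS-derived) synonyms."""
        names = {label.lower()}
        names.update(s.lower() for s in (synonyms or []))
        return names

    def candidate_mappings(onto_a, onto_b, threshold=0.9):
        """Pair entities whose best label/synonym similarity exceeds threshold."""
        pairs = []
        for ea, (la, sa) in onto_a.items():
            for eb, (lb, sb) in onto_b.items():
                score = max(SequenceMatcher(None, x, y).ratio()
                            for x in normalize(la, sa) for y in normalize(lb, sb))
                if score >= threshold:
                    pairs.append((ea, eb, round(score, 2)))
        return pairs

    # Hypothetical mini-ontologies: entity ID -> (preferred label, synonyms)
    a = {"A:001": ("myocardial infarction", ["heart attack"]),
         "A:002": ("hepatitis", [])}
    b = {"B:101": ("heart attack", []),
         "B:102": ("liver inflammation", ["hepatitis"])}
    print(candidate_mappings(a, b))
    # [('A:001', 'B:101', 1.0), ('A:002', 'B:102', 1.0)]
    ```

    Synonym expansion is what lets the second pair match despite disjoint preferred labels; a full system would index labels for sub-quadratic candidate retrieval and add contextual similarity on top.
    
    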

  5. Workshop on the Federal Role in the Commercialization of Large Scale Windmill Technology (summary and papers)

    Science.gov (United States)

    Lerner, J. I.; Miller, G.

    Large-scale wind system and windmill technology, and prospects for commercial applications, are discussed. Barriers that may affect the commercial viability of large-scale windmill systems are identified, including the relatively poor financial condition of much of the utility industry, which effectively prevents many utilities from investing substantially in any new projects. The potential market addressed by the Federal program in large-scale windmill systems is examined. Some of the factors that may limit the degree of market penetration for wind energy systems are: costs of competing fossil and nuclear fuels and technologies; the rate of acceptance of new technologies; and competition from other solar technologies, including biomass, solar thermal, and photovoltaic systems. Workshop participants agreed that existing Federal legislation provides significant incentives for the commercialization of large-scale wind machines.

  6. Planck 2013 results. XVII. Gravitational lensing by large-scale structure

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Basak, S.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Dechelette, T.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Ho, S.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Laureijs, R.J.; Lavabre, A.; Lawrence, C.R.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, 
A.; Migliaccio, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Pullen, A.R.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    On the arcminute angular scales probed by Planck, the CMB anisotropies are gently perturbed by gravitational lensing. Here we present a detailed study of this effect, detecting lensing independently in the 100, 143, and 217GHz frequency bands with an overall significance of greater than 25sigma. We use the temperature-gradient correlations induced by lensing to reconstruct a (noisy) map of the CMB lensing potential, which provides an integrated measure of the mass distribution back to the CMB last-scattering surface. Our lensing potential map is significantly correlated with other tracers of mass, a fact which we demonstrate using several representative tracers of large-scale structure. We estimate the power spectrum of the lensing potential, finding generally good agreement with expectations from the best-fitting LCDM model for the Planck temperature power spectrum, showing that this measurement at z=1100 correctly predicts the properties of the lower-redshift, later-time structures which source the lensing ...

  7. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    Science.gov (United States)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large-scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem, focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summary: Program title: NWChem. Catalogue identifier: AEGI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Open Source Educational Community License. No. of lines in distributed program, including test data, etc.: 11 709 543. No. of bytes in distributed program, including test data, etc.: 680 696 106. Distribution format: tar.gz. Programming language: Fortran 77, C. Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines. Operating system: Linux, OS X, Windows. Has the code been vectorised or parallelized?: Code is parallelized. Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13. Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approaches, and coupled cluster expansions. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations. Additional comments: Full ...

  8. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)

    2017-12-04

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
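
    The block-by-block idea above can be sketched with a one-level Haar transform (illustrative only; the model described uses full wavelet hierarchies and real saliency measures): salient blocks keep all coefficients losslessly, while other blocks drop small detail coefficients.

```python
# Toy saliency-driven block compression with a one-level Haar transform.
# Block indices flagged "salient" (e.g. a turbine wake) keep every detail
# coefficient; other blocks zero out details below a tolerance (lossy).

def haar_forward(block):
    """One-level Haar transform: pairwise averages and details."""
    avgs = [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]
    dets = [(block[i] - block[i + 1]) / 2 for i in range(0, len(block), 2)]
    return avgs, dets

def haar_inverse(avgs, dets):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out

def compress(data, block_size, salient, tol=0.0):
    """Transform each block; drop small details in non-salient blocks."""
    blocks = []
    for i in range(0, len(data), block_size):
        avgs, dets = haar_forward(data[i:i + block_size])
        if (i // block_size) not in salient:
            dets = [d if abs(d) > tol else 0.0 for d in dets]
        blocks.append((avgs, dets))
    return blocks

data = [1.0, 1.2, 5.0, 5.1, 0.0, 8.0, 2.0, 2.0]
# Block 1 (values 0.0, 8.0, 2.0, 2.0) plays the role of the wake region.
blocks = compress(data, 4, salient={1}, tol=0.5)
restored = [x for avgs, dets in blocks for x in haar_inverse(avgs, dets)]
print(restored)  # block 0 is smoothed; block 1 is reconstructed exactly
```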

  9. Dominant modes of variability in large-scale Birkeland currents

    Science.gov (United States)

    Cousins, E. D. P.; Matsuo, Tomoko; Richmond, A. D.; Anderson, B. J.

    2015-08-01

    Properties of variability in large-scale Birkeland currents are investigated through empirical orthogonal function (EOF) analysis of 1 week of data from the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). Mean distributions and dominant modes of variability are identified for both the Northern and Southern Hemispheres. Differences in the results from the two hemispheres are observed, which are attributed to seasonal differences in conductivity (the study period occurred near solstice). A universal mean and set of dominant modes of variability are obtained through combining the hemispheric results, and it is found that the mean and first three modes of variability (EOFs) account for 38% of the total observed squared magnetic perturbations (δB²) from both hemispheres. The mean distribution represents a standard Region 1/Region 2 (R1/R2) morphology of currents, and EOF 1 captures the strengthening/weakening of the average distribution and is well correlated with the north-south component of the interplanetary magnetic field (IMF). EOF 2 captures a mixture of effects including the expansion/contraction and rotation of the R1/R2 currents; this mode correlates only weakly with possible external driving parameters. EOF 3 captures changes in the morphology of the currents in the dayside cusp region and is well correlated with the dawn-dusk component of the IMF. The higher-order EOFs capture more complex, smaller-scale variations in the Birkeland currents and appear generally uncorrelated with external driving parameters. The results of the EOF analysis described here are used for describing error covariance in a data assimilation procedure utilizing AMPERE data, as described in a companion paper.
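
    EOF analysis of this kind reduces to a singular value decomposition of the mean-removed data matrix; a minimal sketch on synthetic data (not AMPERE data):

```python
# EOF (empirical orthogonal function) analysis sketch: remove the time mean,
# SVD the anomaly matrix; rows of V^T are the spatial modes (EOF 1, 2, ...)
# and squared singular values give the variance fraction each mode explains.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
space = np.linspace(0, 1, 50)
# Synthetic field: one dominant spatial mode varying in time, plus noise.
data = (np.outer(np.sin(3 * t), np.sin(np.pi * space))
        + 0.05 * rng.standard_normal((200, 50)))

mean_field = data.mean(axis=0)          # analogue of the mean R1/R2 pattern
anomalies = data - mean_field
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = vt                               # rows: spatial modes (EOF 1, 2, ...)
explained = s**2 / np.sum(s**2)         # variance fraction per mode
print(f"EOF 1 explains {explained[0]:.0%} of the anomaly variance")
```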

  10. Ecogrid EU - a large scale smart grids demonstration of real time market-based integration of numerous small DER and DR

    DEFF Research Database (Denmark)

    Ding, Yi; Nyeng, Preben; Ostergaard, Jacob

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, which is a large-scale demonstration project on the Danish island of Bornholm. It provides Europe a fast-track evolution towards smart grid dissemination and deployment in the distribution network. The objective of Ecogrid EU is to illustrate th... customers will be equipped with demand response devices with smart controllers and smart meters, allowing them to respond to real-time prices based on their pre-programmed demand-response preferences.

  11. Large Scale Applications of HTS in New Zealand

    Science.gov (United States)

    Wimbush, Stuart C.

    New Zealand has one of the longest-running and most consistently funded (relative to GDP) programmes in high temperature superconductor (HTS) development and application worldwide. As a consequence, it has a sustained breadth of involvement in HTS technology development stretching from the materials discovery right through to burgeoning commercial exploitation. This review paper outlines the present large scale projects of the research team at the newly-established Robinson Research Institute of Victoria University of Wellington. These include the construction and grid-based testing of a three-phase 1 MVA 2G HTS distribution transformer utilizing Roebel cable for its high-current secondary windings and the development of a cryogen-free conduction-cooled 1.5 T YBCO-based human extremity magnetic resonance imaging system. Ongoing activities supporting applications development such as low-temperature full-current characterization of commercial superconducting wires and the implementation of inductive flux-pump technologies for efficient brushless coil excitation in superconducting magnets and rotating machines are also described.

  12. Large Scale Software Building with CMake in ATLAS

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  13. Large-scale model of mammalian thalamocortical systems.

    Science.gov (United States)

    Izhikevich, Eugene M; Edelman, Gerald M

    2008-03-04

    The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
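
    The spiking neurons in such models follow Izhikevich's two-variable equations; a single-neuron sketch with regular-spiking parameters and simple Euler integration (illustrative only, not the multicompartmental model itself):

```python
# Izhikevich's simple spiking-neuron model:
#   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
#   on v >= 30 mV: record a spike and reset v <- c, u <- u + d.
# Parameters a, b, c, d below are the standard regular-spiking values.

def simulate(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, t_max=1000.0, dt=0.5):
    """Euler-integrate one neuron for t_max ms; return spike times (ms)."""
    v, u = c, b * c
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(step * dt)
            v, u = c, u + d          # after-spike reset
    return spikes

spikes = simulate()
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

    With a constant input current of I = 10 this parameter set has no resting fixed point, so the neuron fires tonically, the behaviour the regular-spiking cortical cells in the model reproduce.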

  14. LARGE-SCALE INDICATIVE MAPPING OF SOIL RUNOFF

    Directory of Open Access Journals (Sweden)

    E. Panidi

    2017-11-01

    In our study we estimate relationships between quantitative parameters of relief, the soil runoff regime, and the spatial distribution of radioactive pollutants in the soil. The study is conducted on a test arable area located in the basin of the upper Oka River (Orel region, Russia). Previously we collected a rich set of soil samples, which make it possible to investigate the redistribution of Chernobyl-origin cesium-137 in soil material and, as a consequence, the soil runoff magnitude at the sampling points. Here we describe and discuss the technique applied to large-scale mapping of soil runoff. The technique is based upon cesium-137 radioactivity measurements in different relief structures. Key stages are the allocation of soil sampling points (we used very high resolution space imagery as supporting data); soil sample collection and analysis; calibration of the mathematical model (using the estimated background value of cesium-137 radioactivity); and automated compilation of a predictive map of the studied territory (a digital elevation model is used for this purpose, and cesium-137 radioactivity can be predicted using quantitative parameters of the relief). The maps can be used as supporting data for precision agriculture and for recultivation or melioration purposes.

  15. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
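
    The core idea behind FastBit-style indexing is the bitmap index: one bitmap per value bin, so a range query becomes bitwise ORs of precomputed masks instead of a scan of the raw values. A toy sketch (not the FastQuery API; data and bins are invented):

```python
# Toy bitmap index: each bin gets an integer bitmask whose bit i is set when
# row i falls in that bin. A range query ORs the bitmaps of the covered bins.

def build_bitmap_index(values, bin_edges):
    """Return one bitmask per bin [edge_b, edge_b+1)."""
    bitmaps = [0] * (len(bin_edges) - 1)
    for i, v in enumerate(values):
        for b in range(len(bin_edges) - 1):
            if bin_edges[b] <= v < bin_edges[b + 1]:
                bitmaps[b] |= 1 << i
                break
    return bitmaps

def query_range(bitmaps, bin_edges, lo, hi):
    """Row indices whose bin lies fully inside [lo, hi)."""
    mask = 0
    for b in range(len(bitmaps)):
        if bin_edges[b] >= lo and bin_edges[b + 1] <= hi:
            mask |= bitmaps[b]
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

energies = [0.5, 2.3, 7.7, 4.1, 9.9, 3.3]   # hypothetical particle energies
edges = [0, 2, 4, 6, 8, 10]
bitmaps = build_bitmap_index(energies, edges)
# "Interesting particles": energy in [4, 10) -> rows 2, 3, 4
print(query_range(bitmaps, edges, 4, 10))
```

    Real bitmap indexes add compression (e.g. word-aligned run-length encoding) and handle query bounds that fall inside a bin, but the OR/AND structure is the same.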

  16. Generalised extreme value distributions provide a natural hypothesis for the shape of seed mass distributions.

    Directory of Open Access Journals (Sweden)

    Will Edwards

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and are generally positively skewed. Such data are usually log-transformed "for normality", but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies, events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well as or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise the disturbance events experienced in a location to which individual species' life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm.
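
    The GEV family has a closed-form CDF governed by a single shape parameter; a small pure-Python sketch (evaluation only, no fitting) covering the Gumbel, Fréchet and reversed-Weibull cases:

```python
# GEV cumulative distribution: F(x) = exp(-t(x)) where
#   t(x) = (1 + xi*(x - mu)/sigma)^(-1/xi)   for xi != 0,
#   t(x) = exp(-(x - mu)/sigma)              in the Gumbel limit xi -> 0.
# xi > 0 gives the Frechet case, xi < 0 the reversed-Weibull case.
import math

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """Evaluate the GEV CDF at x."""
    z = (x - mu) / sigma
    if abs(xi) < 1e-12:                 # Gumbel limit
        t = math.exp(-z)
    else:
        base = 1.0 + xi * z
        if base <= 0:                   # x lies outside the support
            return 0.0 if xi > 0 else 1.0
        t = base ** (-1.0 / xi)
    return math.exp(-t)

# At x = mu the Gumbel case gives F = exp(-1), regardless of sigma.
print(round(gev_cdf(0.0), 4))
```

    Fitting the three parameters to a trait dataset is usually done by maximum likelihood; the sketch above only shows the distribution being hypothesised.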

  17. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. By contrast, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
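
    The correlation analysis described above can be mimicked in one dimension: low-pass filter a signal to obtain the "large scales", treat the residual as the "small scales", and correlate the local small-scale r.m.s. with the large-scale gradient (synthetic data, illustrative only; the paper works with 3-D DNS fields and vorticity):

```python
# 1-D cartoon of large-scale / small-scale correlation analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = np.linspace(0, 40 * np.pi, n)
large = np.sin(x / 10)
# Small-scale fluctuations whose amplitude is modulated in step with the
# large-scale gradient (cos term), mimicking the coupling described above.
modulation = 1.0 + 0.8 * np.cos(x / 10)
signal = large + modulation * rng.standard_normal(n) * 0.1

def moving_average(y, w):
    """Simple box filter used as the low-pass ("large-scale") filter."""
    return np.convolve(y, np.ones(w) / w, mode="same")

large_hat = moving_average(signal, 201)          # large scales
residual = signal - large_hat                    # small scales
activity = moving_average(residual**2, 201) ** 0.5   # local small-scale r.m.s.
gradient = np.gradient(large_hat, x)             # large-scale gradient

# Trim the edges where the box filter is biased, then correlate.
r = np.corrcoef(activity[300:-300], gradient[300:-300])[0, 1]
print(f"correlation(small-scale r.m.s., large-scale gradient) = {r:.2f}")
```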

  18. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  19. Macro-economic benefit analysis of large scale building energy efficiency programs in Qatar

    Directory of Open Access Journals (Sweden)

    Moncef Krarti

    2017-12-01

    This paper evaluates the economic, environmental, and social benefits of large-scale energy efficiency programs for new and existing buildings in Qatar. Using data obtained from detailed energy audits, several proven energy efficiency measures have been analyzed through optimization-based analysis to assess their impact on the energy performance of both new and existing buildings in Qatar. Moreover, a bottom-up analysis approach is used to quantify the multiple benefits of implementing large-scale building energy efficiency programs for the building stock in Qatar. In particular, a more stringent energy efficiency code for new constructions and three energy retrofit levels for existing buildings are considered in the analysis. A novel macro-economic analysis using the concept of energy productivity is used to assess the cost-benefit of large-scale energy efficiency programs in Qatar. It is determined that the implementation of a government-funded large-scale energy retrofit program for the existing building stock is highly cost-effective in Qatar. In particular, it is found that a large-scale energy efficiency retrofit program for existing buildings can provide a reduction of 11,000 GWh in annual electricity consumption and 2500 MW in peak demand, as well as over 5400 kilo-tons per year in carbon emissions. In addition, over 4000 jobs per year can be created when this large-scale energy retrofit program is implemented over a 10-year period.

  20. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    Science.gov (United States)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  1. Large-scale innovation and change in UK higher education

    National Research Council Canada - National Science Library

    Brown, Stephen

    2013-01-01

    .... The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion...

  2. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of the large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  3. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  4. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    National Research Council Canada - National Science Library

    Yang Xu; Pengfei Liu; Xiang Li

    2014-01-01

      Large-scale multiagent teamwork has been popular in various domains. Similar to human society infrastructure, agents only coordinate with some of the others, with a peer-to-peer complex network structure...

  5. Large-Scale Troughs on 4 Vesta: Observations and Analysis

    Science.gov (United States)

    Buczkowski, D. L.; Kahn, E.; Barnouin, O.; Wyrick, D. Y.; Gaskell, R. W.; Yingst, R. A.; Williams, D. A.; Garry, W. B.; Le Corre, L.; Nathues, A.; Scully, J. E. C.; Blewett, D.; Hiesinger, H.; Mest, S.; Schenk, P. M.; Schmedemann, N.; Krohn, K.; Jaumann, R.; Raymond, C. A.; Pieters, C. M.; Roatsch, T.; Preusker, F.; Russell, C. T.

    2012-05-01

    Images of Vesta taken by the Dawn Framing Camera reveal the presence of large-scale structural features on the surface of the asteroid. Analysis of these structures supports models for the formation of the south polar basins.

  6. Explosive Concrete Spalling during Large-Scale Fire Resistance Tests

    OpenAIRE

    Maluk, Cristian; Bisby, Luke; Terrasi, Giovanni P.

    2016-01-01

    This paper presents a comprehensive investigation of explosive heat-induced spalling observed during a set of large-scale fire resistance tests (or standard furnace tests) on prestressed concrete slabs. The study, based on data from large-scale tests, examines the influence of numerous design parameters on the occurrence of spalling (age of concrete, inclusion of polypropylene fibres, depth of the slab, and prestressing level). Furthermore, a careful thermal analysis of the tested slabs is pr...

  7. Aggregation algorithm towards large-scale Boolean network analysis

    OpenAIRE

    Zhao, Y.; Kim, J.; Filippone, M.

    2013-01-01

    The analysis of large-scale Boolean network dynamics is of great importance in understanding complex phenomena where systems are characterized by a large number of components. The computational cost to reveal the number of attractors and the period of each attractor increases exponentially as the number of nodes in the networks increases. This paper presents an efficient algorithm to find attractors for medium to large-scale networks. This is achieved by analyzing subnetworks within the netwo...
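
    The exponential cost noted in this abstract is easy to see in a brute-force sketch. The following toy 3-node network (update rules invented for illustration, not taken from the paper, and not its aggregation algorithm) follows every one of the 2^n states to its cycle:

```python
from itertools import product

# Toy 3-node Boolean network (hypothetical update rules):
# x0' = x1 AND x2,  x1' = NOT x0,  x2' = x0 OR x1.
def step(state):
    x0, x1, x2 = state
    return (x1 and x2, not x0, x0 or x1)

def find_attractors(n_nodes, step_fn):
    """Exhaustively follow every state to its cycle -- O(2^n) states,
    which is why aggregation methods matter for large networks."""
    attractors = set()
    for state in product((False, True), repeat=n_nodes):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = step_fn(state)
        # states from the first revisited state onward form the cycle
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        attractors.add(cycle)
    return attractors

attractors = find_attractors(3, step)
for a in attractors:
    print("attractor of period", len(a))
```

For this particular rule set every initial state funnels into a single period-5 attractor; subnetwork aggregation, as proposed in the paper, avoids enumerating all 2^n states.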

  8. Boundary-Layer Bypass Transition Over Large-Scale Bodies

    Science.gov (United States)

    2016-12-16

    AFRL-AFOSR-UK-TR-2017-0007: Boundary-layer bypass transition over large-scale bodies. Pierre Ricco, University of Sheffield. Final report, covering 01 Sep 2013 to 31 Aug 2016. The abstract notes differences in the shape of the streamwise velocity profile compared to the flat-plate boundary layer, and that the streamwise wavenumber plays a key role.

  9. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from intermittent wind in large proportion may impact the control of power system balance, causing deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  10. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
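
    The client-side model-view-controller split and the encapsulated data source described above can be sketched language-agnostically (shown here in Python for brevity; the class and method names and the sample data are illustrative, not Experiment Dashboard code). Porting to a new data source means replacing only the `DataSource` object:

```python
class DataSource:
    """Stand-in for a web API returning monitoring data (hypothetical)."""
    def fetch_jobs(self):
        return [{"site": "CERN", "status": "ok"}, {"site": "FNAL", "status": "fail"}]

class Model:
    """Holds application state and notifies subscribed views on change."""
    def __init__(self, source):
        self.source, self.jobs, self.observers = source, [], []
    def subscribe(self, view):
        self.observers.append(view)
    def refresh(self):
        self.jobs = self.source.fetch_jobs()
        for view in self.observers:
            view.render(self.jobs)

class TableView:
    """Renders the model; knows nothing about where the data came from."""
    def __init__(self):
        self.last_rendered = None
    def render(self, jobs):
        self.last_rendered = [f'{j["site"]}: {j["status"]}' for j in jobs]
        print("\n".join(self.last_rendered))

class Controller:
    """Mediates user actions between model and view."""
    def __init__(self, model, view):
        self.model = model
        model.subscribe(view)
    def on_user_refresh(self):
        self.model.refresh()

view = TableView()
controller = Controller(Model(DataSource()), view)
controller.on_user_refresh()
```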

  11. TRMM Latent Heating Retrieval and Comparisons with Field Campaigns and Large-Scale Analyses

    Science.gov (United States)

    Tao, Wei-Kuo; Takayabu, Yukuri; Lang, S.; Shige, S.; Olson, W.; Hou, A.; Jiang, X.; Zhang, C.; Lau, W.; Krishnamurti, T.; hide

    2012-01-01

    Rainfall production is a fundamental process within the Earth's hydrological cycle because it represents both a principal forcing term in surface water budgets, and its energetics corollary, latent heating (LH), is one of the principal sources of atmospheric diabatic heating. Latent heat release itself is a consequence of phase changes between the vapor, liquid, and frozen states of water. The vertical distribution of LH has a strong influence on the atmosphere, controlling large-scale tropical circulations, exciting and modulating tropical waves, maintaining the intensities of tropical cyclones, and even providing the energetics of midlatitude cyclones and other mobile midlatitude weather systems. Moreover, the processes associated with LH result in significant non-linear changes in atmospheric radiation through the creation, dissipation and modulation of clouds and precipitation. Yanai et al. (1973) utilized the meteorological data collected from a sounding network to present a pioneering work on thermodynamic budgets, which are referred to as the apparent heat source (Q1) and apparent moisture sink (Q2). Yanai's paper motivated the development of satellite-based LH algorithms and provided a theoretical background for imposing large-scale advective forcing into cloud-resolving models (CRMs). These CRM-simulated LH and Q1 data have been used to generate the look-up tables used in LH algorithms. This paper examines the retrieval, validation, and application of LH estimates based on rain rate quantities acquired from the Tropical Rainfall Measuring Mission satellite (TRMM). TRMM was launched in November 1997 as a joint enterprise between the American and Japanese space agencies -- with overriding goals of providing accurate four-dimensional estimates of rainfall and LH over the global Tropics and subtropics equatorward of 35°. Other literature has acknowledged the achievement of the first goal of obtaining an accurate rainfall climatology. This paper describes the
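
    The apparent heat source Q1 and apparent moisture sink Q2 named in this abstract are conventionally written as residuals of the large-scale thermodynamic budgets (standard forms following Yanai et al. 1973, supplied here from common usage rather than quoted from this record):

```latex
Q_1 \equiv \frac{\partial \bar{s}}{\partial t}
         + \bar{\mathbf{v}} \cdot \nabla \bar{s}
         + \bar{\omega} \frac{\partial \bar{s}}{\partial p},
\qquad
Q_2 \equiv -L \left( \frac{\partial \bar{q}}{\partial t}
         + \bar{\mathbf{v}} \cdot \nabla \bar{q}
         + \bar{\omega} \frac{\partial \bar{q}}{\partial p} \right)
```

    where \(s = c_p T + gz\) is the dry static energy, \(q\) the water-vapour mixing ratio, \(\omega\) the pressure velocity, \(L\) the latent heat of vaporization, and overbars denote averages over the sounding network.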

  12. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    Science.gov (United States)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by nonzero skewness and kurtosis. It is shown that the model is applicable over horizontal inhomogeneity dimensions ranging from hundreds to thousands of kilometers.

  13. On the Estimation of Hierarchical Latent Regression Models for Large-Scale Assessments

    Science.gov (United States)

    Li, Deping; Oranje, Andreas; Jiang, Yanlin

    2009-01-01

    To find population proficiency distributions, a two-level hierarchical linear model may be applied to large-scale survey assessments such as the National Assessment of Educational Progress (NAEP). The model and parameter estimation are developed and a simulation was carried out to evaluate parameter recovery. Subsequently, both a hierarchical and…

  14. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  15. Topological analysis of large-scale biomedical terminology structures.

    Science.gov (United States)

    Bales, Michael E; Lussier, Yves A; Johnson, Stephen B

    2007-01-01

    To characterize global structural features of large-scale biomedical terminologies using currently emerging statistical approaches. Given the rapid growth of terminologies, this research was designed to address scalability. We selected 16 terminologies covering a variety of domains from the UMLS Metathesaurus, a collection of terminological systems. Each was modeled as a network in which nodes were atomic concepts and links were relationships asserted by the source vocabulary. For comparison against each terminology we created three random networks of equivalent size and density. The measures examined were average node degree, node degree distribution, clustering coefficient, and average path length. Eight of 16 terminologies exhibited the small-world characteristics of a short average path length and strong local clustering. An overlapping subset of nine exhibited a power-law distribution in node degrees, indicative of a scale-free architecture. We attribute these features to specific design constraints. Constraints on node connectivity, common in more synthetic classification systems, localize the effects of changes and deletions. In contrast, small-world and scale-free features, common in comprehensive medical terminologies, promote flexible navigation and less restrictive, organic-like growth. While thought of as synthetic, grid-like structures, some controlled terminologies are structurally indistinguishable from natural language networks. This paradoxical result suggests that terminology structure is shaped not only by formal logic-based semantics, but by rules analogous to those that govern social networks and biological systems. Graph-theoretic modeling shows early promise as a framework for describing terminology structure. Deeper understanding of these techniques may inform the development of scalable terminologies and ontologies.
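
    The two small-world measures used in this study, clustering coefficient and average path length, can be computed directly from an adjacency structure. A minimal sketch on a toy graph of my own construction (a 6-node ring with chords, not one of the 16 terminologies):

```python
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbour pairs
    that are themselves connected."""
    total, counted = 0.0, 0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
        counted += 1
    return total / counted

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    dists = []
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        dists.extend(d for n, d in dist.items() if n != src)
    return sum(dists) / len(dists)

# Ring of 6 nodes plus chords: strong local clustering AND short paths,
# the small-world signature discussed in the abstract.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
         (0, 2), (1, 3), (2, 4), (3, 5), (4, 0), (5, 1)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print("C =", round(clustering_coefficient(adj), 3))
print("L =", round(average_path_length(adj), 3))
```

The study's comparison step would then run the same two functions on random graphs of equal size and density.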

  16. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  17. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/Hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and push memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
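
    For reference, the level-2 kernels named here compute y := αAx + βy, with the symmetric variant reading only one triangle of A. A plain NumPy sketch of the semantics (numerically equivalent, but carrying none of KBLAS's CUDA optimizations):

```python
import numpy as np

def gemv(alpha, A, x, beta, y):
    """Reference general matrix-vector multiply: y := alpha*A@x + beta*y."""
    return alpha * (A @ x) + beta * y

def symv(alpha, A_lower, x, beta, y):
    """Symmetric variant: only the lower triangle of A is read,
    as in BLAS SYMV; the upper part is implied by symmetry."""
    A = np.tril(A_lower) + np.tril(A_lower, -1).T
    return alpha * (A @ x) + beta * y

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y = np.zeros(4)
# symv must agree with gemv applied to the symmetrized matrix
full = np.tril(A) + np.tril(A, -1).T
assert np.allclose(symv(1.0, A, x, 0.0, y), full @ x)
print("GEMV:", gemv(2.0, A, x, 0.0, y))
```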

  18. Primordial Non-Gaussianity in the Large-Scale Structure of the Universe

    Directory of Open Access Journals (Sweden)

    Vincent Desjacques

    2010-01-01

    generated the cosmological fluctuations observed today. Any detection of significant non-Gaussianity would thus have profound implications for our understanding of cosmic structure formation. The large-scale mass distribution in the Universe is a sensitive probe of the nature of initial conditions. Recent theoretical progress together with rapid developments in observational techniques will enable us to critically confront predictions of inflationary scenarios and set constraints as competitive as those from the Cosmic Microwave Background. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large-scale structure of the Universe.

  19. Food security through large scale investments in agriculture

    Science.gov (United States)

    Rulli, M.; D'Odorico, P.

    2013-12-01

    Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing the use of more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large scale land acquisitions. We

  20. Planck 2013 results. XVII. Gravitational lensing by large-scale structure

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Déchelette, T.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Ho, S.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lavabre, A.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Pullen, A. R.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Smith, K.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    On the arcminute angular scales probed by Planck, the cosmic microwave background (CMB) anisotropies are gently perturbed by gravitational lensing. Here we present a detailed study of this effect, detecting lensing independently in the 100, 143, and 217 GHz frequency bands with an overall significance of greater than 25σ. We use the temperature-gradient correlations induced by lensing to reconstruct a (noisy) map of the CMB lensing potential, which provides an integrated measure of the mass distribution back to the CMB last-scattering surface. Our lensing potential map is significantly correlated with other tracers of mass, a fact which we demonstrate using several representative tracers of large-scale structure. We estimate the power spectrum of the lensing potential, finding generally good agreement with expectations from the best-fitting ΛCDM model for the Planck temperature power spectrum, showing that this measurement at z = 1100 correctly predicts the properties of the lower-redshift, later-time structures which source the lensing potential. When combined with the temperature power spectrum, our measurement provides degeneracy-breaking power for parameter constraints; it improves CMB-alone constraints on curvature by a factor of two and also partly breaks the degeneracy between the amplitude of the primordial perturbation power spectrum and the optical depth to reionization, allowing a measurement of the optical depth to reionization which is independent of large-scale polarization data. Discarding scale information, our measurement corresponds to a 4% constraint on the amplitude of the lensing potential power spectrum, or a 2% constraint on the root-mean-squared amplitude of matter fluctuations at z ~ 2.

  1. Large scale identification and categorization of protein sequences using structured logistic regression.

    Directory of Open Access Journals (Sweden)

    Bjørn P Pedersen

    Full Text Available BACKGROUND: Structured Logistic Regression (SLR) is a newly developed machine learning tool first proposed in the context of text categorization. Current availability of extensive protein sequence databases calls for an automated method to reliably classify sequences, and SLR seems well-suited for this task. The classification of P-type ATPases, a large family of ATP-driven membrane pumps transporting essential cations, was selected as a test case that would generate important biological information as well as provide a proof of concept for the application of SLR to a large-scale bioinformatics problem. RESULTS: Using SLR, we have built classifiers to identify and automatically categorize P-type ATPases into one of 11 pre-defined classes. The SLR classifiers are compared to a Hidden Markov Model approach and shown to be highly accurate and scalable. Representing the bulk of currently known sequences, we analysed 9.3 million sequences in the UniProtKB and attempted to classify a large number of P-type ATPases. To examine the distribution of pumps across organisms, we also applied SLR to 1,123 complete genomes from the Entrez genome database. Finally, we analysed the predicted membrane topology of the identified P-type ATPases. CONCLUSIONS: Using the SLR-based classification tool we are able to run a large-scale study of P-type ATPases. This study provides proof of concept for the application of SLR to a bioinformatics problem, and the analysis of P-type ATPases pinpoints new and interesting targets for further biochemical characterization and structural analysis.

  2. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    Science.gov (United States)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot's severe attack on southern Taiwan awakened public awareness of large-scale landslide disasters, which produce large quantities of sediment with negative effects on the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation and disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video will not only help people make appropriate decisions, but also add value to the data of greatest concern. This study defined basic data formats and standards for the various types of data collected about these reservoirs, and then provided a management platform based on those formats and standards. To ensure practicality and convenience, the large-scale landslide disaster database system can both provide and receive information, so users can work with it on different types of devices. Because IT technology progresses extremely quickly, even the most modern system may become outdated at any time; to provide long-term service, the system therefore reserves the possibility of user-defined data formats/standards and a user-defined system structure. The system established by this study is based on the HTML5 standard language and uses responsive web design technology, making the large-scale landslide disaster database system easy for users to operate and extend.

  3. The large-scale properties of simulated cosmological magnetic fields

    Science.gov (United States)

    Marinacci, Federico; Vogelsberger, Mark; Mocz, Philip; Pakmor, Rüdiger

    2015-11-01

    We perform uniformly sampled large-scale cosmological simulations including magnetic fields with the moving mesh code AREPO. We run two sets of MHD simulations: one including adiabatic gas physics only; the other featuring the fiducial feedback model of the Illustris simulation. In the adiabatic case, the magnetic field amplification follows the B ∝ ρ2/3 scaling derived from `flux-freezing' arguments, with the seed field strength providing an overall normalization factor. At high baryon overdensities the amplification is enhanced by shear flows and turbulence. Feedback physics and the inclusion of radiative cooling change this picture dramatically. In haloes, gas collapses to much larger densities and the magnetic field is amplified strongly and to the same maximum intensity irrespective of the initial seed field of which any memory is lost. At lower densities a dependence on the seed field strength and orientation, which in principle can be used to constrain models of cosmic magnetogenesis, is still present. Inside the most massive haloes magnetic fields reach values of ˜ 10-100 μG, in agreement with galaxy cluster observations. The topology of the field is tangled and gives rise to rotation measure signals in reasonable agreement with the observations. However, the rotation measure signal declines too rapidly towards larger radii as compared to observational data.
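
    The flux-freezing scaling B ∝ ρ^(2/3) quoted in this abstract, with the seed strength as an overall normalization, is simple to apply; a one-liner sketch with illustrative numbers of my own choosing (not values from the paper):

```python
# Flux-freezing amplification: B = B0 * (rho/rho0)^(2/3), so the seed
# field B0 only sets the overall normalization.
def amplified_field(B0_gauss, overdensity):
    return B0_gauss * overdensity ** (2.0 / 3.0)

# e.g. a hypothetical 1e-14 G seed compressed to 1000x the mean density
# gains a factor 1000^(2/3) = 100:
print(amplified_field(1e-14, 1000.0), "G")
```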

  4. Large scale multiplex PCR improves pathogen detection by DNA microarrays

    Directory of Open Access Journals (Sweden)

    Krönke Martin

    2009-01-01

    Full Text Available Background: Medium-density DNA microchips that carry a collection of probes for a broad spectrum of pathogens have the potential to be powerful tools for simultaneous species identification, detection of virulence factors, and antimicrobial resistance determinants. However, their widespread use in microbiological diagnostics is limited by the problem of low pathogen numbers in clinical specimens, which yield relatively low amounts of pathogen DNA. Results: To increase the detection power of a fluorescence-based prototype microarray designed to identify pathogenic microorganisms involved in sepsis, we propose a large-scale multiplex PCR (LSplex PCR) for amplification of several dozen gene segments from 9 pathogenic species. This protocol employs a large set of primer pairs, potentially able to amplify 800 different gene segments that correspond to the capture probes spotted on the microarray. The LSplex protocol is shown to selectively amplify only the gene segments corresponding to the specific pathogen present in the analyte. Application of LSplex increases the microarray detection of target templates by a factor of 100 to 1000. Conclusion: Our data provide a proof of principle for the improvement of detection of pathogen DNA by microarray hybridization using LSplex PCR.

  5. Countercurrent tangential chromatography for large-scale protein purification.

    Science.gov (United States)

    Shinkazh, Oleg; Kanani, Dharmesh; Barth, Morgan; Long, Matthew; Hussain, Daniar; Zydney, Andrew L

    2011-03-01

    Recent advances in cell culture technology have created significant pressure on the downstream purification process, leading to a "downstream bottleneck" in the production of recombinant therapeutic proteins for the treatment of cancer, genetic disorders, and cardiovascular disease. Countercurrent tangential chromatography overcomes many of the limitations of conventional column chromatography by having the resin (in the form of a slurry) flow through a series of static mixers and hollow fiber membrane modules. The buffers used in the binding, washing, and elution steps flow countercurrent to the resin, enabling high-resolution separations while reducing the amount of buffer needed for protein purification. The results obtained in this study provide the first experimental demonstration of the feasibility of using countercurrent tangential chromatography for the separation of a model protein mixture containing bovine serum albumin and myoglobin using a commercially available anion exchange resin. Batch uptake/desorption experiments were used in combination with critical flux data for the hollow fiber filters to design the countercurrent tangential chromatography system. A two-stage batch separation yielded the purified target protein at >99% purity with 94% recovery. The results clearly demonstrate the potential of using countercurrent tangential chromatography for the large-scale purification of therapeutic proteins. Copyright © 2010 Wiley Periodicals, Inc.

  6. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  7. Weighted social networks for a large scale artificial society

    Science.gov (United States)

    Fan, Zong Chen; Duan, Wei; Zhang, Peng; Qiu, Xiao Gang

    2016-12-01

    The method of artificial society has provided a powerful way to study and explain how individual behaviors at the micro level give rise to the emergence of global social phenomena. It also creates the need for an appropriate representation of social structure, which usually has a significant influence on human behaviors. It has been widely acknowledged that social networks are the main paradigm to describe social structure and reflect social relationships within a population. To generate social networks for a population of interest, considering physical distance and social distance among people, we propose a generation model of social networks for a large-scale artificial society based on human choice behavior theory under the principle of random utility maximization. As a premise, we first build an artificial society by constructing a synthetic population with a series of attributes in line with the statistical (census) data for Beijing. Then the generation model is applied to assign social relationships to each individual in the synthetic population. Compared with previous empirical findings, the results show that our model can reproduce the general characteristics of social networks, such as a high clustering coefficient, significant community structure and the small-world property. Our model can also be extended to serve as the initial input of a larger social micro-simulation. It will facilitate research into and prediction of social phenomena and issues such as epidemic transmission and rumor spreading.
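
    Random utility maximization with Gumbel-distributed noise reduces to the multinomial logit choice rule, P(i) = exp(U_i) / Σ_j exp(U_j). A minimal sketch of tie formation under that rule (the utility function and its coefficients are invented for illustration, not the paper's calibrated model):

```python
import math
import random

def logit_choice(utilities, rng):
    """Pick alternative i with probability exp(U_i) / sum_j exp(U_j),
    the multinomial logit implied by random utility maximization."""
    weights = [math.exp(u) for u in utilities]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

# Hypothetical tie utility: people closer in physical and social
# distance are preferred; coefficients are made up.
def tie_utility(phys_dist, social_dist):
    return -0.5 * phys_dist - 1.0 * social_dist

rng = random.Random(42)
candidates = [(0.2, 0.1), (2.0, 0.5), (5.0, 3.0)]  # (phys, social) distances
utilities = [tie_utility(p, s) for p, s in candidates]
picks = [logit_choice(utilities, rng) for _ in range(1000)]
print("choice shares:", [picks.count(i) / 1000 for i in range(3)])
```

Nearby candidates dominate the choice shares, which is how distance shapes the resulting network structure.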

  8. Flat-Land Large-Scale Electricity Storage (FLES)

    Directory of Open Access Journals (Sweden)

    Schalij R.

    2012-10-01

    Growth of renewable sources requires a smarter electricity grid that integrates multiple solutions for large-scale storage. Pumped storage is still the most valid option, but the capacity of existing facilities is not sufficient to accommodate future renewable resources, and new locations for additional pumped-storage capacity are scarce. Mountainous areas are mostly remote and do not allow construction of large facilities for ecological reasons. In the Netherlands, underground solutions were studied for many years. The use of (former) coal mines was rejected after scientific research, but further research showed that solid rock formations below the (unstable) coal layers can be harnessed to excavate the lower water reservoir for pumped storage, making an innovative underground solution possible. A complete plan was developed, with a capacity of 1400 MW (8 GWh daily output) and a head of 1400 m. It is technically and economically feasible. Compared to conventional pumped storage it has significantly less impact on the environment: less vulnerable locations are eligible, and the surface reservoir (only one instead of two) is relatively small. It also offers a solution for other European countries, and the Dutch studies provide a valuable basis for new locations.

  9. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli-expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple-mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation of anti-HIV agents.

  10. Bayesian Inversion for Large Scale Antarctic Ice Sheet Flow

    KAUST Repository

    Ghattas, Omar

    2015-01-07

    The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain spatially-varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping incompressible shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters, but also the associated uncertainty. To overcome the prohibitive nature of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of observational data, they typically provide only sparse information on model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent, and demonstrate that the work required to solve the inverse problem, measured in number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions

  11. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large-scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  12. Large-scale, long-term silvicultural experiments in the United States: historical overview and contemporary examples.

    Science.gov (United States)

    R. S. Seymour; J. Guldin; D. Marshall; B. Palik

    2006-01-01

    This paper provides a synopsis of large-scale, long-term silviculture experiments in the United States. Large-scale in a silvicultural context means that experimental treatment units encompass entire stands (5 to 30 ha); long-term means that results are intended to be monitored over many cutting cycles or an entire rotation, typically for many decades. Such studies...

  13. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at a scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  14. Interpreting large-scale redshift-space distortion measurements

    Science.gov (United States)

    Samushia, L.; Percival, W. J.; Raccanelli, A.

    2012-03-01

    The simplest theory describing large-scale redshift-space distortions (RSD), based on linear theory and distant galaxies, depends on the growth of cosmological structure, suggesting that strong tests of general relativity can be constructed from galaxy surveys. As data sets become larger and the expected constraints more precise, the extent to which the RSD follow the simple theory needs to be assessed in order that we do not introduce systematic errors into the tests by introducing inaccurate simplifying assumptions. We study the impact of the sample geometry, non-linear processes and biases induced by our lack of understanding of the radial galaxy distribution on RSD measurements. Using Large Suite of Dark Matter Simulations of the Sloan Digital Sky Survey II (SDSS-II) luminous red galaxy data, these effects are shown to be important at the level of 20 per cent. Including them, we can accurately model the recovered clustering in these mock catalogues on scales 30-200 h-1 Mpc. Applying this analysis to robustly measure parameters describing the growth history of the Universe from the SDSS-II data gives f(z= 0.25)σ8(z= 0.25) = 0.3512 ± 0.0583 and f(z= 0.37)σ8(z= 0.37) = 0.4602 ± 0.0378 when no prior is imposed on the growth rate, and the background geometry is assumed to follow a Λ cold dark matter (ΛCDM) model with the Wilkinson Microwave Anisotropy Probe (WMAP)+Type Ia supernova priors. The standard WMAP constrained ΛCDM model with general relativity predicts f(z= 0.25)σ8(z= 0.25) = 0.4260 ± 0.0141 and f(z= 0.37)σ8(z= 0.37) = 0.4367 ± 0.0136, which is fully consistent with these measurements.
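As a quick arithmetic check of the abstract's "fully consistent" statement, the difference between each measurement and the ΛCDM prediction can be expressed in units of the combined 1-sigma uncertainty. This is a back-of-the-envelope comparison assuming independent Gaussian errors, not the paper's full covariance treatment:

```python
import math

def tension_sigma(m, dm, p, dp):
    """Difference between measurement (m +/- dm) and prediction (p +/- dp),
    in units of the combined 1-sigma uncertainty (errors added in quadrature)."""
    return abs(m - p) / math.hypot(dm, dp)

# Values quoted in the abstract for the z = 0.25 and z = 0.37 bins:
t1 = tension_sigma(0.3512, 0.0583, 0.4260, 0.0141)  # roughly 1.2 sigma
t2 = tension_sigma(0.4602, 0.0378, 0.4367, 0.0136)  # roughly 0.6 sigma
```

Both differences are comfortably within about 1.3 sigma, consistent with the abstract's conclusion.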

  15. Local Large-Scale Structure and the Assumption of Homogeneity

    Science.gov (United States)

    Keenan, Ryan C.; Barger, Amy J.; Cowie, Lennox L.

    2016-10-01

    Our recent estimates of galaxy counts and the luminosity density in the near-infrared (Keenan et al. 2010, 2012) indicated that the local universe may be under-dense on radial scales of several hundred megaparsecs. Such a large-scale local under-density could introduce significant biases into the measurement and interpretation of cosmological observables, such as the inferred effects of dark energy on the rate of expansion. In Keenan et al. (2013), we measured the K-band luminosity density as a function of distance from us to test for such a local under-density. We made this measurement over the redshift range 0.01 < z < 0.2. At z > 0.07, we measure an increasing luminosity density that by z ~ 0.1 rises to a value ~1.5 times higher than that measured locally. This implies that the stellar mass density follows a similar trend. Assuming that the underlying dark matter distribution is traced by this luminous matter, this suggests that the local mass density may be lower than the global mass density of the universe, at an amplitude and on a scale sufficient to introduce significant biases into the measurement of basic cosmological observables. At least one study has shown that an under-density of roughly this amplitude and scale could resolve the apparent tension between direct local measurements of the Hubble constant and those inferred by the Planck team. Other theoretical studies have concluded that such an under-density could account for what looks like an accelerating expansion, even when no dark energy is present.

  16. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
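The core idea of compressing with wavelet coefficients and then answering queries from the compressed form can be illustrated with a single-level Haar transform and coefficient thresholding. This is a minimal sketch, not the project's actual multiresolution code:

```python
def haar_1level(data):
    """One level of the Haar transform: pairwise averages + details (len even)."""
    avgs = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
    dets = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
    return avgs, dets

def compress(data, threshold):
    """Lossy compression: zero out detail coefficients below the threshold."""
    avgs, dets = haar_1level(data)
    return avgs, [d if abs(d) >= threshold else 0.0 for d in dets]

def approx_value(avgs, dets, i):
    """Answer a point query directly from the (thresholded) coefficients."""
    return avgs[i // 2] + (dets[i // 2] if i % 2 == 0 else -dets[i // 2])
```

With a threshold of 0, reconstruction is exact; raising the threshold trades accuracy for compression, which is the essence of the ad hoc querying trade-off described above.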

  17. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated into the power systems, it becomes important to study the effects of EV integration on the power systems, especially the low- and middle-voltage networks. In the paper, the basic structure and characteristics of electric vehicles are introduced. The possible impacts of large-scale integration of electric vehicles on the power systems, especially the advantages for the integration of renewable energies, are discussed. Finally, the research projects related to the large-scale integration of electric vehicles into the power systems are introduced; they will provide a reference for the large-scale integration of electric vehicles into power grids.

  18. Horvitz-Thompson survey sample methods for estimating large-scale animal abundance

    Science.gov (United States)

    Samuel, M.D.; Garton, E.O.

    1994-01-01

    Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully, and alternative methods for addressing these objectives should always be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey to survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey
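The Horvitz-Thompson idea referenced above weights each observed count by the inverse of its inclusion probability. A minimal sketch, where the combination of plot inclusion and animal detection probabilities is a simplifying assumption for illustration:

```python
def horvitz_thompson_total(counts, inclusion_probs, detection_probs):
    """Estimate a population total from survey counts.

    counts[i]          -- animals counted on sampled plot i
    inclusion_probs[i] -- probability that plot i entered the sample
    detection_probs[i] -- probability that an animal on plot i was detected

    Each count is inflated by 1 / (inclusion * detection), so units that
    were unlikely to be sampled or seen contribute proportionally more.
    """
    return sum(y / (p_incl * p_det)
               for y, p_incl, p_det in zip(counts, inclusion_probs, detection_probs))

# Example: 3 plots, each sampled with probability 0.1, detection probability 0.8
estimate = horvitz_thompson_total([12, 7, 20], [0.1, 0.1, 0.1], [0.8, 0.8, 0.8])
```

The heterogeneity problem discussed in the abstract corresponds to the detection probabilities varying across plots or surveys; if they are assumed constant when they are not, this estimator is biased.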

  19. Ground truth and mapping capability of urban areas in large scale using GE images

    Science.gov (United States)

    Ramzi, Ahmed I.

    2015-10-01

    Monitoring and mapping complex urban features (e.g. roads and buildings) from multispectral and hyperspectral remotely sensed data has gained enormous research interest. Accurate ground truth allows for high-quality assessment of classified images and verification of the produced map. Ground truth can be acquired in the field using a handheld Global Positioning System (GPS) device, or from high-resolution images extracted from Google Earth in addition to fieldwork. Ground truth or training samples can also be obtained from VHR satellite images such as QuickBird, Ikonos, GeoEye-1 and WorldView, but archived images are costly for researchers in developing countries. Images from Google Earth (GE) with high spatial resolution are freely available to the public and can be used directly for producing large-scale maps, for LULC mapping and for training samples. GE thus provides free access to high-resolution satellite imagery, but is the quality good enough to map urban areas? The main objective of this research is to explore the accuracy of large-scale maps produced from free Google Earth imagery and to collect ground truth or training samples over a limited geographical extent. The study area is Marsa Alam, a city on the western shore of the Red Sea, Red Sea Governorate, Egypt, located 274 km south of Hurghada. The proposed methodology involves image collection, taking into consideration that the resolution of the collected images depends on the viewing height; image rectification using suitable methods with different numbers and distributions of GCPs and CPs; and creation of a database and Geographic Information System (GIS) layers by on-screen vectorization according to the requirements of large-scale maps. Attribute data were collected in the field. The obtained results show that the planimetric accuracy of the map produced from Google Earth images met map

  20. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented, and the maximum velocities of the Earth's surface vibrations are determined. The seismic effect was evaluated for safety against the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  1. Enabling Large Scale Fine Resolution Flood Modeling Using SWAT and LISFLOOD-FP

    Science.gov (United States)

    Liu, Z.; Rajib, A.; Merwade, V.

    2016-12-01

    Due to computational burden, most large-scale hydrologic models are not created to generate streamflow hydrographs for lower-order ungauged streams. Similarly, most flood inundation mapping studies are performed at major stream reaches. As a result, it is not possible to get reliable flow estimates and flood extents for the vast majority of areas where no stream gauging stations are available. The objective of this study is to loosely couple the spatially distributed hydrologic model Soil and Water Assessment Tool (SWAT) with a 1D/2D hydrodynamic model, LISFLOOD-FP, for large-scale fine-resolution flood inundation modeling. The model setup is created for the 491,000 km2 drainage area of the Ohio River Basin in the United States. In the current framework, the SWAT model is calibrated with historical streamflow data over the past 80 years (1935-2014) to provide streamflow time-series for more than 100,000 NHDPlus stream reaches in the basin. The post-calibration evaluation shows that the simulated daily streamflow has a Nash-Sutcliffe Efficiency in the range of 0.4-0.7 against observed records across the basin. Streamflow outputs from the calibrated SWAT are subsequently used to drive LISFLOOD-FP and routed along the streams/floodplain using the built-in subgrid solver. LISFLOOD-FP is set up for the Ohio River Basin using a 90 m digital elevation model and is executed on high-performance computing resources at Purdue University. The flood extents produced by LISFLOOD-FP show good agreement with observed inundation. The current modeling framework lays the foundation for near real-time streamflow forecasting and the prediction of time-varying flood inundation maps along the NHDPlus network.
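The Nash-Sutcliffe Efficiency used to evaluate the calibration is a simple ratio of model error to observed variance; a minimal sketch:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1.0 is a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean; negative values are worse than the mean.
    """
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```

A basin-wide NSE of 0.4-0.7, as reported above, means the simulated daily hydrographs explain roughly half to two-thirds of the observed variance beyond the climatological mean.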

  2. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
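The spatiotemporal index over chunked data could look something like the following sketch; the schema, paths, and offsets are illustrative assumptions, not the NCCS implementation:

```python
import sqlite3

# A relational index mapping a spatiotemporal query to the file/offset where
# the matching NetCDF chunk lives in HDFS, so analytics can seek straight to it.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE chunk_index (
    t_start REAL, t_end REAL, lat_min REAL, lat_max REAL,
    lon_min REAL, lon_max REAL, hdfs_path TEXT, byte_offset INTEGER)""")
conn.execute("INSERT INTO chunk_index VALUES (0, 24, -10, 10, 100, 120, "
             "'/data/merra/a.nc4', 4096)")  # hypothetical chunk record
conn.commit()

def locate_chunks(t, lat, lon):
    """Return (path, offset) pairs whose bounding box covers the query point."""
    return conn.execute(
        """SELECT hdfs_path, byte_offset FROM chunk_index
           WHERE ? BETWEEN t_start AND t_end
             AND ? BETWEEN lat_min AND lat_max
             AND ? BETWEEN lon_min AND lon_max""", (t, lat, lon)).fetchall()
```

The speedup described in the abstract comes from answering this lookup in the database instead of scanning the binary files themselves.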

  3. Relay discovery and selection for large-scale P2P streaming.

    Directory of Open Access Journals (Sweden)

    Chengwei Zhang

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly, with trade-offs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location, and that methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate, using three publicly available RTT data sets, that there exists significant error amplification in the commonly used "best-out-of-K" selection methodology. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry the location information and are generated scalably using indirect measurements, such as the ICS coordinates. Relay discovery is achieved efficiently using DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs.
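The two-phase idea can be sketched as follows; the coordinate space stands in for ICS estimates and `probe_rtt` for direct probing, as generic placeholders rather than the paper's protocol:

```python
import math

def coord_distance(a, b):
    """Estimated proximity in a network coordinate space (indirect measurement)."""
    return math.dist(a, b)

def select_relay(peer_coord, candidates, probe_rtt, k=5):
    """Two-phase relay selection (illustrative sketch).

    Phase 1: rank candidates by coarse coordinate distance (cheap, inaccurate).
    Phase 2: directly probe only the top-k shortlist and pick the lowest RTT,
    avoiding the error amplification of choosing purely from estimates.
    `candidates` maps relay id -> coordinate; `probe_rtt(relay_id)` measures RTT.
    """
    shortlist = sorted(candidates,
                       key=lambda c: coord_distance(peer_coord, candidates[c]))[:k]
    return min(shortlist, key=probe_rtt)

# Hypothetical example: three relays, with measured RTTs in milliseconds
candidates = {"r1": (0.0, 0.0), "r2": (1.0, 1.0), "r3": (10.0, 10.0)}
rtts = {"r1": 30, "r2": 10, "r3": 5}
best = select_relay((0.0, 0.0), candidates, lambda r: rtts[r], k=2)
```

Here r3 has the best true RTT but is excluded by the coarse phase; with a larger `k` the direct probes would find it, which is exactly the overhead/accuracy trade-off the paper studies.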

  4. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo

    2014-04-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module, aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m2h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.

  5. Incorporating Endmember Variability into Linear Unmixing of Coarse Resolution Imagery: Mapping Large-Scale Impervious Surface Abundance Using a Hierarchically Object-Based Spectral Mixture Analysis

    OpenAIRE

    Chengbin Deng

    2015-01-01

    As an important indicator of anthropogenic impacts on the Earth’s surface, it is of great necessity to accurately map large-scale urbanized areas for various science and policy applications. Although spectral mixture analysis (SMA) can provide spatial distribution and quantitative fractions for better representations of urban areas, this technique is rarely explored with 1-km resolution imagery. This is due mainly to the absence of image endmembers associated with the mixed pixel problem. Con...

  6. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  7. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  8. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzed the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, it still has problems such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a sound evaluation system according to the characteristics of the medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  9. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    Science.gov (United States)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph implementations are widely used in various sectors such as automotive, traffic, image processing and many more. These produce graphs of large-scale dimension, so processing requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing using a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm with the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results showed that the implementation gives an average speed-up of more than 30 times and efficiency of almost 90%.
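
The speed-up and efficiency figures quoted above follow the standard definitions (speed-up = serial time / parallel time; efficiency = speed-up / number of processes). A minimal sketch with hypothetical timings:

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Speed-up normalized by the number of processes."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: a BFS taking 300 s serially and 10 s on 32 MPI ranks.
s = speedup(300.0, 10.0)
e = efficiency(300.0, 10.0, 32)
print(s, e)  # → 30.0 0.9375
```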

  10. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economic constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is investment decision, while the second phase is production...

  11. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  12. Large-Scale Science Observatories: Building on What We Have Learned from USArray

    Science.gov (United States)

    Woodward, R.; Busby, R.; Detrick, R. S.; Frassetto, A.

    2015-12-01

    With the NSF-sponsored EarthScope USArray observatory, the Earth science community has built the operational capability and experience to tackle scientific challenges at the largest scales, such as a Subduction Zone Observatory. In the first ten years of USArray, geophysical instruments were deployed across roughly 2% of the Earth's surface. The USArray operated a rolling deployment of seismic stations that occupied ~1,700 sites across the USA, made co-located atmospheric observations, occupied hundreds of sites with magnetotelluric sensors, expanded a backbone reference network of seismic stations, and provided instruments to PI-led teams that deployed thousands of additional seismic stations. USArray included a comprehensive outreach component that directly engaged hundreds of students at over 50 colleges and universities to locate station sites and provided Earth science exposure to roughly 1,000 landowners who hosted stations. The project also included a comprehensive data management capability that received, archived and distributed data, metadata, and data products; data were acquired and distributed in real time. The USArray project was completed on time and under budget and developed a number of best practices that can inform other large-scale science initiatives that the Earth science community is contemplating. Key strategies employed by USArray included: using a survey, rather than hypothesis-driven, mode of observation to generate comprehensive, high quality data on a large-scale for exploration and discovery; making data freely and openly available to any investigator from the very onset of the project; and using proven, commercial, off-the-shelf systems to ensure a fast start and avoid delays due to over-reliance on unproven technology or concepts. Scope was set ambitiously, but managed carefully to avoid overextending. Configuration was controlled to ensure efficient operations while providing consistent, uniform observations. Finally, community

  13. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Directory of Open Access Journals (Sweden)

    Touraj Varaee

    Full Text Available Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  14. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  15. Imprint of non-linear effects on HI intensity mapping on large scales

    Science.gov (United States)

    Umeh, Obinna

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. This large scale modulation may be understood as being due to the effective bias parameter and effective shot noise.

  16. Variability in large-scale wind power generation: Variability in large-scale wind power generation

    Energy Technology Data Exchange (ETDEWEB)

    Kiviluoma, Juha [VTT Technical Research Centre of Finland, Espoo Finland; Holttinen, Hannele [VTT Technical Research Centre of Finland, Espoo Finland; Weir, David [Energy Department, Norwegian Water Resources and Energy Directorate, Oslo Norway; Scharff, Richard [KTH Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Söder, Lennart [Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Menemenlis, Nickie [Institut de recherche Hydro-Québec, Montreal Canada; Cutululis, Nicolaos A. [DTU, Wind Energy, Roskilde Denmark; Danti Lopez, Irene [Electricity Research Centre, University College Dublin, Dublin Ireland; Lannoye, Eamonn [Electric Power Research Institute, Palo Alto California USA; Estanqueiro, Ana [LNEG, Laboratorio Nacional de Energia e Geologia, UESEO, Lisbon Spain; Gomez-Lazaro, Emilio [Renewable Energy Research Institute and DIEEAC/EDII-AB, Castilla-La Mancha University, Albacete Spain; Zhang, Qin [State Grid Corporation of China, Beijing China; Bai, Jianhua [State Grid Energy Research Institute Beijing, Beijing China; Wan, Yih-Huei [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA; Milligan, Michael [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include probability distribution for different ramp durations, seasonal and diurnal variability and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown that wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with large changes in wind output that were not present in larger areas with well-dispersed wind power.
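
The 1 h ramp statistics discussed above can be computed from an hourly generation time series as the first difference normalized by nominal capacity. A sketch on synthetic data (the series and the capacity value are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
capacity_mw = 1000.0
# Synthetic hourly generation series (MW): a clipped random walk standing
# in for one year of measured wind power output.
gen = np.clip(500.0 + np.cumsum(rng.normal(0.0, 15.0, 8760)), 0.0, capacity_mw)

# 1-hour ramps as a percentage of nominal capacity.
ramps_pct = 100.0 * np.diff(gen) / capacity_mw
print(f"max |1 h ramp|: {np.abs(ramps_pct).max():.1f}% of capacity")
print(f"99th percentile |ramp|: {np.percentile(np.abs(ramps_pct), 99):.1f}%")
```

On real data the same normalization makes ramp distributions comparable across regions with different installed capacities, which is how the low/medium/high variability grouping above is established.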

  17. A Reduced Basis Framework: Application to large scale non-linear multi-physics problems

    Directory of Open Access Journals (Sweden)

    Daversin C.

    2013-12-01

    Full Text Available In this paper we present applications of the reduced basis method (RBM to large-scale non-linear multi-physics problems. We first describe the mathematical framework in place and in particular the Empirical Interpolation Method (EIM to recover an affine decomposition and then we propose an implementation using the open-source library Feel++ which provides both the reduced basis and finite element layers. Large scale numerical examples are shown and are connected to real industrial applications arising from the High Field Resistive Magnets development at the Laboratoire National des Champs Magnétiques Intenses.

  18. Fermi surface map of large-scale single-orientation graphene on SiO2

    Science.gov (United States)

    Miniussi, E.; Bernard, C.; Cun, H. Y.; Probst, B.; Leuenberger, D.; Mette, G.; Zabka, W.-D.; Weinl, M.; Haluska, M.; Schreck, M.; Osterwalder, J.; Greber, T.

    2017-11-01

    Large scale tetraoctylammonium-assisted electrochemical transfer of graphene grown on single-crystalline Ir(1 1 1) films by chemical vapour deposition is reported. The transferred samples are characterized in air with optical microscopy, Raman spectroscopy and four-point transport measurements, providing the sheet resistance and the Hall carrier concentration. In vacuum we apply low energy electron diffraction and photoelectron spectroscopy, which indicate transferred large-scale single-orientation graphene. Angle-resolved photoemission reveals a Fermi surface and a Dirac point energy which are consistent with charge-neutral graphene.
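
The Hall carrier concentration mentioned above follows from the standard single-carrier relation n_s = B / (e * R_xy). The field and Hall resistance below are illustrative numbers, not values from the paper:

```python
# Sheet carrier concentration from a Hall measurement (single-carrier model).
# Illustrative inputs; not the paper's measured values.
e_charge = 1.602176634e-19   # elementary charge, C
B = 0.5                      # perpendicular magnetic field, T
R_xy = 300.0                 # Hall resistance, ohm

n_s = B / (e_charge * R_xy)  # carriers per m^2
print(f"n_s = {n_s:.2e} m^-2 = {n_s * 1e-4:.2e} cm^-2")
```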

  19. Tree Structure Sparsity Pattern Guided Convex Optimization for Compressive Sensing of Large-Scale Images.

    Science.gov (United States)

    Liang, Wei-Jie; Lin, Gang-Xuan; Lu, Chun-Shien

    2016-12-01

    Cost-efficient compressive sensing of large-scale images with quickly reconstructed high-quality results is very challenging. In this paper, we present an algorithm to solve the convex optimization via the tree-structure sparsity pattern, which can be run in the operator to reduce computation cost and maintain good quality, especially for large-scale images. We also provide a convergence analysis and a convergence rate analysis for the proposed method. The feasibility of our method is verified through simulations and comparison with state-of-the-art algorithms.
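
For context, the baseline convex problem in compressive sensing is l1-regularized least-squares recovery; a generic proximal-gradient (ISTA) sketch on synthetic data is shown below. This is not the paper's tree-structured algorithm, only the standard method such work builds on; the problem sizes and regularization weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Sparse recovery  min_x 0.5*||y - A x||^2 + lam*||x||_1  via ISTA.
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random sensing matrix
y = A @ x_true                                  # compressed measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L   # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
print(np.linalg.norm(A @ x - y))
```

Structured-sparsity methods like the paper's replace the elementwise soft threshold with a proximal step that respects the tree pattern of wavelet coefficients.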

  20. Semantic distributed resource discovery for multiple resource providers

    NARCIS (Netherlands)

    Pittaras, C.; Ghijsen, M.; Wibisono, A.; Grosso, P.; van der Ham, J.; de Laat, C.

    2012-01-01

    An emerging modus operandi among providers of cloud infrastructures is the one where they share and combine their heterogenous resources to offer end user services tailored to specific scientific and business needs. A challenge to overcome is the discovery of suitable resources among these multiple

  1. BFAST: an alignment tool for large scale genome resequencing.

    Directory of Open Access Journals (Sweden)

    Nils Homer

    2009-11-01

    Full Text Available The new generation of massively parallel DNA sequencers, combined with the challenge of whole human genome resequencing, results in the need for rapid and accurate alignment of billions of short DNA sequence reads to a large reference genome. Speed is obviously of great importance, but equally important is maintaining alignment accuracy of short reads, in the 25-100 base range, in the presence of errors and true biological variation. We introduce a new algorithm specifically optimized for this task, as well as a freely available implementation, BFAST, which can align data produced by any of the current sequencing platforms, allows for user-customizable levels of speed and accuracy, supports paired-end data, and provides for efficient parallel and multi-threaded computation on a computer cluster. The new method is based on creating flexible, efficient whole genome indexes to rapidly map reads to candidate alignment locations, with arbitrary multiple independent indexes allowed to achieve robustness against read errors and sequence variants. The final local alignment uses a Smith-Waterman method, with gaps to support the detection of small indels. We compare BFAST to a selection of large-scale alignment tools -- BLAT, MAQ, SHRiMP, and SOAP -- in terms of both speed and accuracy, using simulated and real-world datasets. We show BFAST can achieve substantially greater sensitivity of alignment in the context of errors and true variants, especially insertions and deletions, and minimize false mappings, while maintaining adequate speed compared to other current methods. We show BFAST can align the amount of data needed to fully resequence a human genome, one billion reads, with high sensitivity and accuracy, on a modest computer cluster in less than 24 hours. BFAST is available at http://bfast.sourceforge.net.
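
The final local-alignment step named above is the classic Smith-Waterman recurrence. A minimal scoring-only sketch follows (linear gap penalty and illustrative scoring parameters; this is the textbook algorithm, not BFAST's implementation):

```python
def smith_waterman(a: str, b: str, match: int = 2,
                   mismatch: int = -1, gap: int = -1) -> int:
    """Best local alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                    # local alignment floor
                          H[i - 1][j - 1] + s,  # match / mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "TTACGTTT"))  # → 8 (exact 4-base match)
```

Because scores are floored at zero, the alignment is free to start and end anywhere, which is what makes the method "local" and suitable for finding a read inside a long reference.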

  2. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  3. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  4. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The objective of this project is to test whether the Food and Agriculture Organization's Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security can help increase accountability for large-scale land acquisitions in Mali, Nigeria, Uganda, and South ...

  5. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    Science.gov (United States)

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  6. Interrogating Large-Scale Land Acquisitions and Their Implications ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    Ghana: two regions, the Eastern Region (banana plantation) and the Greater Accra Region (pineapple and mango plantations). Uganda: the research covered two sites, Amuru (sugar plantation) and Mubende District (coffee plantation).

  7. The Large Scale Structure: Polarization Aspects R. F. Pizzo

    Indian Academy of Sciences (India)

    e-mail: pizzo@astron.nl. Abstract. Polarized radio emission is detected at various scales in the ... are located at the nodes of the filamentary large-scale structure of the cosmic web and form by subsequent merging .... After calibration, we produced Q and U channel images for each dataset and we inspected them to remove ...

  8. Re-engineering non-flow large scale maintenance organisations

    NARCIS (Netherlands)

    de Waard, A.J.

    1999-01-01

    The research described in this dissertation deals with the organisational structure of non-flow Large Scale Maintenance Organisations (LSMO's). An LSMO is an enterprise of a considerable size (i.e. more than 150 employees) with as its primary tasks the maintenance, overhaul and modification of

  9. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  10. Large scale synthesis and characterization of Ni nanoparticles by ...

    Indian Academy of Sciences (India)

    Large scale synthesis and characterization of Ni nanoparticles by solution reduction method. Huazhi Wang, Xinli Kou, Jie Zhang and Jiangong Li. Bulletin of Materials Science, Volume 31, Issue 1, February 2008, pp 97-100.

  11. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

    We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations of about 10 m/s on the scale of a few days.

  12. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

    Abstract. We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations of about 10 m/s on the scale of ...

  13. Newton iterative methods for large scale nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
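
As a concrete instance of a Newton iterative (Newton-Krylov) method of the kind described, SciPy's `newton_krylov` solves large nonlinear systems by using a Krylov iterative linear solver for each Newton step. The toy two-point boundary-value problem below is illustrative, not from the report:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Discretized nonlinear BVP:  x'' - x**3 = 0,  x(0) = 0,  x(1) = 1,
# on an interior grid of n points (illustrative test problem).
n = 50
h = 1.0 / (n + 1)

def residual(x):
    r = np.empty_like(x)
    r[0] = (x[1] - 2.0 * x[0]) / h**2                # uses boundary x(0) = 0
    r[1:-1] = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / h**2
    r[-1] = (1.0 - 2.0 * x[-1] + x[-2]) / h**2       # uses boundary x(1) = 1
    return r - x**3

sol = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
print(np.abs(residual(sol)).max())
```

The solver never forms the Jacobian explicitly; it only needs residual evaluations, which is what makes Newton-Krylov methods practical for large discretized PDE systems.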

  14. A Large-Scale Earth and Ocean Phenomenon

    Indian Academy of Sciences (India)

    Tsunamis - A Large-Scale Earth and Ocean Phenomenon. Satish R Shetye. Resonance – Journal of Science Education, General Article, Volume 10, Issue 2, February 2005, pp 8-19.

  15. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  16. Women's Land Rights and Working Conditions in Large-scale ...

    African Journals Online (AJOL)

    3, 2016, pp. 49-69. © Council for the Development of Social Science Research in Africa, 2017 (ISSN: 0850-3907). Women's Land Rights and Working Conditions in Large-scale Plantations in Sub-Saharan Africa. Lotsmart Fonjong*. Abstract: Women's land rights are fundamental for women's economic empowerment.

  17. Diffuse infrared emission of the galaxy: Large scale properties

    Science.gov (United States)

    Perault, M.; Boulanger, F.; Falgarone, E.; Puget, J. L.

    1987-01-01

    The Infrared Astronomy Satellite (IRAS) survey is used to study large scale properties and the origin of the diffuse emission of the Galaxy. A careful subtraction of the zodiacal light enables longitude profiles of the galactic emission at 12, 25, 60, and 100 microns to be presented.

  18. Regeneration and propagation of reed grass for large-scale ...

    African Journals Online (AJOL)

    전서범

    2012-01-26

    Accepted 8 August, 2011. An in vitro culture system for the large-scale propagation of Phragmites ...

  19. Factors Influencing Uptake of a Large Scale Curriculum Innovation.

    Science.gov (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  20. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan|info:eu-repo/dai/nl/095130861

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes

  1. A variance-minimizing filter for large-scale applications

    NARCIS (Netherlands)

    Leeuwen, P.J. van

    A data-assimilation method is introduced for large-scale applications in the ocean and the atmosphere that does not rely on Gaussian assumptions, i.e., it is completely general, following Bayes' theorem. It is a so-called particle filter. A truly variance-minimizing filter is introduced and its
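
A minimal bootstrap particle filter illustrates the general idea (weight particles by the observation likelihood, then resample, with no Gaussian assumption imposed on the prior ensemble). The scalar random-walk model below is a made-up toy, far from the high-dimensional geophysical setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy bootstrap particle filter: scalar random-walk state observed
# with Gaussian noise.
n_steps, n_particles = 50, 500
obs_std, model_std = 0.5, 0.1
true_x = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for _ in range(n_steps):
    true_x += rng.normal(0.0, model_std)                  # truth evolves
    y = true_x + rng.normal(0.0, obs_std)                 # noisy observation
    particles += rng.normal(0.0, model_std, n_particles)  # propagate ensemble
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)   # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)  # resample
    estimates.append(particles.mean())
print(len(estimates))
```

The resampling step is what keeps the ensemble from degenerating; variance-minimizing variants like the one in the abstract refine how particles are drawn and weighted.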

  2. Large-Scale Assessments and Educational Policies in Italy

    Science.gov (United States)

    Damiani, Valeria

    2016-01-01

    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  3. Large-scale disasters: prediction, control, and mitigation

    National Research Council Canada - National Science Library

    Gad-El-Hak, M

    2008-01-01

    ... an integrated review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling, and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of re...

  4. Small and large scale genomic DNA isolation protocol for chickpea ...

    African Journals Online (AJOL)

    Both small and large scale preparations were essentially suitable for PCR and Southern blot hybridization analyses, which are the key steps in crop improvement programme through marker development and genetic engineering techniques. Key words: Cicer arietinum L., phenolics, restriction enzyme digestion, PCR ...

  5. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  6. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  7. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
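
The scheme can be sketched as a price-coordination loop: each unit solves a small local problem for a given price, and a coordinator adjusts the price until the aggregate meets the balancing target. The quadratic local costs and all numbers below are assumptions for illustration, not from the paper:

```python
import numpy as np

# Dual decomposition for power balancing: N units share a target D.
# Each unit i locally solves  min_u  c_i * u**2 - lam * u  (closed form),
# and the coordinator updates the price lam by subgradient ascent.
c = np.array([1.0, 2.0, 4.0])    # illustrative local cost coefficients
D = 10.0                         # total power to balance
lam, step = 0.0, 0.5
for _ in range(200):
    u = lam / (2.0 * c)          # local minimizers, computed independently
    lam += step * (D - u.sum())  # raise price if units under-deliver
print(u.sum(), lam)
```

Because each unit only sees the price `lam`, the subproblems can be solved in parallel by the units themselves, which is the point of decomposing the centralized problem.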

  8. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  9. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)

    tribpo

    J. Astrophys. Astr. (2000) 21, 161–162. The Large Scale Magnetic Field and Sunspot Cycles. V. I. Makarov* & A. G. Tlatov, Kislovodsk Solar Station of the Pulkovo Observatory, Kislovodsk 357700, P.O. Box 145, Russia. *e-mail: makarov@gao.spb.ru. Key words. Sun: magnetic field—sunspots—solar cycle. Extended abstract.

  10. The interaction of large scale and mesoscale environment leading to ...

    Indian Academy of Sciences (India)

    The interaction of large scale and mesoscale environment leading to formation of intense thunderstorms over Kolkata. Part I: Doppler radar and satellite observations. P Mukhopadhyay, M Mahakur, H A K Singh. Volume 118 Issue 5 October 2009 pp ...

  11. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen...

  12. The Development of Spatial Representations of Large-Scale Environments.

    Science.gov (United States)

    Herman, James F.; Siegel, Alexander W.

    This experiment investigated the effect of children's successive encounters with a large scale environment on their subsequent reconstructions of that environment. Twenty children (10 boys, 10 girls) at each of three grade levels (kindergarten, two, and five) reconstructed from memory the spatial layout of buildings in a large model town. All…

  13. Entrepreneurial Employee Activity: A Large Scale International Study

    NARCIS (Netherlands)

    Bosma, N.S.; Stam, E.; Wennekers, S.

    This paper presents the results of the first large scale international comparative study of entrepreneurial employee activity (intrapreneurship). Intrapreneurship is a more wide-spread phenomenon in high income countries than in low income countries. At the organizational level, intrapreneurs have

  14. Large-Scale Assessment and English Language Learners with Disabilities

    Science.gov (United States)

    Liu, Kristin K.; Ward, Jenna M.; Thurlow, Martha L.; Christensen, Laurene L.

    2017-01-01

    This article highlights a set of principles and guidelines, developed by a diverse group of specialists in the field, for appropriately including English language learners (ELLs) with disabilities in large-scale assessments. ELLs with disabilities make up roughly 9% of the rapidly increasing ELL population nationwide. In spite of the small overall…

  15. FMC cameras, high resolution films and very large scale mapping

    Science.gov (United States)

    Tachibana, Kikuo; Hasegawa, Hiroyuki

    1988-06-01

    Very large scale mapping (1:250) was tested on the basis of an FMC camera, high-resolution film and total station surveying. The attractive future combination of precision photogrammetry and personal-computer-assisted terrestrial surveying was investigated from the points of view of accuracy, time effectiveness and overall procedure control.

  16. Large-scale land acquisitions in Africa | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2016-08-12

    Early findings from research on large-scale land acquisitions in Africa point to five key cross-cutting issues: uneven community impacts; low levels of public awareness and participation; the need for fair compensation; unclear and insecure tenure rights; and erosion of women's rights. This brief highlights five ...

  17. Large-scale ensemble averaging of ambulatory impedance cardiograms

    NARCIS (Netherlands)

    Riese, H; Groot, PFC; Van Den Berg, M; Kupper, NHM; Magnee, EHB; Rohaan, EJ; Vrijkotte, TGM; Willemsen, G; De Geus, EJC

    Impedance cardiography has been used increasingly to measure human physiological responses to emotional and mentally engaging stimuli. The validity of large-scale ensemble averaging of ambulatory impedance cardiograms was evaluated for preejection period (PEP), interbeat interval, and dZ/dt(min)

  18. Large-scale organizational change: an executive's guide

    National Research Council Canada - National Science Library

    Laszlo, Christopher; Laugel, Jean-François

    2000-01-01

    ... for managers who sail into the teeth of complexity and paradox." -Perry Pascarella, former editor-in-chief, Industry Week "Large-Scale Organizational Change opens new and promising avenues for leaders of large organizations. It shows how ecological sustainability itself changes the relationship between business and the outside world....

  19. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  20. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  1. Kinematics of large scale asymmetric folds and associated smaller ...

    Indian Academy of Sciences (India)

    The first order structures in this belt are interpreted as large scale buckle folds above a subsurface decollement emphasizing the importance of detachment folding in thin skinned deformation of a sedimentary prism lying above a gneissic basement. That the folds have developed through fixed-hinge buckling is constrained ...

  2. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Introduction: In 1995 the Tanzanian Government reformed the mining industry and the new policy allowed an involvement of multinational companies, but the communities living near new large scale gold mines were expected to benefit from the industry in terms of socio-economic status, health, education, employment, safe drinking ...

  3. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    point-parity asymmetry. Here we find that a modified curvaton model can reproduce an asymmetric behavior of the power spectrum at low multipoles. The other approach is to uncover whether some of the large scale anomalies could have a common origin in residual contamination from the Galactic radio loops...

  4. Large-Scale Environmental Influences on Aquatic Animal Health

    Science.gov (United States)

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  5. Unified Access Architecture for Large-Scale Scientific Datasets

    Science.gov (United States)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists now deal with much larger volumes of data than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of the other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from other data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query to a database can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language and interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For the purpose of this research we attempt coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes and output
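
    The UDF pattern described here can be illustrated with a small, hedged stand-in: instead of rasdaman's UDF feature, the sketch below uses SQLite's user-defined functions (a real sqlite3 API) to show the general mechanism of a query body calling out to a side-loaded routine.

```python
# Illustration of the general UDF mechanism (SQLite standing in for rasdaman):
# a Python routine, acting as a placeholder for an external analysis tool,
# is registered as a user-defined function and invoked from inside the query.
import sqlite3, math

def foreign_routine(x):
    # placeholder for a side-loaded routine from another tool or language
    return math.sqrt(x)

con = sqlite3.connect(":memory:")
con.create_function("ext_sqrt", 1, foreign_routine)  # register the UDF
con.execute("CREATE TABLE measurements (v REAL)")
con.executemany("INSERT INTO measurements VALUES (?)", [(4.0,), (9.0,), (16.0,)])

# the query body calls back and forth into the registered routine
rows = con.execute("SELECT ext_sqrt(v) FROM measurements ORDER BY v").fetchall()
print([r[0] for r in rows])  # -> [2.0, 3.0, 4.0]
```

    In the array-database setting the same idea applies per array tile rather than per row, and the foreign routine would wrap calls into tools such as R or ScaLAPACK.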

  6. scMRI reveals large-scale brain network abnormalities in autism.

    Directory of Open Access Journals (Sweden)

    Brandon A Zielinski

    Autism is a complex neurological condition characterized by childhood onset of dysfunction in multiple cognitive domains including socio-emotional function, speech and language, and processing of internally versus externally directed stimuli. Although gross brain anatomic differences in autism are well established, recent studies investigating regional differences in brain structure and function have yielded divergent and seemingly contradictory results. How regional abnormalities relate to the autistic phenotype remains unclear. We hypothesized that autism exhibits distinct perturbations in network-level brain architecture, and that cognitive dysfunction may be reflected by abnormal network structure. Network-level anatomic abnormalities in autism have not been previously described. We used structural covariance MRI to investigate network-level differences in gray matter structure within two large-scale networks strongly implicated in autism, the salience network and the default mode network, in autistic subjects and age-, gender-, and IQ-matched controls. We report specific perturbations in brain network architecture in the salience and default-mode networks consistent with clinical manifestations of autism. Extent and distribution of the salience network, involved in social-emotional regulation of environmental stimuli, is restricted in autism. In contrast, posterior elements of the default mode network have increased spatial distribution, suggesting a 'posteriorization' of this network. These findings are consistent with a network-based model of autism, and suggest a unifying interpretation of previous work. Moreover, we provide evidence of specific abnormalities in brain network architecture underlying autism that are quantifiable using standard clinical MRI.

  7. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  8. Providing trust and interoperability to federate distributed biobanks.

    Science.gov (United States)

    Lablans, Martin; Bartholomäus, Sebastian; Uckert, Frank

    2011-01-01

    Biomedical research requires large numbers of well annotated, quality-assessed samples which often cannot be provided by a single biobank. Connecting biobanks, researchers and service providers raises numerous challenges including trust among partners and towards the infrastructure as well as interoperability problems. Therefore we develop a holistic, open-source and easy-to-use IT infrastructure. Our federated approach allows partners to reflect their organizational structures and protect their data sovereignty. The search service and the contact arrangement processes increase data sovereignty without stigmatizing for rejecting a specific cooperation. The infrastructure supports daily processes with an integrated basic sample manager and user-definable electronic case report forms. Interfaces for existing IT systems avoid re-entering of data. Moreover, resource virtualization is supported to make underutilized resources of some partners accessible to those with insufficient equipment for mutual benefit. The functionality of the resulting infrastructure is outlined in a use-case to demonstrate collaboration within a translational research network. Compared to other existing or upcoming infrastructures, our approach has ultimately the same goals, but relies on gentle incentives rather than top-down imposed progress.

  9. Development of Best Practices for Large-scale Data Management Infrastructure

    NARCIS (Netherlands)

    S. Stadtmüller; H.F. Mühleisen (Hannes); C. Bizer; M.L. Kersten (Martin); J.A. de Rijke (Arjen); F.E. Groffen (Fabian); Y. Zhang (Ying); G. Ladwig; A. Harth; M Trampus

    2012-01-01

    The amount of available data for processing is constantly increasing and becoming more diverse. We collect our experiences on deploying large-scale data management tools on local-area clusters or cloud infrastructures and provide guidance for using these computing and storage

  10. Risks of large-scale use of systemic insecticides to ecosystem functioning and services

    NARCIS (Netherlands)

    Chagnon, M.; Kreutzweiser, D.; Mitchell, E.A.D.; Morrissey, C.A.; Noome, D.A.; van der Sluijs, J.P.|info:eu-repo/dai/nl/073427489

    2015-01-01

    Large-scale use of the persistent and potent neonicotinoid and fipronil insecticides has raised concerns about risks to ecosystem functions provided by a wide range of species and environments affected by these insecticides. The concept of ecosystem services is widely used in decision making in the

  11. Disentangling the dynamic core: a research program for a neurodynamics at the large-scale.

    Science.gov (United States)

    Le Van Quyen, Michel

    2003-01-01

    My purpose in this paper is to sketch a research direction based on Francisco Varela's pioneering work in neurodynamics (see also Rudrauf et al. 2003, in this issue). Very early on he argued that the internal coherence of every mental-cognitive state lies in the global self-organization of the brain activities at the large-scale, constituting a fundamental pole of integration called here a "dynamic core". Recent neuroimaging evidence appears to broadly support this hypothesis and suggests that a global brain dynamics emerges at the large scale level from the cooperative interactions among widely distributed neuronal populations. Despite a growing body of evidence supporting this view, our understanding of these large-scale brain processes remains hampered by the lack of a theoretical language for expressing these complex behaviors in dynamical terms. In this paper, I propose a rough cartography of a comprehensive approach that offers a conceptual and mathematical framework to analyze spatio-temporal large-scale brain phenomena. I emphasize how these nonlinear methods can be applied, what property might be inferred from neuronal signals, and where one might productively proceed for the future. This paper is dedicated, with respect and affection, to the memory of Francisco Varela.

  12. Large-Scale Structure Effects on Particle Motion in a Spatially Developing Mixing Layer

    Science.gov (United States)

    Nikitopoulos, Dimitris E.; Acharya, Sumanta; Ning, Hui

    1996-11-01

    We will present results from a simulation of particle motion in a spatially developing turbulent mixing layer forced by a single frequency. The simulation is based on a model which utilizes an Eulerian formulation for the carrier-phase flowfield and a Lagrangian formulation for the dispersed particles. The carrier-phase mean flow is determined from the classical turbulent boundary layer equations and a traditional k-ɛ turbulence model modified to include interaction terms with the imposed large-scale structure. An integral energy method is used to determine the evolution of the large-scale structure, which is modeled as a spatially growing wave. The effect of the carrier-phase small-scale turbulent fluctuations is taken into account by using a classical stochastic model with a Gaussian PDF. The effects of the magnitude of the particle time scale relative to the large-scale structure time scale (Stokes number) are discussed as well as the effects of the particle injection conditions. It is found that the Stokes number for which the cross-stream dispersion of the particles is maximized has a strong dependence on the particle injection conditions. The particle injection phase relative to the large-scale structure also has a profound influence on the motion of the particles and their distribution within the mixing layer.
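
    As a concrete reference point for the Stokes-number discussion, the snippet below evaluates St = tau_p / tau_f with the standard Stokes-drag response time tau_p = rho_p * d_p^2 / (18 * mu); the particle and flow values are illustrative, not taken from the simulation.

```python
def particle_response_time(rho_p, d_p, mu):
    # Stokes-drag response time: tau_p = rho_p * d_p**2 / (18 * mu)
    return rho_p * d_p ** 2 / (18.0 * mu)

def stokes_number(rho_p, d_p, mu, tau_flow):
    # ratio of the particle time scale to the large-scale structure time scale
    return particle_response_time(rho_p, d_p, mu) / tau_flow

# illustrative values: 50-micron glass bead in air, 10 ms structure time scale
st = stokes_number(rho_p=2500.0, d_p=50e-6, mu=1.8e-5, tau_flow=0.01)
print(round(st, 2))  # -> 1.93
```

    St near unity means the particle partially follows the large-scale structures, which is the regime where cross-stream dispersion effects such as those studied here are strongest.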

  13. Causal inference between bioavailability of heavy metals and environmental factors in a large-scale region.

    Science.gov (United States)

    Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing

    2017-07-01

    Causation between the bioavailability of heavy metals and environmental factors is generally established from field experiments at local scales, and at present lacks sufficient evidence at large scales. Inferring causation between bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessments across large-scale regions can, at the expense of actual causation, result in spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors at field experiments is consistent with that on a large scale. The IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management.
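
    For readers unfamiliar with the backdoor criterion invoked here, the toy example below (made-up probabilities, binary variables) shows the adjustment formula P(y | do(x)) = sum_z P(y | x, z) P(z) and how it differs from the naive conditional when Z confounds X and Y.

```python
# Toy backdoor adjustment on an invented discrete model Z -> X, Z -> Y, X -> Y.
# Z satisfies the backdoor criterion for the effect of X on Y.

p_z1 = 0.4                             # P(Z=1)
p_x1_given_z = {0: 0.2, 1: 0.8}        # P(X=1 | Z=z)
p_y1_given_xz = {(1, 0): 0.5, (1, 1): 0.8, (0, 0): 0.2, (0, 1): 0.5}

def p_z(z):
    return p_z1 if z == 1 else 1.0 - p_z1

# backdoor adjustment: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, z) P(z)
p_do = sum(p_y1_given_xz[(1, z)] * p_z(z) for z in (0, 1))

# naive observational estimate: P(Y=1 | X=1), which mixes in the confounding
p_x1 = sum(p_x1_given_z[z] * p_z(z) for z in (0, 1))
p_naive = sum(p_y1_given_xz[(1, z)] * p_x1_given_z[z] * p_z(z)
              for z in (0, 1)) / p_x1

print(round(p_do, 3), round(p_naive, 3))  # -> 0.62 0.718
```

    The gap between the two numbers is exactly the spurious correlation the abstract warns about; IDA extends the same adjustment logic to settings where the DAG itself must first be estimated.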

  14. Validating Bayesian truth serum in large-scale online human experiments

    Science.gov (United States)

    Frank, Morgan R.; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature about BTS focuses only on small experiments. Combined with the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers. PMID:28494000

  16. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  17. Large-scale machine learning for metagenomics sequence classification.

    Science.gov (United States)

    Vervier, Kévin; Mahé, Pierre; Tournoud, Maud; Veyrieras, Jean-Baptiste; Vert, Jean-Philippe

    2016-04-01

    Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Because of the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. We propose a new rank-flexible machine learning-based compositional approach for taxonomic assignment of metagenomics reads and show that it benefits from increasing the number of fragments sampled from the reference genome to tune its parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning the method involves training machine learning models on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting method is competitive in terms of accuracy with well-established alignment- and composition-based tools for problems involving a small to moderate number of candidate species and for reasonable amounts of sequencing errors. We show, however, that machine learning-based compositional approaches are still limited in their ability to deal with problems involving a greater number of species and are more sensitive to sequencing errors.
We finally show that the new method outperforms the state-of-the-art in its ability to classify reads from species of lineage absent from the reference database and confirm that compositional approaches achieve faster prediction times, with a gain of 2-17 times with respect to the BWA-MEM short read mapper, depending on the number of
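
    In its simplest form, the compositional representation mentioned above reduces to counting k-mer occurrences in each read; a minimal sketch with a toy read rather than real metagenomic data:

```python
# Minimal k-mer feature extraction: each read maps to a vector of 4**k counts
# that could then feed a linear classifier (the actual pipeline in the paper
# is far more elaborate, e.g. rank-flexible models and k up to about 12).
from itertools import product

def kmer_vector(read, k=3):
    # enumerate all 4**k k-mers over the DNA alphabet, lexicographically
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(read) - k + 1):
        km = read[i:i + k]
        if km in index:            # skip windows with ambiguous bases (e.g. N)
            vec[index[km]] += 1
    return vec

v = kmer_vector("ACGTACGT", k=3)
print(sum(v))  # -> 6 overlapping 3-mers in an 8-base read
```

    The dimension grows as 4^k, which is why k = 12 already puts training in the 10^7-dimension regime quoted above.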

  18. Perspective: Methods for large-scale density functional calculations on metallic systems.

    Science.gov (United States)

    Aarons, Jolyon; Sarwar, Misbah; Thompsett, David; Skylaris, Chris-Kriton

    2016-12-14

    Current research challenges in areas such as energy and bioscience have created a strong need for Density Functional Theory (DFT) calculations on metallic nanostructures of hundreds to thousands of atoms to provide understanding at the atomic level in technologically important processes such as catalysis and magnetic materials. Linear-scaling DFT methods for calculations with thousands of atoms on insulators are now reaching a level of maturity. However such methods are not applicable to metals, where the continuum of states through the chemical potential and their partial occupancies provide significant hurdles which have yet to be fully overcome. Within this perspective we outline the theory of DFT calculations on metallic systems with a focus on methods for large-scale calculations, as required for the study of metallic nanoparticles. We present early approaches for electronic energy minimization in metallic systems as well as approaches which can impose partial state occupancies from a thermal distribution without access to the electronic Hamiltonian eigenvalues, such as the classes of Fermi operator expansions and integral expansions. We then focus on the significant progress which has been made in the last decade with developments which promise to better tackle the length-scale problem in metals. We discuss the challenges presented by each method, the likely future directions that could be followed and whether an accurate linear-scaling DFT method for metals is in sight.
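
    The partial-occupancy hurdle described above can be made concrete with a small sketch: Fermi-Dirac smearing assigns fractional occupancies f_i = 1 / (1 + exp((eps_i - mu)/kT)), and the chemical potential mu is located by bisection so that the occupancies sum to the electron count. The eigenvalues below are invented for illustration; as the text notes, large-scale methods precisely try to avoid explicit access to eigenvalues.

```python
# Toy Fermi-Dirac occupancy problem: find mu so that occupancies sum to n_elec.
import math

def fermi(x):
    # overflow-safe logistic 1 / (1 + e^x)
    if x > 0:
        z = math.exp(-x)
        return z / (1.0 + z)
    return 1.0 / (1.0 + math.exp(x))

def occupations(eigs, mu, kT):
    # Fermi-Dirac occupancy of each state
    return [fermi((e - mu) / kT) for e in eigs]

def find_mu(eigs, n_elec, kT=0.01, lo=-10.0, hi=10.0):
    # total occupation is monotone in mu, so bisection suffices
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if sum(occupations(eigs, mu, kT)) < n_elec:
            lo = mu
        else:
            hi = mu
    return mu

eigs = [-0.50, -0.45, -0.40, -0.40, -0.10]   # invented spectrum
mu = find_mu(eigs, n_elec=4.0)
print(round(sum(occupations(eigs, mu, 0.01)), 6))  # -> 4.0
```

    Fermi operator expansions replace the explicit f(eps_i) evaluation above with polynomial or integral approximations of the same function applied directly to the Hamiltonian.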

  19. Compact fluorescent lamp modeling for large-scale harmonic penetration studies

    OpenAIRE

    Molina, Julio; Sainz Sapera, Luis

    2014-01-01

    Low-watt compact fluorescent lamps (CFLs) may account for significant energy consumption of power distribution feeders in future years. This can lead to power-quality concerns because these devices consume highly distorted currents. In order to predict CFL harmonic current emissions into networks, CFL models capable of predicting these emissions accurately are being studied. On the other hand, these models require great computational effort in large-scale CFL penetration studies. This paper p...

  20. The workshop on iterative methods for large scale nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F. [Utah State Univ., Logan, UT (United States). Dept. of Mathematics and Statistics; Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States). Utah Supercomputing Inst.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings, where the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  1. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  2. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  3. Efficient solution and analysis of large-scale goal programmes

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.; Tamiz, M.

    1994-12-31

Goal Programming (GP) models are formed by the extension of Linear Programming (LP) to include multiple, possibly conflicting, objectives. The past two decades have seen a wealth of GP applications along with theoretical papers describing methods for the analysis and solution of GPs. With the application area of GP covering and extending that of LP, GP models the size of recent large-scale LP models have arisen. Such models may contain large numbers of objectives as well as constraints and variables. These models require the adaptation of LP analysis techniques to account for their size, and further specialized GP speed-ups to lower solution times. This paper investigates efficient methods of Pareto-efficiency analysis and restoration, normalization, redundancy checking, preference function modelling, and interactive method application, appropriate for large-scale GPs, as well as suggesting a refined simplex procedure more appropriate for the solution of GPs.

  4. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

Research on the evaluation of large-scale public sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance. The impact of such reforms is considerable, and furthermore they change the context in which evaluations of other and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We compare the evaluation process (focus and purpose), the evaluators and the organization of the evaluation, as well as the utilization of the evaluation results. The analysis uncovers several significant findings, including how the initial organization of the evaluation shows strong impact on the utilization...

  5. In the fast lane: large-scale bacterial genome engineering.

    Science.gov (United States)

    Fehér, Tamás; Burland, Valerie; Pósfai, György

    2012-07-31

The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues not only for the analysis of gene functions and cellular network interactions, but also for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses either on the user experience or on the query engine, and concentrates on improving runtimes regardless of where the queries are issued from. In this work, we claim that progress can be made by taking a novel, more holistic view of the problem. We discuss a new approach that combines the two strands of research on the user experience and query engine parts in order to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  7. Performance of Grey Wolf Optimizer on large scale problems

    Science.gov (United States)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

For solving nonlinear continuous optimization problems, numerous nature-inspired optimization techniques have been proposed in the literature; they can be applied to real-life problems where conventional techniques cannot be. Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on five common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except on Rosenbrock, which is a unimodal function.
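As a concrete illustration of the benchmark setup the abstract describes, the following is a minimal sketch of the canonical (textbook) Grey Wolf Optimizer applied to the 50-dimensional Sphere function. This is a generic reference implementation, not the authors' code; the population size, iteration count, and bounds are assumptions chosen for illustration.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(x**2, axis=-1)

def gwo(obj, dim=50, n_wolves=30, iters=500, lb=-100.0, ub=100.0, seed=0):
    """Canonical Grey Wolf Optimizer: the pack is guided by the three best
    wolves (alpha, beta, delta); the exploration parameter a decays 2 -> 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = obj(X)
        order = np.argsort(fit)
        leaders = X[order[:3]].copy()          # alpha, beta, delta
        a = 2.0 * (1 - t / iters)
        r1 = rng.random((3, n_wolves, dim))
        r2 = rng.random((3, n_wolves, dim))
        A = 2 * a * r1 - a
        C = 2 * r2
        # distance of each wolf to each leader, then move toward the leaders
        D = np.abs(C * leaders[:, None, :] - X[None])
        X = np.clip((leaders[:, None, :] - A * D).mean(axis=0), lb, ub)
    fit = obj(X)
    return X[np.argmin(fit)], float(fit.min())

best, val = gwo(sphere, dim=50)
```

On the Sphere function the best fitness typically collapses toward zero well within 500 iterations, which is consistent with the fast convergence the paper reports on unimodal problems.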

  8. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    Science.gov (United States)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a 3.0 m diameter vehicle sub-orbital flight. To further this technology, large scale HIADs (6.0 to 8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  9. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  10. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
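The flame height to fire diameter ratio as a function of nondimensional heat release rate, which the reduced-scale tests were designed to measure, can be sketched with the standard literature definitions. The Heskestad-style correlation below is a well-known textbook fit used purely for illustration; the Phoenix report fits its own experimental data.

```python
import math

def q_star(Q, D, rho=1.2, cp=1.0, T_amb=293.0, g=9.81):
    """Nondimensional heat release rate Q* = Q / (rho * cp * T_amb * sqrt(g*D) * D^2).

    Q is the heat release rate in kW, D the pool diameter in m,
    rho and cp the ambient air density (kg/m^3) and specific heat (kJ/(kg K)).
    """
    return Q / (rho * cp * T_amb * math.sqrt(g * D) * D**2)

def flame_height_ratio(qs):
    """Heskestad-type flame height correlation L/D = 3.7 * Q*^(2/5) - 1.02."""
    return 3.7 * qs**0.4 - 1.02
```

Note how Q* falls rapidly with diameter (it scales as D^(-5/2) at fixed Q): this is why correlations anchored in small-scale data must be extrapolated with care to 21 m and 81 m pool fires, which is the motivation for the large-scale tests.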

  11. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.
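The signal model in this abstract, a superposition of s complex sinusoids with continuous-valued (off-grid) frequencies observed on a subset of time samples, can be generated as follows. This sketch only constructs the measurement setup; it does not implement the authors' semidefinite program, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sparse_2d_signal(n1, n2, s, seed=0):
    """A 2D spectrally sparse signal: sum of s complex sinusoids whose
    frequencies are drawn from the continuum [0, 1), i.e. not on a DFT grid."""
    rng = np.random.default_rng(seed)
    f1 = rng.uniform(0.0, 1.0, s)
    f2 = rng.uniform(0.0, 1.0, s)
    c = rng.standard_normal(s) + 1j * rng.standard_normal(s)  # complex amplitudes
    t1 = np.arange(n1)[:, None, None]
    t2 = np.arange(n2)[None, :, None]
    return (c * np.exp(2j * np.pi * (f1 * t1 + f2 * t2))).sum(axis=-1)

X = sparse_2d_signal(32, 32, s=5)
# partial observation: keep roughly half the time samples at random
mask = np.random.default_rng(1).random((32, 32)) < 0.5
samples = np.where(mask, X, 0)
```

The recovery problem is then to reconstruct X (and its frequencies) from `samples` and `mask` alone; the paper's contribution is a semidefinite program that scales this recovery to 500 × 500 signals.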

  12. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  13. Multivariate Clustering of Large-Scale Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial space is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n{sup 2}) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial space. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
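The core of the clustering step, grouping field variables whose value vectors point in similar directions, regardless of where they sit spatially, can be sketched as a greedy threshold-based pass. This is a minimal illustrative variant: the paper's algorithm additionally exploits geometric properties of the cosine measure to reach the O(n x g(f(u))) cost, which is not reproduced here.

```python
import numpy as np

def cosine_cluster(V, threshold):
    """Greedy single-pass clustering of field-variable vectors by cosine similarity.

    V: (m, n) array, one row per field variable sampled at n points
       (spatial and temporal coordinates are deliberately excluded).
    Each row joins the most similar existing cluster seed if the cosine
    similarity meets `threshold`; otherwise it seeds a new cluster.
    """
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    U = V / np.where(norms == 0, 1, norms)   # unit vectors: dot product = cosine
    seeds, clusters = [], []
    for i, u in enumerate(U):
        sims = [float(u @ s) for s in seeds]
        if sims and max(sims) >= threshold:
            clusters[int(np.argmax(sims))].append(i)
        else:
            seeds.append(u)
            clusters.append([i])
    return clusters

# two nearly parallel variables group together; the orthogonal one stands alone
clusters = cosine_cluster(np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]),
                          threshold=0.9)
```

Because the comparison is purely directional, two variables with the same profile land in one cluster even if their large values occur in spatially distant regions, which is exactly why the linking step back to the topology tree is needed afterwards.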

  14. Multivariate Clustering of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n{sup 2}) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  15. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  16. On the Phenomenology of an Accelerated Large-Scale Universe

    OpenAIRE

    Martiros Khurshudyan

    2016-01-01

In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. On the other hand, inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion is one of the long-standing problems in modern cosmology, and in physics in general. There are several well defined approaches to solve this problem. One of them is an assumption concerning the existenc...

  17. A Large-Scale Study of Online Shopping Behavior

    OpenAIRE

    Nalchigar, Soroosh; Weber, Ingmar

    2012-01-01

The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitudes are widespread in the literature, studies of differences in browsing habits in relation to online shopping are scarce. This research performs a large scale study of the relationship between Internet browsing hab...

  18. Robust regression for large-scale neuroimaging studies.

    OpenAIRE

    Bokde, Arun

    2015-01-01

Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypot...

  19. Computing Sampling Weights in Large-scale Assessments in Education

    OpenAIRE

    Meinck, Sabine

    2015-01-01

    Sampling weights are a reflection of sampling design; they allow us to draw valid conclusions about population features from sample data. This paper explains the fundamentals of computing sampling weights for large-scale assessments in educational research. The relationship between the nature of complex samples and best practices in developing a set of weights to enable computation of unbiased population estimates is described. Effects of sampling weights on estimates are shown...
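The fundamental idea the abstract describes, that a sampling weight is the inverse of a unit's overall inclusion probability, can be shown for a simplified two-stage design (schools sampled with equal probability, then students within each sampled school). The function and all numbers below are hypothetical illustrations; real large-scale assessments use stratification and probability-proportional-to-size sampling.

```python
def base_weight(N_schools, n_schools, N_students, n_students):
    """Base weight for a student in a simplified two-stage equal-probability
    sample: the inverse of the product of the stage-wise inclusion
    probabilities, i.e. how many population students each sampled student
    represents."""
    p_school = n_schools / N_schools      # stage 1: school inclusion probability
    p_student = n_students / N_students   # stage 2: within-school probability
    return 1.0 / (p_school * p_student)

# 100 of 1000 schools, then 20 of 80 students in each sampled school:
w = base_weight(1000, 100, 80, 20)
```

A useful sanity check on any set of base weights is that they sum (over the sample) to the population size: here 100 schools x 20 students x w recovers the 1000 x 80 students in the population, which is what makes unbiased population estimates possible.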

  20. PKI security in large-scale healthcare networks

    OpenAIRE

    Mantas, G.; Lymberopoulos, D.; Komninos, N.

    2012-01-01

During the past few years, many PKI (Public Key Infrastructure) schemes have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKIs, especially for those deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...