WorldWideScience

Sample records for region wasatch-cache national

  1. 40 CFR 81.52 - Wasatch Front Intrastate Air Quality Control Region.

    Science.gov (United States)

    2010-07-01

    ... Quality Control Regions § 81.52 Wasatch Front Intrastate Air Quality Control Region. The Wasatch Front Intrastate Air Quality Control Region (Utah) consists of the territorial area encompassed by the boundaries... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Wasatch Front Intrastate Air Quality...

  2. Investigating potential effects of heli-skiing on golden eagles in the Wasatch Mountains, Utah

    Science.gov (United States)

    Teryl G. Grubb; David K. Delaney; William W. Bowerman

    2007-01-01

    Implementing further research was beyond the scope of the U.S. Forest Service's 2004 Final Environmental Impact Statement (FEIS) and 2005 Wasatch Powderbird Guides (WPG) Special Use Permit Renewal process for heli-skiing in the Tri-Canyon Area in the Wasatch Mountains, just east of Salt Lake City, Utah. However, in their Record of Decision the Wasatch-Cache (WCNF...

  3. Earthquake forecast for the Wasatch Front region of the Intermountain West

    Science.gov (United States)

    DuRoss, Christopher B.

    2016-04-18

    The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.
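
    For intuition only (a back-of-envelope not taken from the working group's report), a probability of at least one event in a 50-year window can be converted to an equivalent annual rate if earthquake occurrence is assumed to follow a Poisson process; the formula and numbers below rest entirely on that assumption.

      # Back-of-envelope conversion (not from the report): under a Poisson model,
      # a probability P of at least one event in T years implies an annual rate
      # lambda = -ln(1 - P) / T.
      import math

      def annual_rate(p_in_t_years, t_years=50):
          return -math.log(1.0 - p_in_t_years) / t_years

      for label, p in [("M >= 6.75", 0.43), ("M >= 6.0", 0.57)]:
          lam = annual_rate(p)
          print(f"{label}: ~{lam:.4f}/yr, i.e. roughly one per {1 / lam:.0f} years")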

  4. Merging long range transportation planning with public health: a case study from Utah's Wasatch Front.

    Science.gov (United States)

    Burbidge, Shaunna K

    2010-01-01

    US transportation systems have been identified as a problem for public health, as they often encourage automobile transportation and discourage physical activity. This paper provides a case study examination of the Public Health Component of the Wasatch Front Regional Council's Regional Transportation Plan. This plan provides an example of what transportation planners at Utah's largest metropolitan planning organization (MPO) are doing to encourage physical activity through transportation. Existing active living research was used to guide recommendations using a process that included a comprehensive literature review and a review of existing state programs, advisory group and stakeholder meetings, and policy recommendations based on existing local conditions. Stakeholders from a diversity of backgrounds and interests came together with one common goal: to improve public health. Based on this collaborative process, nine policy approaches were specifically recommended for approval and integration in the Wasatch Front Regional Transportation Plan. By using current research as a guide and integrating a variety of interests, the Wasatch Front Regional Council is setting a new standard for a collaborative multi-modal focus in transportation planning, which can be replicated nationwide.

  5. 78 FR 58158 - Establishment of Class E Airspace; Wasatch, UT

    Science.gov (United States)

    2013-09-23

    ... operations within the National Airspace System. This action also makes a minor adjustment to the geographic... the geographic coordinates of the Wasatch VORTAC needed to be corrected. This action makes the... of IFR operations. The geographic coordinates of the VORTAC are adjusted from (Lat. 40[deg]51'10'' N...

  6. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...

  7. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows flexible sharing of cached files among unauthenticated users, i.e. unlike most distributed file systems CryptoCache does not require a global authentication framework. Files are encrypted when they are transferred over the network and while stored on untrusted servers. The system uses public key...

  8. Applications of research from the U.S. Geological Survey program, assessment of regional earthquake hazards and risk along the Wasatch Front, Utah

    Science.gov (United States)

    Gori, Paula L.

    1993-01-01

    engineering studies. Translated earthquake hazard maps have also been developed to identify areas that are particularly vulnerable to various causes of damage such as ground shaking, surface rupturing, and liquefaction. The implementation of earthquake hazard reduction plans is now under way in various communities in Utah. The results of a survey presented in this paper indicate that technical public officials (planners and building officials) have an understanding of the earthquake hazards and how to mitigate the risks. Although the survey shows that the general public has a slightly lower concern about the potential for economic losses, they recognize the potential problems and can support a number of earthquake mitigation measures. The study suggests that many community groups along the Wasatch Front, including volunteer groups, business groups, and elected and appointed officials, are ready for action-oriented educational programs. These programs could lead to a significant reduction in the risks associated with earthquake hazards. A DATA BASE DESIGNED FOR URBAN SEISMIC HAZARDS STUDIES: A computerized data base has been designed for use in urban seismic hazards studies conducted by the U.S. Geological Survey. The design includes file structures for 16 linked data sets, which contain geological, geophysical, and seismological data used in preparing relative ground response maps of large urban areas. The data base is organized along relational data base principles. A prototype urban hazards data base has been created for evaluation in two urban areas currently under investigation: the Wasatch Front region of Utah and the Puget Sound area of Washington. The initial implementation of the urban hazards data base was accomplished on a microcomputer using dBASE III Plus software and transferred to minicomputers and a work station. A MAPPING OF GROUND-SHAKING INTENSITIES FOR SALT LAKE COUNTY, UTAH: This paper documents the development of maps showing a

  9. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

    Full Text Available Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application by avoiding multiple read operations for the same data. This paper addresses caching problems from a pattern perspective. Both Caching and caching strategies, like primed and on-demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a way to avoid re-acquiring expensive resources. The Primed Cache pattern is applied in situations in which the set of required resources, or at least a part of it, can be predicted, while the Demand Cache pattern is applied whenever the required resource set cannot be predicted or is infeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.
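
    A minimal sketch of the two strategies the abstract names, assuming a hypothetical fetch function standing in for an expensive resource acquisition; this illustrates the pattern idea and is not code from the paper.

      # Demand ("on demand") cache: fill entries lazily on first use.
      # Primed cache: pre-load a predictable set of resources up front.
      class DemandCache:
          def __init__(self, fetch):
              self.fetch = fetch          # expensive acquisition, e.g. a database read
              self.store = {}

          def get(self, key):
              if key not in self.store:   # miss: acquire once and remember
                  self.store[key] = self.fetch(key)
              return self.store[key]

      class PrimedCache(DemandCache):
          def __init__(self, fetch, predicted_keys):
              super().__init__(fetch)
              for key in predicted_keys:  # eagerly acquire the predictable part
                  self.store[key] = fetch(key)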

  10. CacheCard : Caching static and dynamic content on the NIC

    NARCIS (Netherlands)

    Bos, Herbert; Huang, Kaiming

    2009-01-01

    CacheCard is a NIC-based cache for static and dynamic web content in a way that allows for implementation on simple devices like NICs. It requires neither understanding of the way dynamic data is generated, nor execution of scripts on the cache. By monitoring file system activity and potential

  11. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait to group more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases and has a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for mapping-table storage in RAM.
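
    A minimal sketch (not the authors' implementation) of the placement rule described above: flushed page groups smaller than a configurable threshold are buffered in the page-mapped region (PMR), while larger groups go straight to the block-mapped region (BMR). The Region class and the example threshold are hypothetical stand-ins.

      class Region:
          """Hypothetical stand-in for a flash region; just records what it received."""
          def __init__(self, name):
              self.name, self.groups = name, []
          def accept(self, pages):
              self.groups.append(list(pages))

      def place_page_group(pages, threshold, pmr, bmr):
          # Small groups are buffered in the page-mapped region and merged later;
          # large groups are flushed directly to the block-mapped region.
          (pmr if len(pages) < threshold else bmr).accept(pages)

      pmr, bmr = Region("PMR"), Region("BMR")
      place_page_group([3, 4], threshold=4, pmr=pmr, bmr=bmr)               # -> PMR
      place_page_group([8, 9, 10, 11, 12], threshold=4, pmr=pmr, bmr=bmr)   # -> BMR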

  12. A method cache for Patmos

    DEFF Research Database (Denmark)

    Degasperi, Philipp; Hepp, Stefan; Puffitsch, Wolfgang

    2014-01-01

    For real-time systems we need time-predictable processors. This paper presents a method cache as a time-predictable solution for instruction caching. The method cache caches whole methods (or functions) and simplifies worst-case execution time analysis. We have integrated the method cache in the time-predictable processor Patmos. We evaluate the method cache with a large set of embedded benchmarks. Most benchmarks show a good hit rate for a method cache size in the range between 4 and 16 KB.

  13. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  14. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Full Text Available Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme.
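
    A minimal sketch of probability-based cache admission in the spirit of the abstract, not the authors' code: a write is admitted to the NVM cache only if a random test succeeds, so frequently written blocks eventually enter the cache while rarely written ones usually bypass it. The admission probability and the naive eviction are illustrative assumptions.

      import random

      class ProbabilisticCache:
          def __init__(self, capacity, admit_prob=0.2):
              self.capacity, self.admit_prob = capacity, admit_prob
              self.entries = {}

          def write(self, block, data):
              if block in self.entries or random.random() < self.admit_prob:
                  if block not in self.entries and len(self.entries) >= self.capacity:
                      self.entries.pop(next(iter(self.entries)))  # naive eviction
                  self.entries[block] = data   # write absorbed by the NVM cache
                  return True
              return False                     # bypasses the cache, goes to flash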

  15. Cache-Conscious Radix-Decluster Projections

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); N.J. Nes (Niels); M.L. Kersten (Martin)

    2004-01-01

    textabstractAs CPUs become more powerful with Moore's law and memory latencies stay constant, the impact of the memory access performance bottleneck continues to grow on relational operators like join, which can exhibit random access on a memory region larger than the hardware caches. While

  16. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase the load on the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories are still a controversial issue: while in some studies strong coherency is far too expensive to be used in the Web context, in other studies it could be maintained at a low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on the simulation accuracy. The results presented in this study indeed show differences in the outcome of the simulation of a Web cache depending on the workload being used and the probability distribution used to approximate updates on the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
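
    A minimal sketch (not from the study) contrasting the two coherency categories discussed above: weak coherency trusts a cached copy until a time-to-live expires, while strong coherency validates against the origin on every access. The Origin class and document versions are hypothetical stand-ins.

      import time

      class CachedDocument:
          def __init__(self, url, body, version, ttl=60.0):
              self.url, self.body, self.version = url, body, version
              self.expires = time.time() + ttl

      class Origin:
          """Hypothetical stand-in for the origin server."""
          def __init__(self):
              self.docs = {"/index.html": ("v2", "<html>new</html>")}
          def current_version(self, url):
              return self.docs[url][0]
          def fetch(self, url):
              version, body = self.docs[url]
              return CachedDocument(url, body, version)

      def get_weak(doc, origin):
          """Weak coherency: trust the cached copy until its TTL expires (may serve stale data)."""
          return doc.body if time.time() < doc.expires else origin.fetch(doc.url).body

      def get_strong(doc, origin):
          """Strong coherency: validate the cached version against the origin on every access."""
          if origin.current_version(doc.url) == doc.version:
              return doc.body
          return origin.fetch(doc.url).body

      origin = Origin()
      stale = CachedDocument("/index.html", "<html>old</html>", "v1", ttl=60.0)
      print(get_weak(stale, origin))     # '<html>old</html>'  (stale, but within its TTL)
      print(get_strong(stale, origin))   # '<html>new</html>'  (validation catches the update)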

  17. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    ... completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more ... addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while keeping the time-predictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses. Furthermore...

  18. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.

  19. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    it is important to classify memory accesses as either cache hit or cache miss. The addresses of instruction fetches are known statically and static cache hit/miss classification is possible for the instruction cache. The access to data that is cached in the data cache is harder to predict statically. Several...

  20. Research on Cache Placement in ICN

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-08-01

    Full Text Available Ubiquitous in-network caching is one of the key features of Information Centric Networking (ICN); together with the receiver-driven content retrieval paradigm, it provides better support for content distribution, multicast, mobility, etc. The cache placement strategy is crucial to improving the utilization of cache space and reducing the occupation of link bandwidth. Most of the literature on caching policies considers the overall cost and bandwidth, but ignores the limits of node cache capacity. This paper proposes a G-FMPH algorithm which takes into account both constraints on the link bandwidth and on the cache capacity of nodes. Our algorithm aims at minimizing the overall cost of content caching. The simulation results prove that the proposed algorithm has better performance.

  1. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications

  2. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permutin...

  3. Optimizing Maintenance of Constraint-Based Database Caches

    Science.gov (United States)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i. e., to achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.

  4. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  5. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

    Full Text Available Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.

  6. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object to a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and for new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications

  7. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  8. Test data generation for LRU cache-memory testing

    OpenAIRE

    Evgeni, Kornikhin

    2009-01-01

    System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents for a given behavior of an assembly program (with cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes it in detail only for the basic types of cache memory: fully associative caches and direct-mapped caches.
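
    A minimal LRU cache sketch, not the paper's constraint-based algorithm, showing the kind of hit/miss behavior such test generation has to reproduce: on a miss with a full cache, the least recently used line is evicted. The address trace is a made-up example.

      from collections import OrderedDict

      class LRUCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.lines = OrderedDict()          # address -> data, oldest first

          def access(self, address):
              if address in self.lines:
                  self.lines.move_to_end(address) # hit: mark as most recently used
                  return "hit"
              if len(self.lines) >= self.capacity:
                  self.lines.popitem(last=False)  # miss: evict least recently used
              self.lines[address] = None
              return "miss"

      cache = LRUCache(capacity=2)
      trace = [0x10, 0x20, 0x10, 0x30, 0x20]      # hypothetical address trace
      print([cache.access(a) for a in trace])     # ['miss', 'miss', 'hit', 'miss', 'miss']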

  9. MESI Cache Coherence Simulator for Teaching Purposes

    OpenAIRE

    Gómez Luna, Juan; Herruzo Gómez, Ezequiel; Benavides Benítez, José Ignacio

    2009-01-01

    Nowadays, computational systems (multiprocessors and uniprocessors) need to deal with the cache coherence problem. There are several techniques to solve this problem, and the MESI cache coherence protocol is one of them. This paper presents a simulator of the MESI protocol, which is used for teaching cache memory coherence in computer systems with a hierarchical memory system and for explaining the process of cache memory location in multilevel cache memory systems. The paper shows a d...
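
    A minimal sketch (not the simulator described in the paper) of the four MESI states and a representative subset of transitions, to make the protocol concrete; the event names are informal labels chosen for this illustration, and only a single cache line reacting to local and bus events is modeled.

      MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

      # (current state, event) -> next state; a representative subset of MESI
      TRANSITIONS = {
          (INVALID,   "local_read_no_sharers"): EXCLUSIVE,
          (INVALID,   "local_read_sharers"):    SHARED,
          (INVALID,   "local_write"):           MODIFIED,
          (EXCLUSIVE, "local_write"):           MODIFIED,   # silent upgrade, no bus traffic
          (EXCLUSIVE, "bus_read"):              SHARED,
          (SHARED,    "local_write"):           MODIFIED,   # requires a bus invalidate
          (SHARED,    "bus_write"):             INVALID,
          (MODIFIED,  "bus_read"):              SHARED,     # must write back first
          (MODIFIED,  "bus_write"):             INVALID,
      }

      def next_state(state, event):
          return TRANSITIONS.get((state, event), state)

      state = INVALID
      for event in ["local_read_no_sharers", "local_write", "bus_read"]:
          state = next_state(state, event)
          print(event, "->", state)   # E, then M, then S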

  10. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many...

  11. Cache-aware network-on-chip for chip multiprocessors

    Science.gov (United States)

    Tatas, Konstantinos; Kyriacou, Costas; Dekoulis, George; Demetriou, Demetris; Avraam, Costas; Christou, Anastasia

    2009-05-01

    This paper presents the hardware prototype of a Network-on-Chip (NoC) for a chip multiprocessor that provides support for cache coherence, cache prefetching and cache-aware thread scheduling. A NoC with support to these cache related mechanisms can assist in improving systems performance by reducing the cache miss ratio. The presented multi-core system employs the Data-Driven Multithreading (DDM) model of execution. In DDM thread scheduling is done according to data availability, thus the system is aware of the threads to be executed in the near future. This characteristic of the DDM model allows for cache aware thread scheduling and cache prefetching. The NoC prototype is a crossbar switch with output buffering that can support a cache-aware 4-node chip multiprocessor. The prototype is built on the Xilinx ML506 board equipped with a Xilinx Virtex-5 FPGA.

  12. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  13. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  14. Cache management of tape files in mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Ma Nan; Yu Chuansong; Chen Gang

    2006-01-01

    This paper proposes a group-cooperative caching policy based on the characteristics of tapes and the requirements of the high energy physics domain. The policy integrates the advantages of traditional local caching and cooperative caching on the basis of a cache model. It divides the cache into independent groups; each group is made up of cooperating disks on the network. The paper also analyzes the directory management, update algorithm and cache consistency of the policy. The experiment shows the policy meets the requirements of data processing and mass storage in the high energy physics domain very well. (authors)

  15. Reducing Competitive Cache Misses in Modern Processor Architectures

    OpenAIRE

    Prisagjanec, Milcho; Mitrevski, Pece

    2017-01-01

    The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, become the main reasons for an increased number of competitive cache misses and performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we make an attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This tec...

  16. Software trace cache

    OpenAIRE

    Ramírez Bellido, Alejandro; Larriba Pey, Josep; Valero Cortés, Mateo

    2005-01-01

    We explore the use of compiler optimizations, which optimize the layout of instructions in memory. The target is to enable the code to make better use of the underlying hardware resources regardless of the specific details of the processor/architecture in order to increase fetch performance. The Software Trace Cache (STC) is a code layout algorithm with a broader target than previous layout optimizations. We target not only an improvement in the instruction cache hit rate, but also an increas...

  17. The dCache scientific storage cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  18. A Distributed Cache Update Deployment Strategy in CDN

    Science.gov (United States)

    E, Xinhua; Zhu, Binjie

    2018-04-01

    The CDN management system distributes content objects to the edge of the Internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. A cache strategy was designed in which content diffuses effectively within the cache group, so that more content is stored in the cache and the group hit rate is improved.

  19. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2010-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...

  20. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

    Full Text Available A big data platform for equipment condition assessment is built for comprehensive analysis. The platform has various application demands. According to response time, its applications can be divided into offline, interactive and real-time types. For real-time applications, data processing efficiency is important. In general, a data cache is one of the most efficient ways to improve query time. However, big data caching is different from traditional data caching. In this paper we propose a distributed cache management framework of big data for equipment condition assessment. It consists of three parts: a cache structure, a cache replacement algorithm and a cache placement algorithm. The cache structure is the basis of the latter two algorithms. Based on the framework and algorithms, we make full use of the fact that only some valuable data are accessed during a given period of time, and place relevant data on neighborhood nodes, which largely reduces network transmission cost. We also validate the performance of the proposed approaches through extensive experiments. They demonstrate that the proposed cache replacement algorithm and cache management framework have a higher hit rate or lower query time than the LRU and round-robin algorithms.

  1. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries called WATCHMAN, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
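
    A minimal sketch of a WATCHMAN-style profit metric as the abstract describes it (average rate of reference and query execution cost, normalized by size), not the system's actual code; the field names and numbers are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class RetrievedSet:
          query_id: str
          size: int          # cache space occupied, e.g. bytes or pages
          exec_cost: float   # cost of re-running the query if the set is evicted
          ref_rate: float    # average rate of reference to this set

      def profit(rs: RetrievedSet) -> float:
          # benefit of keeping the set per unit of cache space it occupies
          return rs.ref_rate * rs.exec_cost / rs.size

      def eviction_candidate(cached):
          return min(cached, key=profit)

      cached = [
          RetrievedSet("q1", size=100, exec_cost=5.0, ref_rate=0.9),
          RetrievedSet("q2", size=400, exec_cost=2.0, ref_rate=0.1),
      ]
      print(eviction_candidate(cached).query_id)   # 'q2' has the lower profit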

  2. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can ... -graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  3. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

    Full Text Available We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table – the result of the query execution. These queries have a special form called "universal relational query", based on three fundamental Relational Algebra operations: selection, projection and natural join. Such a form is the closest one to natural language, and the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed in a client's local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in a new user's query and in the results of the queries already executed in the cache. Such a comparison can be performed analytically, without the need for additional database queries. This method may also be used to identify data lacking in the cache and execute the query on the server only for those data. Here the analytical approach is used as well, which distinguishes our work from existing technologies. We propose four theorems for testing the required conditions. The conditions of the first and third theorems allow us to determine the existence of the required data in the cache. The second and fourth theorems state conditions for executing queries with the cache only. The problem of cache data actualization is not discussed in this paper; however, it can be solved by cataloging queries on the server and serving them by triggers in background mode. The article is published in the author's wording.
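
    A minimal sketch of the containment idea behind the truth-space comparison, not the paper's four theorems: if the logical restriction of a new query is implied by that of a query whose result is already cached, the new query can be answered from the cache alone. Restrictions are simplified here to one numeric range per attribute, and the attribute names are hypothetical.

      def contained(new_restriction, cached_restriction):
          """Is the truth space of new_restriction inside that of cached_restriction?"""
          for attr, (c_lo, c_hi) in cached_restriction.items():
              lo, hi = new_restriction.get(attr, (float("-inf"), float("inf")))
              if lo < c_lo or hi > c_hi:    # new query admits tuples the cached one excludes
                  return False
          return True

      cached = {"age": (18, 65)}                    # result of "age BETWEEN 18 AND 65" is cached
      print(contained({"age": (30, 40)}, cached))   # True  -> answer from the cache
      print(contained({"age": (10, 40)}, cached))   # False -> must query the server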

  4. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  5. Wintertime Ambient Ammonia Concentrations in Northern Utah's Urban Valleys

    Science.gov (United States)

    Hammond, I. A.; Martin, R. S.; Silva, P.; Baasandorj, M.

    2017-12-01

    Many of the population centers in northern Utah are currently classified as non-attainment or serious non-attainment (Wasatch Front) for PM2.5, and previous studies have shown ammonium nitrate to often be the largest contributor to the particulate mass. Furthermore, measurements have shown that several of the Wasatch Front cities and the Cache Valley (UT/ID) consistently record some of the highest ambient ammonia (NH3) concentrations in the continental United States. As a part of the multi-organization 2017 Utah Winter Fine Particulate Study, real-time NH3 concentrations were monitored in the Cache Valley at the Logan, UT site, collocated at an EPA sampling trailer near the Utah State University (USU) campus. A Picarro model G2508 was used to collect 5-sec averaged concentrations of NH3, carbon dioxide (CO2), and methane (CH4) from January 16th to February 14th, 2017. Parts of three inversion events, wherein the PM2.5 concentrations approached or exceeded the National Ambient Air Quality Standards, were captured during the sampling period, including a 10-day event from January 25th to February 4th. Concentrations of all three of the observed species showed significant accumulation during the events, with NH3 concentrations ranging from below the detection limit to more than 70 ppb. Preliminary analysis suggested the temporal NH3 changes tracked the increase in PM2.5 throughout the inversion events; however, a one-day period of NH3 depletion during the main inversion event was observed while PM2.5 continued to increase. Additionally, a network of passive NH3 samplers (Ogawa Model 3300) was arrayed at 25 sites throughout the Cache Valley and at 11 sites located along the Wasatch Front. These networks sampled for three 7-day periods during the same study time frame. Ion chromatographic (IC) analyses of the sample pads are not yet finalized; however, preliminary results show concentrations in the tens of ppb and seemingly spatially correlate with previous studies showing elevated

  6. Efficient Mobile Client Caching Supporting Transaction Semantics

    Directory of Open Access Journals (Sweden)

    IlYoung Chung

    2000-05-01

    Full Text Available In mobile client-server database systems, caching of frequently accessed data is an important technique that reduces contention on the narrow-bandwidth wireless channel. As the server in mobile environments may not have any information about the state of its clients' caches (stateless server), using a broadcasting approach to transmit the updated data lists to numerous concurrent mobile clients is attractive. In this paper, a caching policy is proposed to maintain cache consistency for mobile computers. The proposed protocol adopts asynchronous (non-periodic) broadcasting as the cache invalidation scheme, and supports transaction semantics in mobile environments. With the asynchronous broadcasting approach, the proposed protocol can improve throughput by reducing transaction aborts at low communication cost. We study the performance of the protocol by means of simulation experiments.

  7. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
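
    A small illustration of the unit-stride point made above, assuming a row-major NumPy array: iterating with the column index innermost touches consecutive memory, while iterating with the row index innermost jumps a full row per step. In pure Python the interpreter overhead hides most of the effect; the access pattern is what matters for compiled code.

      import numpy as np

      a = np.arange(1024 * 1024, dtype=np.float64).reshape(1024, 1024)  # row-major (C order)

      def sum_unit_stride(m):
          total = 0.0
          for i in range(m.shape[0]):
              for j in range(m.shape[1]):
                  total += m[i, j]        # inner index j walks consecutive elements
          return total

      def sum_large_stride(m):
          total = 0.0
          for j in range(m.shape[1]):
              for i in range(m.shape[0]):
                  total += m[i, j]        # inner index i jumps one full row per step
          return total

      # Same result, very different cache behavior in compiled code.
      print(sum_unit_stride(a[:64, :64]) == sum_large_stride(a[:64, :64]))   # True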

  8. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache trashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis. However, before implementing such an object cache, an empirical analysis of different organization forms is needed. We use a cross-profiling technique based on aspect-oriented programming in order to evaluate different object cache organizations with standard Java benchmarks. From the evaluation we conclude that field access exhibits some temporal locality, but almost no spatial locality. Therefore, filling long cache lines on a miss just introduces a high miss penalty without increasing the hit rate enough to make up for the increased miss penalty. For an object cache, it is more efficient to fill...

  9. Archeological Excavations at the Wanapum Cache Site

    International Nuclear Information System (INIS)

    T. E. Marceau

    2000-01-01

    This report was prepared to document the actions taken to locate and excavate an abandoned Wanapum cache located east of the 100-H Reactor area. Evidence (i.e., glass, ceramics, metal, and wood) obtained from shovel and backhoe excavations at the Wanapum cache site indicates that the storage caches were found. The highly fragmented condition of these materials argues that the contents of the caches were collected or destroyed prior to the caches being burned and buried by mechanical equipment. While the fiber nets would have been destroyed by fire, the specialized stone weights would have remained behind. The fact that the site might have been gleaned of desirable artifacts prior to its demolition is consistent with the account by Riddell (1948) for a contemporary village site. Unfortunately, fishing equipment, owned by and used on behalf of the village, that might have returned to productive use has been irretrievably lost

  10. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  11. Exploitation of pocket gophers and their food caches by grizzly bears

    Science.gov (United States)

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perideridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  12. Corvid re-caching without 'theory of mind': a model.

    Science.gov (United States)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2012-01-01

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  13. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Full Text Available Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  14. Analysis of preemption costs for the stack cache

    DEFF Research Database (Denmark)

    Naji, Amine; Abbaspour, Sahar; Brandner, Florian

    2018-01-01

    , the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  15. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in a Mobile Ad hoc Network (MANET). Maintaining the cache is a challenging issue in a large MANET due to mobility, cache size and power. Previous research works on caching have primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the best replaceable data, based on neighbours' interest and the fitness value of cached data, in order to store newly arrived data. This work also elects an ideal CH using the metaheuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and the speed increase, respectively.

  16. Hydrogeochemical and stream sediment detailed geochemical survey for Thomas Range-Wasatch, Utah. Farmington Project area

    International Nuclear Information System (INIS)

    Butz, T.R.; Bard, C.S.; Witt, D.A.; Helgerson, R.N.; Grimes, J.G.; Pritz, P.M.

    1980-01-01

    Results of the Farmington project area of the Thomas Range-Wasatch detailed geochemical survey are reported. Field and laboratory data are presented for 71 groundwater samples, 345 stream sediment samples, and 178 radiometric readings. Statistical and areal distributions of uranium and possible uranium-related variables are given. A generalized geologic map of the project area is provided, and pertinent geologic factors which may be of significance in evaluating the potential for uranium mineralization are briefly discussed. Uranium concentrations in groundwater range from <0.20 to 21.77 ppB. The highest values are from groundwaters producing from areas in or near the Norwood Tuff and Wasatch, Evanston, and/or Echo Canyon Formations, and the Farmington Canyon Complex. The uranium:boron ratio delineates an anomalous trend associated with the Farmington Canyon Complex. Variables associated with uranium in groundwaters producing from the Norwood Tuff and Wasatch, Evanston, and/or Echo Canyon Formations include the uranium:sulfate ratio, boron, barium, potassium, lithium, silicon, chloride, selenium, and vanadium. Soluble uranium concentrations (U-FL) in stream sediments range from 0.99 to 86.41 ppM. Total uranium concentrations (U-NT) range from 1.60 to 92.40 ppM. Thorium concentrations range from <2 to 47 ppM. Anomalous concentrations of these variables are associated with the Farmington Canyon Complex. Variables which are associated with uranium include cerium, sodium, niobium, phosphorus, titanium, and yttrium

  17. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e · log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes ... the random placement of the first element of the structure in memory. As searching in the Disk Access Model (DAM) can be performed in log_B N + 1 block transfers, this result shows a separation between the 2-level DAM and cache-oblivious memory-hierarchy models. By extending the DAM model to k levels, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  18. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in the users’ logs. The static cache holds the highest-ranked user queries, that is, the most popular ones. We adopt a dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the two-level cache offers advantages in hit rate, efficiency, and time consumption compared with other cache structures.

  19. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in the users' logs. The static cache holds the highest-ranked user queries, that is, the most popular ones. We adopt a dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the two-level cache offers advantages in hit rate, efficiency, and time consumption compared with other cache structures.
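
    A minimal sketch of such a two-level query-result cache, assuming a static tier seeded with the top-ranked queries from a request log and an auxiliary LRU dynamic tier; the tier sizes, the log format, and the backend lookup function are illustrative assumptions rather than details from the paper.

```python
from collections import Counter, OrderedDict

class TwoLevelQueryCache:
    """Static tier for the most popular logged queries, plus an auxiliary LRU dynamic tier."""

    def __init__(self, query_log, static_size, dynamic_size, backend):
        self.backend = backend                        # function: query -> result list
        top = Counter(query_log).most_common(static_size)
        self.static = {q: backend(q) for q, _ in top} # highest-ranked queries from the users' logs
        self.dynamic = OrderedDict()                  # LRU order: oldest entry first
        self.dynamic_size = dynamic_size

    def get(self, query):
        if query in self.static:                      # static tier: precomputed, never evicted
            return self.static[query]
        if query in self.dynamic:                     # dynamic tier: refresh LRU position
            self.dynamic.move_to_end(query)
            return self.dynamic[query]
        result = self.backend(query)                  # miss: ask the distributed index
        self.dynamic[query] = result
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)          # evict the least recently used query
        return result
```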

  20. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
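
    As a rough illustration of the caching distributions the abstract compares, the snippet below evaluates the cache hit probability of a single SBS under Zipf-distributed requests for a uniform and a popularity-proportional placement; the catalog size, cache size, and Zipf exponent are arbitrary assumptions, and the stochastic-geometry part of the analysis (service distances, coverage, spectrum access) is not reproduced.

```python
import numpy as np

def zipf_popularity(n_files, gamma):
    """Request probability of each file under a Zipf law with exponent gamma."""
    weights = np.arange(1, n_files + 1, dtype=float) ** (-gamma)
    return weights / weights.sum()

def hit_probability(popularity, caching_prob):
    """Hit probability when file i is cached with probability caching_prob[i]."""
    return float(np.dot(popularity, caching_prob))

n_files, cache_size, gamma = 100, 10, 0.8
p = zipf_popularity(n_files, gamma)

uniform = np.full(n_files, cache_size / n_files)   # every file cached with equal probability
zipf_cache = np.minimum(1.0, cache_size * p)       # cache in proportion to popularity
# clipping at 1 keeps the expected number of cached files at most cache_size
print("uniform placement:   ", hit_probability(p, uniform))
print("Zipf-proportional:   ", hit_probability(p, zipf_cache))
```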

  1. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  2. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  3. Dynamic web cache publishing for IaaS clouds using Shoal

    International Nuclear Information System (INIS)

    Gable, Ian; Chester, Michael; Berghaus, Frank; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan; Armstrong, Patrick; Charbonneau, Andre

    2014-01-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache

  4. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states and unmet states).

  5. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  6. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve the data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client's mobility make cache management a challenge. In this paper, we propose a utility based cache replacement policy, least utility value (LUV), to improve the data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as its replacement policy. In ZC, one-hop neighbors of a client form a cooperation zone since the cost for communication with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
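
    A minimal sketch of utility-driven eviction in the spirit of LUV, assuming a simple multiplicative utility built from access probability, distance to the data source, remaining validity, and size; the exact weighting and field names used in the paper are not reproduced, so this formula is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class CachedItem:
    key: str
    size: int            # bytes occupied in the local cache
    access_prob: float   # estimated probability of future access
    distance: int        # hops to the data source (or nearest cache holding the item)
    ttl_remaining: float # seconds until the cached copy becomes stale

def utility(item: CachedItem) -> float:
    """Illustrative utility: favour popular, distant, still-valid, small items."""
    validity = min(1.0, item.ttl_remaining / 60.0)   # decays as the copy nears expiry
    return item.access_prob * item.distance * validity / item.size

def make_room(cache: dict, needed: int, capacity: int) -> None:
    """Evict the least-utility items until `needed` bytes fit (LUV-style replacement)."""
    used = sum(i.size for i in cache.values())
    for key in sorted(cache, key=lambda k: utility(cache[k])):
        if used + needed <= capacity:
            break
        used -= cache[key].size
        del cache[key]
```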

  7. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    Science.gov (United States)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also arise from some drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, namely cache lines residing in the cache longer than required. In image processing analysis of, for example, extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required. Therefore, a fast and reliable shared memory management system to execute algorithms for processing vast amounts of specimen images is needed. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near distance promotion, and the concept of ownership in the eviction policy to effectively mitigate cache thrashing and to avoid resource stealing among the processors.
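
    The sketch below illustrates the general insertion/promotion idea described above, assuming each cache set is kept as a recency stack in which new lines enter at a fixed middle position and hits promote a line by a fixed number of positions instead of jumping to the MRU slot; the insertion point, promotion distance, and associativity are placeholder values, not the tuned parameters of MI2PP.

```python
class InsertPromoteSet:
    """One cache set managed as a recency stack (index 0 = most recently used)."""

    def __init__(self, ways=8, insert_pos=4, promote_by=2):
        self.ways = ways
        self.insert_pos = insert_pos    # new lines enter mid-stack, not at the MRU position
        self.promote_by = promote_by    # a hit moves a line a few positions toward MRU
        self.stack = []                 # list of tags, at most `ways` long

    def access(self, tag):
        if tag in self.stack:                          # hit: near-distance promotion
            i = self.stack.index(tag)
            j = max(0, i - self.promote_by)
            self.stack.insert(j, self.stack.pop(i))
            return True
        if len(self.stack) >= self.ways:               # miss: evict from the LRU end
            self.stack.pop()
        self.stack.insert(min(self.insert_pos, len(self.stack)), tag)
        return False
```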

  8. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related...

  9. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    Science.gov (United States)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (≤ 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation by parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, potentially important for seismic hazard estimation.

  10. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  11. Smart caching based on mobile agent of power WebGIS platform.

    Science.gov (United States)

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and improvements in information technology. In order to meet this trend, power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of power WebGIS, and then we study its caching technology in detail, covering the dynamic display cache model, a caching structure based on mobile agents, and the cache data model. We designed experiments with different data capacities to compare the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, in the same hardware environment, the response time of WebGIS both with and without the caching model increased as the data capacity grew, and the larger the data, the greater the performance improvement of WebGIS with the proposed caching model.

  12. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate the multi-level cache resources to many cores and greatly improve the efficiency of the cache hierarchy, resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly once hardware overhead is taken into account.

  13. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites and provide fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of hard disks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  14. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in distributed environments has found its application in many fields of science (Nuclear and Particle Physics (NPP), astronomy, and biology, to name only those). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Though a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001–90% of the complete data-set and a low-watermark within 0–90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and debate and identify the choice of the best algorithm in the context of physics data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storages with partial replicas of the entire data-set
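
    A toy version of such a trace-driven simulation, assuming an LRU cache that is cleaned down to a low watermark whenever an incoming file does not fit, and reporting the fraction of requests and of requested bytes served from cache; the trace format and the watermark semantics are assumptions made for the sketch, not the behavior of RIFT.

```python
from collections import OrderedDict

def simulate(trace, capacity, low_watermark):
    """Replay (name, size) requests through an LRU cache with low-watermark cleanup."""
    cache, used = OrderedDict(), 0              # name -> size, LRU order (oldest first)
    hits = hit_bytes = total_bytes = 0
    for name, size in trace:
        total_bytes += size
        if name in cache:
            hits += 1
            hit_bytes += size
            cache.move_to_end(name)             # refresh LRU position
            continue
        if used + size > capacity:              # full: clean down to the low watermark
            while cache and used > low_watermark:
                _, old_size = cache.popitem(last=False)
                used -= old_size
        if used + size <= capacity:             # cache the file if it now fits
            cache[name] = size
            used += size
    return hits / len(trace), hit_bytes / total_bytes   # hit rate, hit bytes per requested byte

# e.g. simulate([("f1", 10), ("f2", 5), ("f1", 10)], capacity=12, low_watermark=6)
```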

  15. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of a cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA referred to as single length cycle 2-attractor cellular automata (TACA) has been planted to detect the inconsistencies in cache line states of processors’ private caches. The TACA module captures the coherence status of the CMPs’ cache system and memorizes any inconsistent recording of the cache line states during the processors’ reference to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then to settle to an attractor state indicating a quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs’ processor pool ensures better efficiency in determining the inconsistencies by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of the proposed coherence verification module is much less than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs’ cache system.

  16. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it req...

  17. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory...

  18. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic requests, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data so as to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult-to-retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects of WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.

  19. A detailed GPU cache model based on reuse distance theory

    NARCIS (Netherlands)

    Nugteren, C.; Braak, van den G.J.W.; Corporaal, H.; Bal, H.E.

    2014-01-01

    As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour.

  20. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment accesses become frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments when playback is interrupted. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.

  1. Novel dynamic caching for hierarchically distributed video-on-demand systems

    Science.gov (United States)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available programs and contents with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  2. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

    of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk to introduce complexity to the otherwise simple WCET analysis. In this work, we investigate three...

  3. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name look ups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component look ups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
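
    A small sketch of the kind of client-side name cache the trace study simulates, assuming whole directories are cached with an LRU bound of ten directories (as in the abstract) and that the hit rate is counted over pathname-component lookups; the data structures and the hypothetical list_directory callback are our own, not the Sprite implementation.

```python
from collections import OrderedDict

class DirectoryNameCache:
    """Client-side cache of whole directories for pathname-component lookups."""

    def __init__(self, list_directory, max_dirs=10):
        self.list_directory = list_directory     # callback: dir path -> {name: inode}
        self.max_dirs = max_dirs
        self.dirs = OrderedDict()                # dir path -> {name: inode}, LRU order
        self.lookups = self.hits = 0

    def lookup(self, dir_path, name):
        self.lookups += 1
        if dir_path in self.dirs:                # hit: directory already cached locally
            self.hits += 1
            self.dirs.move_to_end(dir_path)
        else:                                    # miss: fetch the directory, evict LRU if needed
            if len(self.dirs) >= self.max_dirs:
                self.dirs.popitem(last=False)
            self.dirs[dir_path] = self.list_directory(dir_path)
        return self.dirs[dir_path].get(name)

    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 0.0
```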

  4. A Novel Cache Invalidation Scheme for Mobile Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we propose a strategy for maintaining cache consistency in wireless mobile environments, which adds a validation server (VS) to the GPRS network, utilizes the location information of the mobile terminal held in the SGSN at the GPRS backbone, sends invalidation information only to mobile terminals that are online and hold the affected cached data, and reduces the amount of information in asynchronous transmission. This strategy enables a mobile terminal to access cached data with very little computation, little delay and arbitrary disconnection intervals, and outperforms the synchronous IR and asynchronous state (AS) schemes in overall performance.

  5. A distributed storage system with dCache

    Science.gov (United States)

    Behrmann, G.; Fuhrmann, P.; Grønager, M.; Kleist, J.

    2008-07-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  6. A distributed storage system with dCache

    International Nuclear Information System (INIS)

    Behrmann, G; Groenager, M; Fuhrmann, P; Kleist, J

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth

  7. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    So far, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  8. Energy Efficient Caching in Backhaul-Aware Cellular Networks with Dynamic Content Popularity

    Directory of Open Access Journals (Sweden)

    Jiequ Ji

    2018-01-01

    Full Text Available Caching popular contents at base stations (BSs) has been regarded as an effective approach to alleviate the backhaul load and to improve the quality of service. To meet the explosive data traffic demand and to save energy consumption, energy efficiency (EE) has become an extremely important performance index for 5th generation (5G) cellular networks. In general, there are two ways of improving the EE for caching, that is, improving the cache-hit rate and optimizing the cache size. In this work, we investigate the energy efficient caching problem in backhaul-aware cellular networks jointly considering these two approaches. Note that most existing works are based on the assumption that the content catalog and popularity are static. However, in practice, content popularity is dynamic. To timely estimate the dynamic content popularity, we propose a method based on the shot noise model (SNM). Then we propose a distributed caching policy to improve the cache-hit rate in such a dynamic environment. Furthermore, we analyze the tradeoff between energy efficiency and cache capacity, for which an optimization is formulated. We prove its convexity and derive a closed-form optimal cache capacity for maximizing the EE. Simulation results validate the proposed scheme and show that EE can be improved with an appropriate choice of cache capacity.
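
    As a hedged illustration of the energy-efficiency/cache-capacity tradeoff analysed above (not the paper's closed-form solution), the snippet below scans candidate cache sizes, models the hit rate with a Zipf popularity profile, charges a fixed energy cost per cached file, and keeps the capacity that maximizes bits delivered per joule; every constant is invented for the example.

```python
import numpy as np

def zipf(n, gamma=0.8):
    """Zipf popularity profile over a catalog of n files."""
    w = np.arange(1, n + 1, dtype=float) ** (-gamma)
    return w / w.sum()

def energy_efficiency(cache_size, pop, file_bits=8e6,
                      e_backhaul=2.0, e_local=0.5, e_cache_per_file=0.1):
    """Bits delivered per joule when the cache_size most popular files are cached (toy numbers)."""
    hit = pop[:cache_size].sum()
    energy = hit * e_local + (1 - hit) * e_backhaul + cache_size * e_cache_per_file
    return file_bits / energy

pop = zipf(1000)
best = max(range(0, 501, 10), key=lambda c: energy_efficiency(c, pop))
print("EE-optimal cache capacity (files):", best)
```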

  9. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability.

  10. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data result in the requirement of high scalability. Our current priority is to move towards a distributed and properly load balanced set of services based on containers. The aim of this project is to implement the generic caching mechanism applicable to our services and chosen architecture. The project will at first require research about the different aspects of distributed caching (persistence, no gc-caching, cache consistency etc.) and the available technologies followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  11. Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Feng Li

    2018-01-01

    Full Text Available Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edge nodes are designed to establish a one-hop caching information table used for cache replacement in cases where not enough cache resource is available within their own space. During this process, after receiving a caching request, every caching node determines the weight of the required contents and provides a response according to the availability of its own caching space. Furthermore, to increase the caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different cache allocation strategies are devised to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.

  12. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al. at S&P 2015), which is a cache timing attack against AES or similar algorithms in virtualized environments. This paper applies variants of this cache timing attack to Intel's latest generation of microprocessors. It enables a spy-process to recover cryptographic keys, interacting with the victim processes only over TCP. The threat model is a logically separated but CPU co-located attacker with root privileges. We report successful and practically verified applications of this attack against a wide range of microarchitectures, from a two-core Nehalem processor (i5-650) to two-core Haswell (i7-4600M) and four-core Skylake processors (i7-6700). The attack...

  13. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
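
    A short calculation in the spirit of that analysis, assuming a simple set-indexed cache model: it counts how many distinct sets a constant-stride access pattern touches, showing that strides which are large powers of two concentrate all accesses on a handful of sets; the article's efficiency formula is not reproduced here, and the line size and set count are assumptions.

```python
def sets_touched(stride_bytes, n_accesses, line_size=64, n_sets=512):
    """Number of distinct cache sets used by a constant-stride access pattern."""
    return len({(i * stride_bytes // line_size) % n_sets for i in range(n_accesses)})

for stride in (8, 1024, 4096, 32768):   # bytes between consecutive accesses
    used = sets_touched(stride, 4096)
    print(f"stride {stride:6d} B -> {used:4d} of 512 sets used")
# Strides that are large powers of two map every access onto a few sets,
# so the effective cache shrinks and efficiency drops sharply.
```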

  14. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since the on-chip caches usually consume a significant amount of power, power and energy consumption parameters have become some of the most important design constraints. The cache is one of the most attractive targets for power reduction. This paper presents an approach to enhance the dynamic power consumption of the CPU cache using an inverted cache architecture. Our assumption tries to reduce dynamic write power dissipation...

  15. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

    Full Text Available Caches play an important role in embedded systems to bridge the performance gap between fast processors and slow memory. Prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.

  16. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems to bridge the performance gap between fast processors and slow memory. Prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.

  17. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    Science.gov (United States)

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  18. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing

    Science.gov (United States)

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-01-01

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313

  19. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.

    Science.gov (United States)

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-03-22

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.

  20. Randomized Caches Considered Harmful in Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Jan Reineke

    2014-06-01

    Full Text Available We investigate the suitability of caches with randomized placement and replacement in the context of hard real-time systems. Such caches have been claimed to drastically reduce the amount of information required by static worst-case execution time (WCET analysis, and to be an enabler for measurement-based probabilistic timing analysis. We refute these claims and conclude that with prevailing static and measurement-based analysis techniques caches with deterministic placement and least-recently-used replacement are preferable over randomized ones.

  1. Learning Automata Based Caching for Efficient Data Access in Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Zhenjie Ma

    2018-01-01

    Full Text Available Effective data access is one of the major challenges in Delay Tolerant Networks (DTNs), which are characterized by intermittent network connectivity and unpredictable node mobility. Currently, different data caching schemes have been proposed to improve the performance of data access in DTNs. However, most existing data caching schemes perform poorly due to the lack of global network state information and the changing network topology in DTNs. In this paper, we propose a novel data caching scheme based on cooperative caching in DTNs, aiming at improving the success rate of data access and reducing the data access delay. In the proposed scheme, learning automata are utilized to select a set of caching nodes as the Caching Node Set (CNS) in DTNs. Unlike the existing caching schemes failing to address the challenging characteristics of DTNs, our scheme is designed to automatically self-adjust to the changing network topology through the well-designed voting and updating processes. The proposed scheme improves the overall performance of data access in DTNs compared with the former caching schemes. The simulations verify the feasibility of our scheme and the improvements in performance.

  2. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching"-relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii), we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  3. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca...

  4. High Performance Analytics with the R3-Cache

    Science.gov (United States)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R 3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R 3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R 3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  5. Adaptive Neuro-fuzzy Inference System as Cache Memory Replacement Policy

    Directory of Open Access Journals (Sweden)

    CHUNG, Y. M.

    2014-02-01

    Full Text Available To date, no cache memory replacement policy that can perform efficiently for all types of workloads is yet available. Replacement policies used in level 1 cache memory may not be suitable in level 2. In this study, we focused on developing an adaptive neuro-fuzzy inference system (ANFIS) as a replacement policy for improving level 2 cache performance in terms of miss ratio. The recency and frequency of referenced blocks were used as input data for ANFIS to make decisions on replacement. MATLAB was employed as a training tool to obtain the trained ANFIS model. The trained ANFIS model was implemented on SimpleScalar. Simulations on SimpleScalar showed that the miss ratio improved by as high as 99.95419% and 99.95419% for instruction level 2 cache, and up to 98.04699% and 98.03467% for data level 2 cache compared with least recently used and least frequently used, respectively.

  6. Probabilistic Caching Placement in the Presence of Multiple Eavesdroppers

    Directory of Open Access Journals (Sweden)

    Fang Shi

    2018-01-01

    Full Text Available The wireless caching has attracted a lot of attention in recent years, since it can reduce the backhaul cost significantly and improve the user-perceived experience. The existing works on the wireless caching and transmission mainly focus on the communication scenarios without eavesdroppers. When the eavesdroppers appear, it is of vital importance to investigate the physical-layer security for the wireless caching aided networks. In this paper, a caching network is studied in the presence of multiple eavesdroppers, which can overhear the secure information transmission. We model the locations of eavesdroppers by a homogeneous Poisson Point Process (PPP), and the eavesdroppers jointly receive and decode contents through the maximum ratio combining (MRC) reception which yields the worst case of wiretap. Moreover, the main performance metric is measured by the average probability of successful transmission, which is the probability of finding and successfully transmitting all the requested files within a radius R. We study the system secure transmission performance by deriving a single integral result, which is significantly affected by the probability of caching each file. Therefore, we extend to build the optimization problem of the probability of caching each file, in order to optimize the system secure transmission performance. This optimization problem is nonconvex, and we turn to use the genetic algorithm (GA) to solve the problem. Finally, simulation and numerical results are provided to validate the proposed studies.

  7. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform [1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to meet the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert the single-node service into a highly scalable clustered application.

  8. Método y sistema de modelado de memoria cache

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2010-01-01

    A method for modeling a data cache memory of a target processor, in order to simulate the behavior of said data cache memory during the execution of software code on a platform comprising said target processor, where said simulation is carried out on a native platform having a processor different from the target processor that comprises the data cache memory to be modeled, and where said modeling is performed by execution on said native platform...

  9. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present...... two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks....... In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs....

  10. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  11. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  12. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. The cache is responsible for a major part of the energy consumption (approx. 50%) of processors. This paper presents a high-level implementation of a micropipelined asynchronous architecture of an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instructions. The architecture is implemented in a Field Programmable Gate Array (FPGA) chip using the Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL) along with advanced synthesis and place-and-route tools.

  13. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of Internet clients continues to grow over time, so the response of Internet access becomes increasingly slow. To help speed up such access, a cache on the proxy server is required. This research aims to analyze the performance of a proxy server on the Internet with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the proxy server was designed using a simulation model of an Internet network consisting of a Web server, Proxy ...

  14. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical location, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of GIS on smartphones you can go directly into nature, search for information on your smartphone, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about highlights of physical geography, are a very good opportunity for this: interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, we focused on Bavaria and a certain type of earth cache. At the end the authors show limits and potentials for the use of earth caches and give some remarks for the future.

  15. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    Cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that will allow both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues would be more automated, hiding most or all of the performance-engineering expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that it makes performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult

  16. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture for a metadata management system based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for locally available metadata sets, in order to reduce the access time of remote queries. An application can find partial results in the local cache while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, MICC can not only enhance the fault tolerance and load balancing of the system effectively, but also improve the efficiency of access while ensuring access quality.

  17. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna D.

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  18. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and is thus the deciding factor for overall throughput. Second, data access may fall back to non-local sources, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.

  19. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorithms...

  20. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  1. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
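
    The paper targets numerical kernels on specific processors; as a loose, language-agnostic illustration of the locality idea behind such optimizations, the sketch below contrasts row-major, column-major, and blocked (tiled) traversals of a matrix. Array size, block size, and timings are arbitrary assumptions.

```python
# Hypothetical sketch illustrating memory-access locality: traversing a 2-D
# array in row-major order (matching its memory layout) versus column-major
# order, plus a simple blocked variant that works on cache-sized tiles.
import time
import numpy as np

N = 2000
A = np.random.rand(N, N)

def sum_row_major(a):
    s = 0.0
    for i in range(a.shape[0]):        # walk along contiguous rows: cache friendly
        s += a[i, :].sum()
    return s

def sum_col_major(a):
    s = 0.0
    for j in range(a.shape[1]):        # strided access across rows: cache hostile
        s += a[:, j].sum()
    return s

def sum_blocked(a, b=128):
    s = 0.0
    for i in range(0, a.shape[0], b):  # process b x b tiles that fit in cache
        for j in range(0, a.shape[1], b):
            s += a[i:i + b, j:j + b].sum()
    return s

if __name__ == "__main__":
    for fn in (sum_row_major, sum_col_major, sum_blocked):
        t0 = time.perf_counter()
        fn(A)
        print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```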

  2. Cache Aided Decode-and-Forward Relaying Networks: From the Spatial View

    Directory of Open Access Journals (Sweden)

    Junjuan Xia

    2018-01-01

    Full Text Available We investigate the cache technique from the spatial view and study its impact on relaying networks. In particular, we consider a dual-hop relaying network, where decode-and-forward (DF) relays can assist the data transmission from the source to the destination. In addition to the traditional dual-hop relaying, we also consider the cache from the spatial view, where the source can prestore the data among the memories of the nodes around the destination. For DF relaying networks without and with cache, we study the system performance by deriving analytical expressions for the outage probability and symbol error rate (SER). We also derive the asymptotic outage probability and SER in the high transmit power regime, from which we find that the system diversity order can be rapidly increased by using the cache and the system performance can be significantly improved. Simulation and numerical results are presented to verify the proposed studies and show that system power resources can be efficiently saved by using the cache technique.

  3. Wasatch and Uinta Mountains Ecoregion: Chapter 9 in Status and trends of land change in the Western United States--1973 to 2000

    Science.gov (United States)

    Brooks, Mark S.

    2012-01-01

    The Wasatch and Uinta Mountains Ecoregion covers approximately 44,176 km2 (17,057 mi2) (fig. 1) (Omernik, 1987; U.S. Environmental Protection Agency, 1997). With the exception of a small part of the ecoregion extending into southern Wyoming and southern Idaho, the vast majority of the ecoregion is located along the eastern mountain ranges of Utah. The ecoregion is situated between the Wyoming Basin and Colorado Plateaus Ecoregions to the east and south and the Central Basin and Range Ecoregion to the west; in addition, the Middle Rockies, Snake River Basin, and Northern Basin and Range Ecoregions are nearby to the north. Considered the western front of the Rocky Mountains, the two major mountain ranges that define the Wasatch and Uinta Mountains Ecoregion include the north-south-trending Wasatch Range and east-west-trending Uinta Mountains. Both mountain ranges have been altered by multiple mountain building and burial cycles since the Precambrian era 2.6 billion years ago, and they have been shaped by glacial processes as early as 1.6 million years ago. The terrain is defined by sharp ridgelines, glacial lakes, and narrow canyons, with elevations ranging from 1,829 m in the lower canyons to 4,123 m at Kings Peak, the highest point in Utah (Milligan, 2010).

  4. A Cache Considering Role-Based Access Control and Trust in Privilege Management Infrastructure

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shaomin; WANG Baoyi; ZHOU Lihua

    2006-01-01

    PMI (privilege management infrastructure) is used to perform access control to resources in an E-commerce or E-government system. With the ever-increasing need for secure transactions, the need for systems that offer a wide variety of QoS (quality-of-service) features is also growing. In order to improve the QoS of a PMI system, a cache based on RBAC (Role-Based Access Control) and trust is proposed. Our system is realized based on Web services. How to design the cache based on RBAC and trust in the access control model is described in detail, along with the algorithms to query role permissions in the cache and to add records to the cache. The policy to update the cache is also introduced.

  5. A Unified Buffering Management with Set Divisible Cache for PCM Main Memory

    Institute of Scientific and Technical Information of China (English)

    Mei-Ying Bian; Su-Kyung Yoon; Jeong-Geun Kim; Sangjae Nam; Shin-Dug Kim

    2016-01-01

    This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set divisible last-level cache (LLC). To achieve high performance similar to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) comprises dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), and a set divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using our proposed SABU, which can significantly reduce the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency by up to 0.43 times compared with conventional DRAM main memory. Meanwhile, the average memory energy consumption can be reduced by 19.7%.

  6. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  7. Analyzing data distribution on disk pools for dCache

    Energy Technology Data Exchange (ETDEWEB)

    Halstenberg, S; Jung, C; Ressmann, D [Forschungszentrum Karlsruhe, Steinbuch Centre for Computing, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2010-04-01

    Most Tier-1 centers of LHC Computing Grid are using dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools. Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources. For a dCache system, a heterogeneous set of disk pools complicates the process of weighting CPU and space costs for an efficient distribution of data. In order to evaluate the data distribution on the disk pools, the distribution is simulated in Java. The results are discussed and suggestions for improving the weight scheme are given.

  8. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
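
    For readers unfamiliar with the baseline being parallelized, the sketch below is a plain sequential trace-driven simulation of a single C-line, fully associative LRU set that counts misses; it does not implement the paper's O(log N) parallel EREW algorithm, and the trace is a made-up example.

```python
# Hypothetical sketch: sequential trace-driven simulation of one fully
# associative LRU cache set with a fixed number of lines, counting misses.
from collections import OrderedDict

def simulate_lru(trace, lines):
    cache = OrderedDict()              # keys ordered from least to most recently used
    misses = 0
    for x in trace:
        if x in cache:
            cache.move_to_end(x)       # hit: mark as most recently used
        else:
            misses += 1
            if len(cache) >= lines:
                cache.popitem(last=False)   # evict the least recently used line
            cache[x] = True
    return misses

if __name__ == "__main__":
    trace = [1, 2, 3, 1, 4, 1, 2, 5, 3, 1]
    print("misses:", simulate_lru(trace, lines=3))
```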

  9. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays (Aphelocoma californica) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  10. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System.

    Science.gov (United States)

    Xiong, Lian; Yang, Liu; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-14

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay.

  11. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa; Elsawy, Hesham; Sorour, Sameh; Al-Ghadhban, Samir; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2018-01-01

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates

  12. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on a Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
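
    A minimal sketch of the Markov-chain idea behind the cache pushing strategy is given below: given a user's current location and an estimated location-transition matrix, a proxy pushes the content associated with the most likely next locations. The transition probabilities and content map are invented examples, not data from the paper.

```python
# Hypothetical sketch: pick the content to push based on the most likely
# next locations under an assumed location-transition (Markov chain) model.

TRANSITIONS = {                      # P(next location | current location)
    "home":   {"office": 0.6, "cafe": 0.3, "gym": 0.1},
    "office": {"cafe": 0.5, "home": 0.4, "gym": 0.1},
    "cafe":   {"office": 0.5, "home": 0.5},
}
CONTENT = {"home": "news", "office": "mail", "cafe": "menu", "gym": "schedule"}

def contents_to_push(current, k=2):
    # Rank candidate next locations by transition probability and keep the top k.
    nxt = sorted(TRANSITIONS.get(current, {}).items(), key=lambda kv: kv[1], reverse=True)
    return [CONTENT[loc] for loc, _ in nxt[:k]]

if __name__ == "__main__":
    print(contents_to_push("home"))   # -> ['mail', 'menu']
```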

  13. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  14. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    Science.gov (United States)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance, and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  15. Proposal and development of a reconfigurable associativity algorithm in cache memories.

    OpenAIRE

    Roberto Borges Kerr Junior

    2008-01-01

    The constant evolution of processors is steadily increasing the overhead of memory accesses. To mitigate this problem, processor designers use several techniques, among them the use of cache memories in the memory hierarchy of computers. Cache memories, on the other hand, cannot fully meet these needs, so a technique that makes it possible to better exploit the cache memory is of interest. To solve this problem, authors pro...

  16. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    result in a WCET analysis‐friendly design. Aiming for a time‐predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field......Hard real‐time systems need a time‐predictable computing platform to enable static worst‐case execution time (WCET) analysis. All performance‐enhancing features need to be WCET analyzable. However, standard data caches containing heap‐allocated data are very hard to analyze statically....... In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily...

  17. Caching at the Mobile Edge: a Practical Implementation

    DEFF Research Database (Denmark)

    Poderys, Justas; Artuso, Matteo; Lensbøl, Claus Michael Oest

    2018-01-01

    Thanks to recent advances in mobile networks, it is becoming increasingly popular to access heterogeneous content from mobile terminals. There are, however, unique challenges in mobile networks that affect the perceived quality of experience (QoE) at the user end. One such challenge is the higher...... latency that users typically experience in mobile networks compared to wired ones. Cloud-based radio access networks with content caches at the base stations are seen as a key contributor in reducing the latency required to access content and thus improve the QoE at the mobile user terminal. In this paper...... for the mobile user obtained by caching content at the base stations. This is quantified with a comparison to non-cached content by means of ping tests (10–11% shorter times), a higher response rate for web traffic (1.73–3.6 times higher), and an improvement in the jitter (6% reduction)....

  18. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL]; Vetter, Jeffrey S [ORNL]

    2014-01-01

    To address the limitations of SRAM such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
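
    A minimal sketch of intra-set wear-leveling in the spirit of EqualChance follows: writes address logical blocks, and after a fixed number of writes the most write-intensive block is relocated to the least-worn physical way. The swap period, set layout, and class names are hypothetical, not the mechanism as implemented in the paper.

```python
# Hypothetical sketch: intra-set wear-leveling by periodically relocating the
# most write-intensive logical block to the least-worn physical way of a set.
SWAP_PERIOD = 8

class WearLeveledSet:
    def __init__(self, blocks):
        self.way_of = {b: w for w, b in enumerate(blocks)}   # block -> physical way
        self.way_writes = [0] * len(blocks)                  # wear per physical way
        self.block_writes = {b: 0 for b in blocks}           # writes per logical block
        self.since_swap = 0

    def write(self, block):
        way = self.way_of[block]
        self.way_writes[way] += 1
        self.block_writes[block] += 1
        self.since_swap += 1
        if self.since_swap >= SWAP_PERIOD:
            self._rebalance()
            self.since_swap = 0

    def _rebalance(self):
        hot_block = max(self.block_writes, key=self.block_writes.get)
        cold_way = min(range(len(self.way_writes)), key=self.way_writes.__getitem__)
        # Swap the hot block with whatever block currently occupies the cold way.
        other = next(b for b, w in self.way_of.items() if w == cold_way)
        self.way_of[hot_block], self.way_of[other] = cold_way, self.way_of[hot_block]

if __name__ == "__main__":
    s = WearLeveledSet(["a", "b", "c", "d"])
    for _ in range(32):
        s.write("a")                       # pathological workload: one hot block
    print(s.way_writes)                    # wear ends up spread across the ways
```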

  19. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  20. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liu Jiangchuan

    2010-01-01

    Full Text Available This paper studies distributed caching management for the current flourish of streaming applications in multihop wireless networks. Many caching management schemes to date use a randomized network coding approach, which provides an elegant solution for ubiquitous data access in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to change. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then re-encode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to that of conventional network coding with only a sublinear overhead. Our scheme offers a promising solution to caching management for streaming data.

  1. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and, to a lesser extent, to the Auger and IceCube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  2. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    Science.gov (United States)

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897

  3. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    Directory of Open Access Journals (Sweden)

    Zhaohui Luo

    2017-05-01

    Full Text Available Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm for providing caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications within the limited backhaul capacity of mobile networks. Most existing studies focus on cache allocation, mechanism design, and coding design for caching. However, a grid power supply providing fixed, uninterrupted power to a MEC server (MECS) is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the energy consumption problem of the MECS in cellular networks. Given average download latency constraints, we take the MECS's energy consumption, backhaul capacities, and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine a near-optimal caching placement and obtain better performance in terms of energy efficiency gains compared with conventional caching placement strategies. In particular, it is shown that the proposed scheme can significantly reduce the joint cost when backhaul capacity is low.

  4. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2010-01-01

    Full Text Available This paper studies distributed caching management for the current flourish of streaming applications in multihop wireless networks. Many caching management schemes to date use a randomized network coding approach, which provides an elegant solution for ubiquitous data access in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to change. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then re-encode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to that of conventional network coding with only a sublinear overhead. Our scheme offers a promising solution to caching management for streaming data.

  5. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    Full Text Available This paper presents the architecture of a web site with different techniques used to optimize the performance of loading the web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS, and Microsoft SQL Server. Caching the data is one technique used by browsers, by web servers themselves, or by proxy servers. Caching is done without the knowledge of users, yet it must still provide users with the most recent information from the server. This means that the caching mechanism has to be aware of any modification of data on the server. There are different kinds of information presented in an e-commerce site related to products, such as images, product codes, descriptions, properties, or stock levels.
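
    As a generic illustration of the server-side caching technique the article applies to product data (shown here in Python, with sqlite3 standing in for Microsoft SQL Server), the sketch below caches query results per product code and invalidates an entry whenever the underlying row is modified through the same layer. Table and column names are assumptions.

```python
# Hypothetical sketch: cache query results keyed by product code and invalidate
# the cached entry whenever the row is updated through this data-access layer.
import sqlite3

class CachedProducts:
    def __init__(self, conn):
        self.conn = conn
        self.cache = {}                       # product code -> cached row

    def get(self, code):
        if code in self.cache:                # serve from cache, no SQL round trip
            return self.cache[code]
        row = self.conn.execute(
            "SELECT code, description, stock FROM products WHERE code = ?", (code,)
        ).fetchone()
        self.cache[code] = row
        return row

    def update_stock(self, code, stock):
        self.conn.execute("UPDATE products SET stock = ? WHERE code = ?", (stock, code))
        self.conn.commit()
        self.cache.pop(code, None)            # invalidate so the next read is fresh

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (code TEXT PRIMARY KEY, description TEXT, stock INT)")
    conn.execute("INSERT INTO products VALUES ('A1', 'Widget', 5)")
    repo = CachedProducts(conn)
    print(repo.get("A1"))                     # miss: hits the database
    print(repo.get("A1"))                     # hit: served from the cache
    repo.update_stock("A1", 3)
    print(repo.get("A1"))                     # fresh value after invalidation
```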

  6. Using 87Sr/86Sr Ratios of Carbonate Minerals in Dust to Quantify Contributions from Desert Playas to the Urban Wasatch Front, Utah, USA

    Science.gov (United States)

    Goodman, M.; Carling, G. T.; Fernandez, D. P.; Rey, K.; Hale, C. A.; Nelson, S.; Hahnenberger, M.

    2017-12-01

    Desert playas are important dust sources globally, with potential harmful health impacts for nearby urban areas. The Wasatch Front (population >2 million) in western Utah, USA, is located directly downwind of several playas that contribute to poor air quality on dust event days. Additionally, the exposed lakebed of nearby Great Salt Lake is a growing dust source as water levels drop in response to drought and river diversions. To investigate contributions of playa dust to the Wasatch Front, we sampled dust emissions from the exposed lakebed of Great Salt Lake and seven playas in western Utah, including Sevier Dry Lake, and dust deposition at four locations stretching 160 km from south to north along the Wasatch Front, including Provo, Salt Lake City, Ogden, and Logan. The samples were analyzed for mineralogy, bulk chemistry, and 87Sr/86Sr ratios for source apportionment. The mineralogy of playa dust and Wasatch Front dust samples was dominated by quartz, feldspar, chlorite and calcite. Bulk geochemical composition was similar for all playa dust sources, with higher anthropogenic metal concentrations in the Wasatch Front. Strontium isotope (87Sr/86Sr) ratios in the carbonate fraction of the dust samples were variable in the playa dust sources, ranging from 0.7105 in Sevier Dry Lake to 0.7150 in Great Salt Lake, providing a powerful tool for apportioning dust. Based on 87Sr/86Sr mixing models, Great Salt Lake contributed 0% of the dust flux at Provo, 20% of the dust flux at Salt Lake City, and 40% of the dust flux at Ogden and Logan during Fall 2015. Contrastingly, Great Salt Lake dust was less important in Spring of 2016, contributing 0% of the dust flux at Provo and City and Logan. Two major dust events that occurred on 3 November 2015 and 23 April 2016 had similar wind and climate conditions as understood by HYSPLIT backward trajectories, meaning that seasonal variability in dust emissions is due to playa surface conditions rather than meteorologic conditions

  7. 77 FR 26733 - Uinta-Wasatch-Cache National Forest; Evanston-Mountain View Ranger District; Utah; Smiths Fork...

    Science.gov (United States)

    2012-05-07

    ... process or judicial review. Comments received in response to this solicitation, including names and... respondent with standing to participate in the objection process associated with this project under the HFRA or judicial review. FOR FURTHER INFORMATION CONTACT: Pete Gomben, Environmental Coordinator, at 801...

  8. Dynamic Video Streaming in Caching-enabled Wireless Mobile Networks

    OpenAIRE

    Liang, C.; Hu, S.

    2017-01-01

    Recent advances in software-defined mobile networks (SDMNs), in-network caching, and mobile edge computing (MEC) can have great effects on video services in next generation mobile networks. In this paper, we jointly consider SDMNs, in-network caching, and MEC to enhance the video service in next generation mobile networks. With the objective of maximizing the mean measurement of video quality, an optimization problem is formulated. Due to the coupling of video data rate, computing resource, a...

  9. On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu

    Science.gov (United States)

    Krishnappa, Dilip Kumar; Khemmarat, Samamon; Gao, Lixin; Zink, Michael

    Lately researchers are looking at ways to reduce the delay of video playback through mechanisms like prefetching and caching for Video-on-Demand (VoD) services. The usage of prefetching and caching also has the potential to reduce the amount of network bandwidth usage, as most popular requests are served from a local cache rather than the server containing the original content. In this paper, we investigate the advantages of having such a prefetching and caching scheme for a free hosting service of professionally created video (movies and TV shows) named "hulu". We look into the advantages of using a prefetching scheme where the most popular videos of the week, as provided by the hulu website, are prefetched, and compare this approach with a conventional LRU caching scheme with limited storage space and with a combined scheme of prefetching and caching. Results from our measurement and analysis show that employing a basic caching scheme at the proxy yields a hit ratio of up to 77.69%, but requires storage of about 236 GB. Further analysis shows that a prefetching scheme where the top 100 popular videos of the week are downloaded to the proxy yields a hit ratio of 44% with a storage requirement of 10 GB. An LRU caching scheme with a storage limitation of 20 GB can achieve a hit ratio of 55%, but downloads 4713 videos to achieve such a high hit ratio compared to 100 videos in the prefetching scheme, whereas a scheme with both prefetching and caching with the same storage yields a hit ratio of 59% with a download requirement of 4439 videos. We find that employing a scheme of prefetching along with caching, with a trade-off on storage, will yield a better hit ratio and bandwidth savings than individual caching or prefetching schemes.
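
    The numbers above come from real hulu traces; the sketch below only illustrates, on a synthetic Zipf-like request trace, how one might compare the two schemes analysed: prefetching a fixed top-N of the most popular videos versus an LRU cache with a capacity limit. The trace model and sizes are assumptions, not the paper's data set.

```python
# Hypothetical sketch: compare hit ratios of top-N prefetching versus LRU
# caching on a synthetic, Zipf-like video request trace.
import random
from collections import OrderedDict, Counter

def zipf_trace(n_videos=1000, n_requests=20000, s=1.0):
    weights = [1.0 / (i + 1) ** s for i in range(n_videos)]
    return random.choices(range(n_videos), weights=weights, k=n_requests)

def prefetch_hit_ratio(trace, top_n):
    popular = {v for v, _ in Counter(trace).most_common(top_n)}   # oracle popularity
    return sum(v in popular for v in trace) / len(trace)

def lru_hit_ratio(trace, capacity):
    cache, hits = OrderedDict(), 0
    for v in trace:
        if v in cache:
            hits += 1
            cache.move_to_end(v)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)
            cache[v] = True
    return hits / len(trace)

if __name__ == "__main__":
    trace = zipf_trace()
    print("prefetch top-100:", round(prefetch_hit_ratio(trace, 100), 3))
    print("LRU capacity 100:", round(lru_hit_ratio(trace, 100), 3))
```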

  10. Performance Evaluation of Moving Small-Cell Network with Proactive Cache

    Directory of Open Access Journals (Sweden)

    Young Min Kwon

    2016-01-01

    Full Text Available Due to rapid growth in mobile traffic, mobile network operators (MNOs) are considering the deployment of moving small-cells (mSCs). An mSC is a user-centric network which provides voice and data services during mobility. mSCs can receive and forward data traffic via wireless backhaul and sidehaul links. In addition, due to the predictive nature of user demand, mSCs can proactively cache the predicted contents in off-peak-traffic periods. Due to these characteristics, MNOs consider mSCs a cost-efficient solution to not only enhance the system capacity but also provide guaranteed quality of service (QoS) requirements to moving user equipment (UE) in peak-traffic periods. In this paper, we conduct extensive system-level simulations to analyze the performance of mSCs with varying cache size and content popularity and their effect on the wireless backhaul load. The performance evaluation confirms that the QoS of moving small-cell UE (mSUE) notably improves by using mSCs together with proactive caching. We also show that the effective use of a proactive cache significantly reduces the wireless backhaul load and increases the overall network capacity.

  11. I-Structure software cache for distributed applications

    Directory of Open Access Journals (Sweden)

    Alfredo Cristóbal Salas

    2004-01-01

    Full Text Available In this article, we describe the I-Structure software cache for distributed memory environments (D-ISSC), which takes advantage of data locality while maintaining the latency-tolerance capability of I-Structure memory systems. The programming facilities of MPI programs hide synchronization problems from the programmer. Our experimental evaluation using a benchmark suite indicates that PC clusters with I-Structure and its D-ISSC caching mechanism are more robust. The system can speed up both regular and irregular communication-intensive applications.

  12. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered with by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting the cloud and the small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of online and offline caching is finally compared using numerical results.

  13. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    Science.gov (United States)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have the potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours and serve them to the edge at peak periods. To prefetch intelligently, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allows for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
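
    A toy sketch of the tabular Q-learning idea behind such a caching policy is given below: the state is the currently requested file, the action is the single file cached for the next slot, and the reward is 1 when the cached file matches the next request. The demand process, sizes, and hyperparameters are assumptions and do not reproduce the paper's space-time popularity model or its linear function approximation.

```python
# Hypothetical sketch: tabular Q-learning for a one-slot cache decision under
# a toy request process in which the same file tends to be requested again.
import random

N_FILES = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = [[0.0] * N_FILES for _ in range(N_FILES)]   # Q[state][action]

def next_request(current):
    # Toy popularity dynamics: the same file is requested again 70% of the time.
    return current if random.random() < 0.7 else random.randrange(N_FILES)

def choose_action(state):
    if random.random() < EPSILON:                          # explore
        return random.randrange(N_FILES)
    return max(range(N_FILES), key=lambda a: Q[state][a])  # exploit

def train(steps=50000):
    state = random.randrange(N_FILES)
    for _ in range(steps):
        action = choose_action(state)           # file placed in the cache
        nxt = next_request(state)               # file actually requested next
        reward = 1.0 if action == nxt else 0.0
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

if __name__ == "__main__":
    train()
    policy = [max(range(N_FILES), key=lambda a: Q[s][a]) for s in range(N_FILES)]
    print("cache choice per state:", policy)    # tends to cache the current file
```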

  14. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range c...

  15. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

    Full Text Available Embedded systems have stringent design constraints, which has necessitated much prior research focus on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system’s temperature not only affects the system’s reliability, but can also affect the performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing the temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing the execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.

  16. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web...

  17. Cache aware mapping of streaming apllications on a multiprocessor system-on-chip

    NARCIS (Netherlands)

    Moonen, A.J.M.; Bekooij, M.J.G.; Berg, van den R.M.J.; Meerbergen, van J.; Sciuto, D.; Peng, Z.

    2008-01-01

    Efficient use of the memory hierarchy is critical for achieving high performance in a multiprocessor system- on-chip. An external memory that is shared between processors is a bottleneck in current and future systems. Cache misses and a large cache miss penalty contribute to a low processor

  18. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...

  19. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms-Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2-using two programming languages-C and Java-show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
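
    For readers unfamiliar with the underlying recurrence, the sketch below shows the classical O(n^3) Nussinov dynamic program filled diagonal by diagonal; it is the baseline whose memory-access order the ByRow, ByRowSegment and ByBox variants rearrange for cache efficiency. This is an illustrative baseline only, not the authors' cache-efficient code, and the pairing rule and example sequence are assumptions.

```python
def can_pair(x, y):
    """Allow Watson-Crick pairs plus the G-U wobble pair (a common convention)."""
    return {x, y} in ({'A', 'U'}, {'C', 'G'}, {'G', 'U'})

def nussinov(seq):
    """Classical Nussinov DP: N[i][j] = max number of base pairs in seq[i..j]."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # fill the matrix diagonal by diagonal
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])        # i or j left unpaired
            if can_pair(seq[i], seq[j]):
                best = max(best, N[i + 1][j - 1] + 1)   # pair (i, j)
            for k in range(i + 1, j):                   # bifurcation into two substructures
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum base pairs for a toy sequence
```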

  20. dCache: Big Data storage for HEP communities and beyond

    International Nuclear Information System (INIS)

    Millar, A P; Bernardt, C; Fuhrmann, P; Mkrtchyan, T; Petersen, A; Schwank, K; Behrmann, G; Litvintsev, D; Rossi, A

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  1. Evaluating the use of strontium isotopes in tree rings to record the isotopic signal of dust deposited on the Wasatch Mountains

    International Nuclear Information System (INIS)

    Miller, Olivia L.; Solomon, Douglas Kip; Fernandez, Diego P.; Cerling, Thure E.; Bowling, David R.

    2014-01-01

    Highlights: • Dust was a major contributor of Sr to soil and tree rings over Sr poor bedrocks. • Tree rings were evaluated for their use as a record of dust strontium isotope history. • The isotopic signal of dust deposited on the Wasatch Mountains changed over the past ∼75 years. - Abstract: Dust cycling from the Great Basin to the Rocky Mountains is an important component of ecological and hydrological processes. We investigated the use of strontium (Sr) concentrations and isotope ratios (⁸⁷Sr/⁸⁶Sr) in tree rings as a proxy for dust deposition. We report Sr concentrations and isotope ratios (⁸⁷Sr/⁸⁶Sr) from atmospherically deposited dust, soil, bedrock, and tree rings from the Wasatch Mountains to investigate provenance of dust landing on the Wasatch Mountains and to determine if a dust Sr record is preserved in tree rings. Trees obtained a majority of their Sr from dust, making them a useful record of dust source and deposition. Dust contributions of Sr to soils were more than 94% over quartzite, 63% over granodiorite, and 50% over limestone. Dust contributions of Sr to trees were more than 85% in trees growing over quartzite, 55% over granodiorite, and between 0% and 92% over limestone. These findings demonstrate that a dust signal was preserved in some tree rings and reflects how Sr from dust and bedrock mixes within the soil. Trees growing over quartzite were most sensitive to dust. Changes in Sr isotope ratios for a tree growing over quartzite were interpreted as changes in dust source over time. This work has laid the foundation for using tree rings as a proxy for dust deposition over time.

  2. Using shadow page cache to improve isolated drivers performance.

    Science.gov (United States)

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of the virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments being impacted by driver faults in virtual machine, Chariot examines the correctness of driver's write operations by the method of combining a driver's write operation capture and a driver's private access control table. However, this method needs to keep the write permission of shadow page table as read-only, so as to capture isolated driver's write operations through page faults, which adversely affect the performance of the driver. Based on delaying setting frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm using shadow page cache to improve the performance of isolated drivers and carefully study the relationship between the performance of drivers and the size of shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  3. Lack of caching of direct-seeded Douglas fir seeds by deer mice

    International Nuclear Information System (INIS)

    Sullivan, T.P.

    1978-01-01

    Seed caching by deer mice was investigated by radiotagging seeds in forest and clear-cut areas in coastal British Columbia. Deer mice tend to cache very few Douglas fir seeds in the fall when the seed is uniformly distributed and is at densities comparable with those used in direct-seeding programs. (author)

  4. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...

  5. Turbidity and Total Suspended Solids on the Lower Cache River Watershed, AR.

    Science.gov (United States)

    Rosado-Berrios, Carlos A; Bouldin, Jennifer L

    2016-06-01

    The Cache River Watershed (CRW) in Arkansas is part of one of the largest remaining bottomland hardwood forests in the US. Although wetlands are known to improve water quality, the Cache River is listed as impaired due to sedimentation and turbidity. This study measured turbidity and total suspended solids (TSS) in seven sites of the lower CRW; six sites were located on the Bayou DeView tributary of the Cache River. Turbidity and TSS levels ranged from 1.21 to 896 NTU, and 0.17 to 386.33 mg/L respectively and had an increasing trend over the 3-year study. However, a decreasing trend from upstream to downstream in the Bayou DeView tributary was noted. Sediment loading calculated from high precipitation events and mean TSS values indicate that contributions from the Cache River main channel was approximately 6.6 times greater than contributions from Bayou DeView. Land use surrounding this river channel affects water quality as wetlands provide a filter for sediments in the Bayou DeView channel.

  6. On the Performance of the Cache Coding Protocol

    Directory of Open Access Journals (Sweden)

    Behnaz Maboudi

    2018-03-01

    Full Text Available Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding in mesh networks, the cache coding protocol was proposed. This protocol only allows recoding at the relays when the relay has received enough coded packets to decode an entire generation of packets. At that point, the relay node recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making the relays accountable for changes on the data. This paper analyzes the delay performance of cache coding to understand the security-performance trade-off of this scheme. We introduce an analytical model for the case of two relays in an erasure channel relying on an absorbing Markov chain and an approximate model to estimate the performance in terms of the number of transmissions before successfully decoding at the receiver. We confirm our analysis using simulation results. We show that cache coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.
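
    The expected number of transmissions in such an absorbing Markov chain analysis can be computed from the fundamental matrix. The sketch below is a generic illustration with a made-up three-transient-state chain, not the paper's actual two-relay model or its erasure parameters.

```python
import numpy as np

# Hypothetical absorbing Markov chain: states 0..2 are transient (the receiver is
# still missing packets), state 3 is absorbing (the generation is decoded).
# Row-stochastic transition matrix with made-up erasure probabilities.
P = np.array([
    [0.3, 0.5, 0.0, 0.2],
    [0.0, 0.4, 0.4, 0.2],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:3, :3]                       # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
expected_steps = N.sum(axis=1)      # expected transmissions until absorption
print(expected_steps[0])            # starting from state 0
```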

  7. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array ..., and range queries in worst case O(logB n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n+O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache-oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory....
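
    The van Emde Boas layout mentioned above can be illustrated with a short recursion: the top half of a complete binary tree is laid out first, followed by each bottom subtree stored contiguously. The sketch below is a toy illustration (using one common convention for splitting the height), not the data structure from the paper.

```python
def veb_order(root, height):
    """BFS node indices (root=1, children 2v and 2v+1) of a complete binary tree
    of the given height, listed in van Emde Boas order: the top half of the tree
    first, then each bottom subtree stored contiguously."""
    if height == 1:
        return [root]
    top_h = height // 2
    bot_h = height - top_h
    order = veb_order(root, top_h)            # recursively lay out the top tree
    leaves = [root]
    for _ in range(top_h - 1):                # descend to the top tree's leaves
        leaves = [c for v in leaves for c in (2 * v, 2 * v + 1)]
    for leaf in leaves:                       # then lay out every bottom subtree
        for child in (2 * leaf, 2 * leaf + 1):
            order += veb_order(child, bot_h)
    return order

# A height-4 tree (15 nodes): nodes of each small subtree end up in nearby slots.
print(veb_order(1, 4))
```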

  8. Consistencia de ejecución: una propuesta no cache coherente

    OpenAIRE

    García, Rafael B.; Ardenghi, Jorge Raúl

    2005-01-01

    The presence of one or more levels of cache memory in modern processors, whose goal is to reduce the effective memory access time, becomes especially relevant in a DSM-type multiprocessor environment, given the much higher cost of memory references to remote modules. Clearly, the cache coherence protocol must respond to the memory consistency model adopted. The sequential model SC, generally accepted as the most natural, together with a series of ...

  9. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Full Text Available Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and the industrial practitioners. In fact, the very notion of introducing randomness and probabilities in time-critical systems has caused strenuous debates owing to the apparent clash that this idea has with the strictly deterministic view traditionally held for those systems. A paper recently appeared in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13.) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should be provided also. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances to the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, causing them to be - when used in conjunction with PTA - a serious competitor to conventional designs.

  10. Greatly improved cache update times for conditions data with Frontier/Squid

    International Nuclear Information System (INIS)

    Dykstra, Dave; Lueking, Lee

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
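
    The If-Modified-Since mechanism at the heart of this scheme is easy to sketch. Below is a minimal, hypothetical server-side handler that returns 304 Not Modified when the client's cached copy is still current; the path, timestamp and payload are illustrative, not Frontier's actual code or data.

```python
from email.utils import format_datetime, parsedate_to_datetime
from datetime import datetime, timezone

# Hypothetical server-side table of payloads and their modification times.
TABLE = {"/conditions/run42": (datetime(2009, 5, 1, tzinfo=timezone.utc), b"...payload...")}

def handle_request(path, if_modified_since=None):
    """Return (status, headers, body), honouring the If-Modified-Since header."""
    last_modified, body = TABLE[path]
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_time:
            return 304, {"Last-Modified": format_datetime(last_modified)}, b""
    return 200, {"Last-Modified": format_datetime(last_modified)}, body

status, headers, _ = handle_request("/conditions/run42")
# A cache such as Squid can later revalidate instead of re-downloading:
print(handle_request("/conditions/run42", headers["Last-Modified"])[0])   # 304
```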

  11. Greatly improved cache update times for conditions data with Frontier/Squid

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave; Lueking, Lee, E-mail: dwd@fnal.go [Computing Division, Fermilab, Batavia, IL (United States)

    2010-04-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.

  12. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    Full Text Available With the advantage of the reusability property of the virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users’ virtualization environments being impacted by driver faults in virtual machine, Chariot examines the correctness of driver’s write operations by the method of combining a driver’s write operation capture and a driver’s private access control table. However, this method needs to keep the write permission of shadow page table as read-only, so as to capture isolated driver’s write operations through page faults, which adversely affect the performance of the driver. Based on delaying setting frequently used shadow pages’ write permissions to read-only, this paper proposes an algorithm using shadow page cache to improve the performance of isolated drivers and carefully study the relationship between the performance of drivers and the size of shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot’s reliability too much.

  14. The development of caching and object permanence in Western scrub-jays (Aphelocoma californica): which emerges first?

    Science.gov (United States)

    Salwiczek, Lucie H; Emery, Nathan J; Schlinger, Barney; Clayton, Nicola S

    2009-08-01

    Recent studies on the food-caching behavior of corvids have revealed complex physical and social skills, yet little is known about the ontogeny of food caching in relation to the development of cognitive capacities. Piagetian object permanence is the understanding that objects continue to exist even when they are no longer visible. Here, the authors focus on Piagetian Stages 3 and 4, because they are hallmarks in the cognitive development of both young children and animals. Our aim is to determine in a food-caching corvid, the Western scrub-jay, whether (1) Piagetian Stage 4 competence and tentative caching (i.e., hiding an item invisibly and retrieving it without delay), emerge concomitantly or consecutively; (2) whether experiencing the reappearance of hidden objects enhances the timing of the appearance of object permanence; and (3) discuss how the development of object permanence is related to behavioral development and sensorimotor intelligence. Our findings suggest that object permanence Stage 4 emerges before tentative caching, and independent of environmental influences, but that once the birds have developed simple object-permanence, then social learning might advance the interval after which tentative caching commences. Copyright 2009 APA, all rights reserved.

  15. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, java script source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o
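
    As a point of reference for the replacement strategies surveyed in this book, the sketch below implements plain LRU eviction over a byte-capacity proxy cache; it is a generic baseline, not Squid's implementation, and the URLs and sizes are made up.

```python
from collections import OrderedDict

class LRUCache:
    """Byte-capacity LRU cache for web objects (a baseline replacement policy)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()          # url -> object body, oldest first

    def get(self, url):
        if url not in self.store:
            return None                     # miss: caller fetches from the origin
        self.store.move_to_end(url)         # mark as most recently used
        return self.store[url]

    def put(self, url, body):
        if url in self.store:
            self.used -= len(self.store.pop(url))
        while self.store and self.used + len(body) > self.capacity:
            _, evicted = self.store.popitem(last=False)   # evict the LRU object
            self.used -= len(evicted)
        if len(body) <= self.capacity:
            self.store[url] = body
            self.used += len(body)

cache = LRUCache(capacity_bytes=50)
cache.put("http://example.org/a.css", b"x" * 30)
cache.put("http://example.org/b.js", b"y" * 30)    # evicts a.css
print(cache.get("http://example.org/a.css"))        # None -> miss
```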

  16. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents it is customary to disable client caching causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents...

  17. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    dCache is a high-performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an rfc1831 and rfc2203 compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache’s NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache NFS v4.1 implementations, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  18. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change out as the mechanism for transferring tubes to and samples in tubes from the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER class rover.

  19. National uranium resource evaluation: Sheridan Quadrangle, Wyoming and Montana

    International Nuclear Information System (INIS)

    Damp, J.N.; Jennings, M.D.

    1982-04-01

    The Sheridan Quadrangle of north-central Wyoming was evaluated for uranium favorability according to specific criteria of the National Uranium Resource Evaluation program. Procedures consisted of geologic and radiometric surveys; rock, water, and sediment sampling; studying well logs; and reviewing the literature. Five favorable environments were identified. These include portions of Eocene Wasatch and Upper Cretaceous Lance sandstones of the Powder River Basin and Lower Cretaceous Pryor sandstones of the Bighorn Basin. Unfavorable environments include all Precambrian, Cambrian, Ordovician, Permian, Triassic, and Middle Jurassic rocks; the Cretaceous Thermopolis, Mowry, Cody, Meeteetse, and Bearpaw Formations; the Upper Jurassic Sundance and Morrison, the Cretaceous Frontier, Meseverde, Lance, and the Paleocene Fort Union and Eocene Willwood Formations of the Bighorn Basin; the Wasatch Formation of the Powder River Basin, excluding two favorable areas and all Oligocene and Miocene rocks. Remaining rocks are unevaluated

  20. National uranium resource evaluation: Sheridan Quadrangle, Wyoming and Montana

    Energy Technology Data Exchange (ETDEWEB)

    Damp, J N; Jennings, M D

    1982-04-01

    The Sheridan Quadrangle of north-central Wyoming was evaluated for uranium favorability according to specific criteria of the National Uranium Resource Evaluation program. Procedures consisted of geologic and radiometric surveys; rock, water, and sediment sampling; studying well logs; and reviewing the literature. Five favorable environments were identified. These include portions of Eocene Wasatch and Upper Cretaceous Lance sandstones of the Powder River Basin and Lower Cretaceous Pryor sandstones of the Bighorn Basin. Unfavorable environments include all Precambrian, Cambrian, Ordovician, Permian, Triassic, and Middle Jurassic rocks; the Cretaceous Thermopolis, Mowry, Cody, Meeteetse, and Bearpaw Formations; the Upper Jurassic Sundance and Morrison, the Cretaceous Frontier, Meseverde, Lance, and the Paleocene Fort Union and Eocene Willwood Formations of the Bighorn Basin; the Wasatch Formation of the Powder River Basin, excluding two favorable areas and all Oligocene and Miocene rocks. Remaining rocks are unevaluated.

  1. New distributive web-caching technique for VOD services

    Science.gov (United States)

    Kim, Iksoo; Woo, Yoseop; Hwang, Taejune; Choi, Jintak; Kim, Youngjune

    2002-12-01

    At present, some of the most popular services on the Internet are on-demand services, including VOD, EOD and NOD. The main problems for on-demand services are the excessive load on the server and insufficient network resources. Service providers therefore require powerful, expensive servers, and clients face long end-to-end delays and network congestion. This paper presents a new distributive web-caching technique for fluent VOD services using distributed proxies in a Head-end-Network (HNET). The HNET consists of a Switching-Agent (SA) as a control node, some Head-end Nodes (HEN) as proxies, and clients connected to the HENs; each HEN forms a LAN. Clients request VOD services from the server through a HEN and the SA. The SA is the heart of the HNET; all operations using the proposed distributive caching technique are performed under its control. This technique stores parts of a requested video on the corresponding HENs when clients connected to each HEN request an identical video. Clients then access those HENs (proxies) alternately to acquire the video streams, which eventually leads to equally loaded proxies (HENs). We adopt a cache replacement strategy that combines LRU and LFU, removes streams cached from other HENs before server streams, and replaces the first block of a video last to reduce end-to-end delay.

  2. Servidor proxy caché: comprensión y asimilación tecnológica

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Full Text Available Internet access providers usually include the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. For system administrators it is difficult to choose the configuration of a caching proxy server, since it is necessary to decide the values to be used for several variables. This article presents how the process of understanding and technological assimilation of the caching proxy service, a service with high organizational impact, was approached. The article is also a product of the research project "Análisis de configuraciones de servidores proxy caché", in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  3. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.

  4. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real-estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss on e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  5. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays...

  6. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that the security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  7. The Evaluation of Metals and Other Substances Released into Coal Mine Accrual Waters on the Wasatch Plateau Coal Field, Utah

    OpenAIRE

    Seierstad, Alberta J.; Adams, V. Dean; Lamarra, Vincent A.; Hoefs, Nancy J.; Hinchee, Robert E.

    1983-01-01

    Six sites on the Wasatch Plateau were chosen representing subsurface coal mines which were discharging or collecting accrual water on this coal field. Water samples were collected monthly at these sites for a period of 1 year (May 1981 to April 1982). Samples were taken before and after each mine's treatment system. Water samples were analyzed for major anions and cations, trace metals, physical properties, nutri...

  8. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.

  9. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first level cache.
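
    A toy software model can make the evict-on-write rule clearer: a speculative write is written through the first-level cache to the second-level cache, the first-level line is then invalidated, and the second-level cache resolves per-thread versions. The class below is a simplified illustration of the described behavior, not the patented hardware design; all structures and values are assumptions.

```python
class TwoLevelCache:
    """Toy model of 'evict on write': a speculative write is written through the
    L1 to the L2 and the line is then dropped from the L1, so later reads of the
    same address must go to the L2 (which tracks speculative versions)."""

    def __init__(self):
        self.l1 = {}          # address -> value
        self.l2 = {}          # address -> {thread_id: value} speculative versions

    def speculative_write(self, thread_id, addr, value):
        self.l2.setdefault(addr, {})[thread_id] = value   # write-through to L2
        self.l1.pop(addr, None)                           # evict on write

    def read(self, thread_id, addr):
        if addr in self.l1:
            return self.l1[addr]
        versions = self.l2.get(addr, {})
        return versions.get(thread_id)                    # L2 resolves the version

c = TwoLevelCache()
c.speculative_write(thread_id=7, addr=0x1000, value=42)
print(0x1000 in c.l1, c.read(7, 0x1000))   # False 42
```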

  10. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this possibility, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also generated. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance was found to be worse than that of the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server disk speed.

  11. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview of our analysis of the stream ciphers that were selected for phase 3...

  12. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    OpenAIRE

    Wang, Shuo; Zhang, Xing; Zhang, Yan; Wang, Lin; Yang, Juwo; Wang, Wenbo

    2017-01-01

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures which bring network functions and contents to the network edge are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at th...

  13. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic...

  14. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that the security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  15. Architectural Development and Performance Analysis of a Primary Data Cache with Read Miss Address Prediction Capability

    National Research Council Canada - National Science Library

    Christensen, Kathryn

    1998-01-01

    .... The Predictive Read Cache (PRC) further improves the overall memory hierarchy performance by tracking the data read miss patterns of memory accesses, developing a prediction for the next access and prefetching the data into the faster cache memory...
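
    One common way such a read-miss predictor can form its next-address guess is per-instruction stride detection. The sketch below is a generic stride predictor, offered only as an illustration of the idea; the table layout and confidence rule are assumptions, not the PRC design from the thesis.

```python
class StridePredictor:
    """Per-PC stride predictor: after two misses with the same stride,
    predict (and prefetch) last_address + stride."""

    def __init__(self):
        self.table = {}   # pc -> (last_addr, last_stride, confident)

    def observe_miss(self, pc, addr):
        last_addr, last_stride, _ = self.table.get(pc, (None, None, False))
        if last_addr is None:
            self.table[pc] = (addr, None, False)
            return None
        stride = addr - last_addr
        confident = stride == last_stride
        self.table[pc] = (addr, stride, confident)
        return addr + stride if confident else None       # address to prefetch

p = StridePredictor()
for a in (0x100, 0x140, 0x180):          # read misses from one load instruction
    prediction = p.observe_miss(pc=0x400, addr=a)
print(hex(prediction))                    # 0x1c0: prefetched into the faster cache
```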

  16. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots according to the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is implemented, which avoids a time-consuming linearization process, to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages which would cause severe cache conflicts within a time slot to the SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
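
    The paper's INP formulation is not reproduced here, but the underlying selection step can be illustrated with a much simpler greedy stand-in: within a time slot, remap to the SPM the data pages with the largest predicted conflict-miss energy that fit into the SPM. The function and the per-page numbers below are assumptions for illustration only.

```python
def select_pages_for_spm(conflict_energy, page_size, spm_capacity):
    """Greedy stand-in for the paper's INP step: choose the data pages with the
    highest predicted conflict-miss energy, as many as fit into the SPM."""
    budget = spm_capacity // page_size
    ranked = sorted(conflict_energy, key=conflict_energy.get, reverse=True)
    return ranked[:budget]

# Hypothetical per-page conflict-miss energy (nJ) for one time slot.
slot_profile = {"page_A": 920.0, "page_B": 310.5, "page_C": 1540.2, "page_D": 75.0}
print(select_pages_for_spm(slot_profile, page_size=4096, spm_capacity=8192))
# ['page_C', 'page_A'] -> remapped to the SPM via the MMU for this slot
```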

  17. Hybrid caches: design and data management

    OpenAIRE

    Valero Bresó, Alejandro

    2013-01-01

    Cache memories have been usually implemented with Static Random-Access Memory (SRAM) technology since it is the fastest electronic memory technology. However, this technology consumes a high amount of leakage currents, which is a major design concern because leakage energy consumption increases as the transistor size shrinks. Alternative technologies are being considered to reduce this consumption. Among them, embedded Dynamic RAM (eDRAM) technology provides minimal area and le...

  18. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, the throughput of User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize the throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop; it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node Basestation (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of the relays. Each relay allocates RBs for relay UEs based on the size of the relay UE's Transport Block. We also design a relay UE ACK feedback mechanism to update the data in the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.

  19. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  20. Cache-Oblivious Planar Orthogonal Range Searching and Counting

    DEFF Research Database (Denmark)

    Arge, Lars; Brodal, Gerth Stølting; Fagerberg, Rolf

    2005-01-01

    present the first cache-oblivious data structure for planar orthogonal range counting, and improve on previous results for cache-oblivious planar orthogonal range searching. Our range counting structure uses O(N log2 N) space and answers queries using O(logB N) memory transfers, where B is the block size of any memory level in a multilevel memory hierarchy. Using bit manipulation techniques, the space can be further reduced to O(N). The structure can also be modified to support more general semigroup range sum queries in O(logB N) memory transfers, using O(N log2 N) space for three-sided queries and O(N log2^2 N / log2 log2 N) space for four-sided queries. Based on the O(N log N) space range counting structure, we develop a data structure that uses O(N log2 N) space and answers three-sided range queries in O(logB N + T/B) memory transfers, where T is the number of reported points. Based...

  1. An ESL Approach for Energy Consumption Analysis of Cache Memories in SoC Platforms

    Directory of Open Access Journals (Sweden)

    Abel G. Silva-Filho

    2011-01-01

    Full Text Available The design of complex circuits such as SoCs presents two great challenges to designers. One is speeding up the modeling of system functionality, and the second is implementing the system in an architecture that meets performance and power consumption requirements. Thus, developing new high-level specification mechanisms that reduce the design effort through automatic architecture exploration is a necessity. This paper proposes an Electronic-System-Level (ESL) approach for system modeling and cache energy consumption analysis of SoCs called PCacheEnergyAnalyzer. It takes as input a high-level UML-2.0 profile model of the system and generates a simulation model of a multicore platform that can be analyzed for cache tuning. PCacheEnergyAnalyzer performs static/dynamic energy consumption analysis of caches on platforms that may have different processors. Architecture exploration is achieved by letting designers choose different processors for platform generation and different mechanisms for cache optimization. PCacheEnergyAnalyzer has been validated with several applications of the Mibench, Mediabench, and PowerStone benchmarks, and results show that it provides analysis with reduced simulation effort.

  2. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

    Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis allowing for more robust system characterization, including indices such as the 100 year flood return period. These results are significant for water quality management as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport affecting the downstream water bodies of the Sacramento River and Sacramento- San Joaquin Delta which provide drinking water to over 25 million people.
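
    A standard way to obtain indices such as the 100-year flood from simulated annual maxima is to fit an extreme-value distribution. The sketch below fits a Gumbel (EV1) distribution by the method of moments; it is a generic illustration, and the peak-flow values are synthetic, not the study's ensemble output.

```python
import math
import statistics

def gumbel_quantile(annual_maxima, return_period):
    """Fit a Gumbel (EV1) distribution by the method of moments and return the
    flow with the given return period (e.g. the 100-year flood)."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6) / math.pi                 # scale parameter
    mu = mean - 0.5772 * beta                           # location (Euler-Mascheroni constant)
    p_non_exceed = 1 - 1 / return_period
    return mu - beta * math.log(-math.log(p_non_exceed))

# Synthetic annual peak flows (m^3/s) standing in for one simulated ensemble member.
peaks = [220, 310, 180, 450, 390, 275, 510, 330, 240, 610, 295, 370]
print(round(gumbel_quantile(peaks, return_period=100), 1))
```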

  3. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Full Text Available Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with 1. the number of cache sites containing prey rewards and 2. the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities that were hidden from view in 2, 3 and 4 cache sites after retention intervals of 1, 10 and 60 seconds. Overall, results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items for retention intervals of up to one minute without training.

  4. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache. Practical examples will help you to get set up quickly and easily.This book is aimed at system administrators and web developers who need to scale websites without tossing money on a large and costly infrastructure. It's assumed that you have some knowledge of the HTTP protocol, how browsers and server communicate with each other, and basic Linux systems.

  5. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

    Tannins are very common among plant seeds but their effects on the fate of seeds, for example, via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that only differed in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species ( Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, the tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable tannin vs condensed tannin) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentrations had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far they will transport seeds.

  6. CACHE: an extended BASIC program which computes the performance of shell and tube heat exchangers

    International Nuclear Information System (INIS)

    Tallackson, J.R.

    1976-03-01

    An extended BASIC program, CACHE, has been written to calculate steady state heat exchange rates in the core auxiliary heat exchangers, (CAHE), designed to remove afterheat from High-Temperature Gas-Cooled Reactors (HTGR). Computationally, these are unbaffled counterflow shell and tube heat exchangers. The computational method is straightforward. The exchanger is subdivided into a user-selected number of lengthwise segments; heat exchange in each segment is calculated in sequence and summed. The program takes the temperature dependencies of all thermal conductivities, viscosities and heat capacities into account providing these are expressed algebraically. CACHE is easily adapted to compute steady state heat exchange rates in any unbaffled counterflow exchanger. As now used, CACHE calculates heat removal by liquid weight from high-temperature helium and helium mixed with nitrogen, oxygen and carbon monoxide. A second program, FULTN, is described. FULTN computes the geometrical parameters required as input to CACHE. As reported herein, FULTN computes the internal dimensions of the Fulton Station CAHE. The two programs are chained to operate as one. Complete user information is supplied. The basic equations, variable lists, annotated program lists, and sample outputs with explanatory notes are included
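
    The segment-by-segment marching scheme described above can be sketched compactly for a counterflow exchanger with constant properties: the exchanger is split into segments, heat exchange is computed in each and summed, and the unknown cold-side outlet temperature is found by bisection. The numbers below are illustrative only and are not CACHE's input data or its extended BASIC source.

```python
def counterflow_duty(t_hot_in, t_cold_in, m_cp_hot, m_cp_cold, ua, segments=50):
    """March a counterflow exchanger segment by segment (hot stream runs x=0->L,
    cold stream L->0). The unknown cold outlet at x=0 is found by bisection so
    that the marched cold temperature at x=L matches the known cold inlet."""
    ua_seg = ua / segments

    def cold_in_residual(t_cold_out_guess):
        t_h, t_c = t_hot_in, t_cold_out_guess
        for _ in range(segments):
            q = ua_seg * (t_h - t_c)          # heat exchanged in this segment
            t_h -= q / m_cp_hot
            t_c -= q / m_cp_cold
        return t_c - t_cold_in                # zero when the guess is consistent

    lo, hi = t_cold_in, t_hot_in              # cold outlet lies between the inlets
    for _ in range(60):                       # bisection on the cold-outlet guess
        mid = (lo + hi) / 2
        if cold_in_residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    t_cold_out = (lo + hi) / 2
    return m_cp_cold * (t_cold_out - t_cold_in)   # total duty, W

# Illustrative numbers only: a hot gas stream cooled by a liquid coolant.
print(round(counterflow_duty(750.0, 40.0, m_cp_hot=5200.0, m_cp_cold=20900.0, ua=9.0e4), 0))
```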

  7. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of a cache mechanism for complicated design optimization. While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend in such progress indicates that more complicated applications become the next target of design optimization beyond the growth of computational resources. As the progress of the past two decades required response surface techniques, decomposition techniques, etc., a new framework must be introduced for the future of design optimization methods. This paper proposes a possibility of what we call a cache mechanism for mediating the coming challenge and briefly demonstrates some promise in the idea of Voronoi-diagram-based cumulative approximation as an example of its implementation, along with the development of strict robust design and the extension of design optimization for product variety.
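
    One simple way to realize the cache idea sketched in this abstract is to memoize expensive objective-function evaluations so that an optimizer never pays twice for (nearly) the same design point. The Python sketch below illustrates only this general idea under assumed names; it is not the Voronoi-diagram-based cumulative approximation proposed in the paper.

      # Illustrative memoization cache for expensive design evaluations.
      # Nearby points (within `tol` per coordinate) reuse a stored result.

      import math

      class EvaluationCache:
          def __init__(self, tol=1e-6):
              self.tol = tol
              self._store = {}           # quantized design vector -> objective value
              self.hits = 0
              self.misses = 0

          def _key(self, x):
              # Quantize each coordinate so that near-identical designs collide.
              return tuple(round(xi / self.tol) for xi in x)

          def evaluate(self, x, objective):
              key = self._key(x)
              if key in self._store:
                  self.hits += 1
                  return self._store[key]
              self.misses += 1
              value = objective(x)       # the expensive simulation / analysis call
              self._store[key] = value
              return value

      def expensive_objective(x):
          # Stand-in for a costly simulation: a simple analytic test function.
          return sum((xi - 1.0) ** 2 for xi in x) + math.sin(5.0 * x[0])

      if __name__ == "__main__":
          cache = EvaluationCache(tol=1e-4)
          # A toy "optimizer" that revisits points, as population-based methods often do.
          candidates = [(0.5, 0.5), (0.50000001, 0.5), (1.0, 1.0), (0.5, 0.5)]
          for x in candidates:
              print(x, "->", round(cache.evaluate(x, expensive_objective), 6))
          print("hits:", cache.hits, "misses:", cache.misses)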

  8. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

    In this study, we developed hybrid control algorithms in smart base stations (SBSs), along with devised communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload computation from mobile user equipment and to cache data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy the different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  9. Implementació d'una Cache per a un processador MIPS d'una FPGA

    OpenAIRE

    Riera Villanueva, Marc

    2013-01-01

    First, the MIPS architecture, the memory hierarchy and the functioning of the cache are explained briefly. Then, the design and implementation of a memory hierarchy for a MIPS processor implemented in VHDL on an FPGA are described.

  10. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    In a real-time system, the use of a scratchpad memory can mitigate the difficulties related to analyzing data caches, whose behavior is inherently hard to predict. We propose to use a scratchpad memory for stack allocated data. While statically allocating stack frames for individual functions...

  11. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    Science.gov (United States)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we will first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We will then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster; while caching is not a native feature of CephFS (it only exists in the Ceph Object store), we will show how one can implement a caching mechanism profiting from an implementation at a lower level. As our illustration, we will present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms for applicable uses such as distributed storage systems seeking an overall IO performance gain.

  12. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this contribution presents the results of testing of storage T2Cs, where the “thin” nature is expressed by the site having either no local data storage, or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the result of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.

  13. Regional National Cooperative Observer

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA publication dedicated to issues, news and recognition of observers in the National Weather Service Cooperative Observer program. Issues published regionally...

  14. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  15. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo...
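
    As a toy illustration of reusing shortest-path results (not of the specific techniques studied in this paper), the following sketch caches computed paths and also answers new queries from sub-paths of cached paths, exploiting the fact that sub-paths of shortest paths are themselves shortest. All names are illustrative.

      # Toy shortest-path cache: answers repeated queries from stored paths,
      # including queries whose endpoints both lie on an already-cached path.

      import heapq

      def dijkstra_path(graph, s, t):
          """Plain Dijkstra returning the node sequence from s to t (graph: node -> {nbr: w})."""
          dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
          while pq:
              d, u = heapq.heappop(pq)
              if u == t:
                  break
              if d > dist.get(u, float("inf")):
                  continue
              for v, w in graph[u].items():
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(pq, (nd, v))
          path, node = [], t
          while node != s:
              path.append(node)
              node = prev[node]
          path.append(s)
          return path[::-1]

      class PathCache:
          def __init__(self, graph):
              self.graph, self.paths = graph, []

          def query(self, s, t):
              # Sub-paths of shortest paths are shortest paths, so scan cached paths first.
              for p in self.paths:
                  if s in p and t in p and p.index(s) <= p.index(t):
                      return p[p.index(s):p.index(t) + 1]        # cache hit
              p = dijkstra_path(self.graph, s, t)                # cache miss: compute and store
              self.paths.append(p)
              return p

      if __name__ == "__main__":
          g = {"a": {"b": 1}, "b": {"a": 1, "c": 2}, "c": {"b": 2, "d": 1}, "d": {"c": 1}}
          cache = PathCache(g)
          print(cache.query("a", "d"))   # computed: ['a', 'b', 'c', 'd']
          print(cache.query("b", "d"))   # served from the cached a-d path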

  16. Study on data acquisition system based on reconfigurable cache technology

    Science.gov (United States)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems; it represents the waveform processing capability of the system per unit time. The higher the waveform capture rate, the larger the chance of capturing elusive events and the more reliable the test result. First, this paper analyzes the impact of several factors on the waveform capture rate of the system; then a novel technology based on a reconfigurable cache is proposed to optimize the system architecture. The simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in system cost, and that the proposed technology has broad application prospects.

  17. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data-structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for an explicit time integration (i.e., with no linear system to solve at each step). A data structure of type 'structure of arrays' is conserved for the global data storage, providing flexibility and efficiency for current operations on kinematics fields (displacement, velocity and acceleration). On the contrary, in the particular case of elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but specifically handling, for this work, the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache misses, confronting the gains obtained within the elementary operations with the potential overhead generated by the data-structure switch. Obtained results are very satisfactory, especially
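
    The gather/process/scatter pattern described above can be sketched schematically. The NumPy snippet below only illustrates the data movement between a global 'structure of arrays' and a temporary, contiguous per-group block with halo data; Python itself cannot demonstrate the cache effect, and none of the names come from EUROPLEXUS.

      # Schematic gather/process/scatter sketch of the SoA <-> temporary AoS idea.
      # Purely illustrative; the kernel and group sizes are arbitrary assumptions.

      import numpy as np

      n_cells = 16
      # Global storage as a "structure of arrays": one array per field.
      displacement = np.zeros(n_cells)
      velocity = np.random.rand(n_cells)
      acceleration = np.random.rand(n_cells)

      def process_group(cells, halo):
          """Work on one cache-sized group of cells plus its neighbouring (halo) cells."""
          ids = np.concatenate([cells, halo])
          # Gather: build a temporary, contiguous "array of structures"-style block.
          block = np.stack([displacement[ids], velocity[ids], acceleration[ids]], axis=1)
          # Element-wise kernel operating on the packed block (stand-in for force/flux work).
          dt = 1e-3
          block[:, 1] += dt * block[:, 2]          # velocity update
          block[:, 0] += dt * block[:, 1]          # displacement update
          # Scatter: write results back for the owned cells only (halo cells belong to neighbours).
          displacement[cells] = block[:len(cells), 0]
          velocity[cells] = block[:len(cells), 1]

      groups = [np.arange(0, 8), np.arange(8, 16)]
      for cells in groups:
          halo = np.array([cells[-1] + 1]) if cells[-1] + 1 < n_cells else np.array([], dtype=int)
          process_group(cells, halo)
      print(displacement[:4])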

  18. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Due to the limited size and excessive cost of cache compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. On the other hand, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), Randomized Policy, etc., may discard important pages just before use. Furthermore, a conventional algorithm alone cannot be well optimized, since some decision is required to intelligently evict a page before replacement. Hence, most researchers propose integrating intelligent classifiers with replacement algorithms to improve replacement performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifiers' prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded with NB, and Particle Swarm Optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over using NB with all features, using BFS, and using IWSS-embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over using the J48 classifier with all features, using IWSS-embedded NB, and using BFS, respectively. Meanwhile, using IWSS-embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO; it reduces the computation time of NB by 0.1383 and of J48 by 2.998.
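
    A minimal sketch of the general idea of combining a learned re-access prediction with cache replacement is shown below. The "classifier" is a stand-in heuristic, not the NB/J48 models or the wrapper feature selection methods evaluated in the paper, and all weights are assumptions.

      # Minimal sketch: an eviction policy that combines recency with a predicted
      # probability of re-access. The "classifier" here is a stand-in heuristic.

      from collections import OrderedDict

      def predict_reaccess(features):
          # Stand-in for a trained classifier: frequent, recently-popular, small
          # objects are assumed more likely to be requested again.
          freq, recency, size_kb = features
          return (0.5 * min(freq / 10.0, 1.0)
                  + 0.4 * recency
                  + 0.1 * (1.0 - min(size_kb / 1024.0, 1.0)))

      class ProxyCache:
          def __init__(self, capacity=3):
              self.capacity = capacity
              self.entries = OrderedDict()       # url -> (features, payload), oldest first

          def get(self, url):
              if url in self.entries:
                  self.entries.move_to_end(url)  # refresh recency on a hit
                  return self.entries[url][1]
              return None

          def put(self, url, features, payload):
              if url in self.entries:
                  self.entries.move_to_end(url)
              elif len(self.entries) >= self.capacity:
                  # Evict the entry with the lowest predicted re-access probability.
                  victim = min(self.entries, key=lambda u: predict_reaccess(self.entries[u][0]))
                  del self.entries[victim]
              self.entries[url] = (features, payload)

      if __name__ == "__main__":
          cache = ProxyCache(capacity=2)
          cache.put("/popular.css", (50, 0.9, 20), "...")
          cache.put("/big-video.mp4", (1, 0.2, 900), "...")
          cache.put("/index.html", (30, 0.8, 15), "...")   # forces eviction of the video
          print(list(cache.entries))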

  19. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items, and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food-storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  20. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  1. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Science.gov (United States)

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications, where multiple-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the possibility that the sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data that may not be of interest to forthcoming queries are cached in the head nodes of divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered through composing these two-tier cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
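
    The popularity-driven, two-tier placement described in this abstract can be illustrated with a small sketch. The decay weighting, capacity and attribute names below are assumptions for illustration, not values or formulas from the article.

      # Illustrative two-tier placement by popularity: attributes queried most often
      # in recent time slots are cached at the sink, the rest at grid-cell head nodes.

      from collections import Counter

      def popularity(recent_queries, decay=0.7):
          """Weight queries from recent time slots more heavily (most recent slot last)."""
          scores = Counter()
          for age, slot in enumerate(reversed(recent_queries)):   # age 0 = most recent
              for attribute in slot:
                  scores[attribute] += decay ** age
          return scores

      def place(scores, sink_capacity=2):
          ranked = [a for a, _ in scores.most_common()]
          return {"sink": ranked[:sink_capacity], "cell_heads": ranked[sink_capacity:]}

      if __name__ == "__main__":
          slots = [
              ["temperature", "humidity"],                 # oldest slot
              ["temperature", "light"],
              ["temperature", "humidity", "vibration"],    # most recent slot
          ]
          print(place(popularity(slots), sink_capacity=2))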

  2. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\frac{N}{B}\log_{M/B}\frac{N}{B}+T/B)$ memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.

  3. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning

  4. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  5. MonetDB/X100 - A DBMS in the CPU cache

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); N.J. Nes (Niels); S. Héman (Sándor)

    2005-01-01

    X100 is a new execution engine for the MonetDB system, that improves execution speed and overcomes its main memory limitation. It introduces the concept of in-cache vectorized processing that strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and

  6. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  7. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and thereby a decrease in performance. To enhance performance, caching mechanisms are often adopted. However, the implementations of these

  8. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  9. USGS Topo Base Map from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS Topographic Base Map from The National Map. This tile cached web map service combines the most current data services (Boundaries, Names, Transportation,...

  10. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.

  11. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  12. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  13. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  14. An Economic Model for Self-tuned Cloud Caching

    OpenAIRE

    Dash, Debabrata; Kantere, Verena; Ailamaki, Anastasia

    2009-01-01

    Cloud computing, the new trend for service infrastructures requires user multi-tenancy as well as minimal capital expenditure. In a cloud that services large amounts of data that are massively collected and queried, such as scientific data, users typically pay for query services. The cloud supports caching of data in order to provide quality query services. User payments cover query execution costs and maintenance of cloud infrastructure, and incur cloud profit. The challenge resides in provi...

  15. Cache Performance Optimization for SoC Video Applications

    OpenAIRE

    Lei Li; Wei Zhang; HuiYao An; Xing Zhang; HuaiQi Zhu

    2014-01-01

    Chip Multiprocessors (CMPs) are adopted by industry to deal with the speed limit of the single processor. But memory access has become the bottleneck of performance, especially in multimedia applications. In this paper, a set of management policies is proposed to improve the cache performance for an SoC platform for video applications. By analyzing the behavior of the Video Engine, memory-friendly writeback and efficient prefetch policies are adopted. The experiment platform is simulated by ...

  16. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  17. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, a one-segment digital terrestrial broadcasting service was launched in Japan in 2006, and high-performance digital broadcasting for mobile hosts has recently become available. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (an available area of location-dependent data) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
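
    A toy version of the prefetch decision described here (keep broadcast items whose scope lies ahead along the host's planned direction of travel) is sketched below; the geometry test and all names are illustrative, not the paper's actual algorithm.

      # Toy prefetch filter for a broadcast data cache: keep items whose "scope"
      # (valid area) lies ahead along the vehicle's planned path. Illustrative only.

      import math

      def will_enter_scope(position, heading_deg, scope_center, scope_radius, lookahead):
          """True if the straight-ahead path within `lookahead` passes through the scope circle."""
          hx, hy = math.cos(math.radians(heading_deg)), math.sin(math.radians(heading_deg))
          dx, dy = scope_center[0] - position[0], scope_center[1] - position[1]
          along = dx * hx + dy * hy                       # distance along the heading
          if along < 0 or along > lookahead:
              return False
          off = abs(dx * hy - dy * hx)                    # perpendicular distance to the path
          return off <= scope_radius

      def prefetch(broadcast_items, position, heading_deg, lookahead=5.0):
          return [item["id"] for item in broadcast_items
                  if will_enter_scope(position, heading_deg,
                                      item["scope_center"], item["scope_radius"], lookahead)]

      if __name__ == "__main__":
          items = [
              {"id": "map_tile_A", "scope_center": (3.0, 0.2), "scope_radius": 1.0},
              {"id": "map_tile_B", "scope_center": (0.0, 4.0), "scope_radius": 1.0},
          ]
          # Host at the origin heading due east: only tile A lies along the route.
          print(prefetch(items, position=(0.0, 0.0), heading_deg=0.0))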

  18. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  19. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy...

  20. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to [lg e + O(lg lg B / lg B)] log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and cache

  1. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using different Web APIs of the Google Drive, Dropbox and Facebook Internet services.

  2. A Cross-Layer Framework for Designing and Optimizing Deeply-Scaled FinFET-Based Cache Memories

    Directory of Open Access Journals (Sweden)

    Alireza Shafaei

    2015-08-01

    This paper presents a cross-layer framework for designing and optimizing energy-efficient cache memories made of deeply-scaled FinFET devices. The proposed design framework spans the device, circuit and architecture levels and considers both super- and near-threshold modes of operation. Initially, at the device level, seven FinFET devices on a 7-nm process technology are designed, in which only one geometry-related parameter (e.g., fin width, gate length, gate underlap) is changed per device. Next, at the circuit level, standard 6T and 8T SRAM cells made of these 7-nm FinFET devices are characterized and compared in terms of static noise margin, access latency, leakage power consumption, etc. Finally, cache memories with all different combinations of devices and SRAM cells are evaluated at the architecture level using a modified version of the CACTI tool with FinFET support and other considerations for deeply-scaled technologies. Using this design framework, it is observed that an L1 cache memory made of longer-channel FinFET devices operating in the near-threshold regime achieves the minimum-energy operation point.

  3. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  4. 32 CFR 724.120 - National Capital Region (NCR).

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false National Capital Region (NCR). 724.120 Section 724.120 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY PERSONNEL NAVAL DISCHARGE REVIEW BOARD Definitions § 724.120 National Capital Region (NCR). The District of Columbia; Prince...

  5. CSU Final Report on the Math/CS Institute CACHE: Communication-Avoiding and Communication-Hiding at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State University

    2014-06-10

    The CACHE project entails researching and developing new versions of numerical algorithms that result in data reuse that can be scheduled in a communication avoiding way. Since memory accesses take more time than any computation and require the most power, the focus on turning data reuse into data locality is critical to improving performance and reducing power usage in scientific simulations. This final report summarizes the accomplishments at Colorado State University as part of the CACHE project.

  6. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and has shown significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, the operational impact on site services, and the applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  7. Response of surface springs to longwall coal mining Wasatch Plateau, Utah

    International Nuclear Information System (INIS)

    Kadnuck, L.L.M.

    1994-01-01

    High-extraction longwall coal mining creates zones in the overburden where strata bend, fracture, or cave into the mine void. These physical alterations to the overburden stratigraphy have associated effects on the hydrologic regime. The US Bureau of Mines (USBM) studied impacts to the local hydrologic system caused by longwall mining in the Wasatch Plateau, Utah. Surface springs in the vicinity of two coal mines were evaluated for alterations in flow characteristics as mining progressed. Fourteen springs located above the mines were included in the study. Eight of the springs were located over longwall panels, four were located over barrier pillars and mains, and two were located outside the area disturbed by mining. Flow hydrographs for each spring were compared to climatic data and time of undermining to assess if mining in the vicinity had influenced flow. Heights of fracturing and caving in the overburden resulting from seam extraction were calculated using common subsidence formulas, and used in conjunction with elevations of springs to assess if fracturing influenced the water-bearing zones studied. One spring over a panel exhibited a departure from a normally shaped hydrograph after being undermined. Springs located over other mine structures, or outside the mine area, did not show discernible effects from mining. The limited response of the springs was attributed to site-specific conditions that buffered mining impacts, including the elevation of the springs above the mine level and the presence of massive sandstones and swelling clays in the overburden materials

  8. USGS Hill Shade Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS Hill Shade (or Shaded Relief) is a tile cache base map created from the National Elevation Dataset (NED), a seamless dataset of best available raster elevation...

  9. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

    There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, a one-segment digital terrestrial broadcasting service was launched in Japan in 2006, and high-performance digital broadcasting for mobile hosts has recently become available. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (an available area of location-dependent data) and “mobility specification” (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  10. USGS Imagery Only Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS Imagery Only is a tile cache base map of orthoimagery in The National Map visible to the 1:18,000 scale. Orthoimagery data are typically high resolution images...

  11. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
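
    The on-demand, partial-file behaviour of the second implementation can be pictured with a minimal read-through, block-granularity cache. The sketch below stubs out the remote fetch and does not use XRootd's real API; it only illustrates the caching pattern.

      # Minimal read-through cache at block granularity: only the blocks a client
      # actually reads are fetched from the remote and kept locally (a dict here).

      BLOCK_SIZE = 4

      def fetch_remote_block(path, block_no):
          # Stand-in for a network request for one block of the remote file.
          data = REMOTE_FILES[path]
          return data[block_no * BLOCK_SIZE:(block_no + 1) * BLOCK_SIZE]

      class CachingProxy:
          def __init__(self):
              self.blocks = {}            # (path, block_no) -> bytes
              self.remote_fetches = 0

          def read(self, path, offset, length):
              out = b""
              for block_no in range(offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE + 1):
                  key = (path, block_no)
                  if key not in self.blocks:
                      self.blocks[key] = fetch_remote_block(path, block_no)
                      self.remote_fetches += 1
                  out += self.blocks[key]
              start = offset - (offset // BLOCK_SIZE) * BLOCK_SIZE
              return out[start:start + length]

      if __name__ == "__main__":
          REMOTE_FILES = {"/store/file.root": b"ABCDEFGHIJKLMNOPQRSTUVWX"}
          proxy = CachingProxy()
          print(proxy.read("/store/file.root", 6, 6))    # fetches blocks 1 and 2
          print(proxy.read("/store/file.root", 8, 4))    # served entirely from cache
          print("remote fetches:", proxy.remote_fetches) # -> 2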

  12. A New Caching Technique to Support Conjunctive Queries in P2P DHT

    Science.gov (United States)

    Kobatake, Koji; Tagashira, Shigeaki; Fujita, Satoshi

    P2P DHT (Peer-to-Peer Distributed Hash Table) is one of the typical techniques for realizing efficient management of shared resources distributed over a network and a keyword search over such networks in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in P2P DHT. The basic idea of the proposed technique is to share global information on past trials by conducting local caching of search results for conjunctive queries and by registering that fact to the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is experimentally evaluated by simulation. The result of the experiments indicates that by using the proposed method, the amount of returned data is reduced by 60% compared with conventional P2P DHT, which does not support conjunctive queries.
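
    A schematic sketch of the result-caching idea is given below: a node computes the result of a conjunctive (AND) keyword query once, then registers it under a canonical key so that later identical conjunctions are answered from the cached result. A plain dictionary stands in for the DHT, and all names are illustrative; real P2P routing is out of scope.

      # Schematic sketch of result caching for conjunctive (AND) keyword queries.

      dht = {}   # keyword or canonical query key -> posting set / cached result

      def publish(key, value):
          dht[key] = value

      def lookup(key):
          return dht.get(key)

      def conjunctive_query(keywords, node_id="node-42"):
          key = "AND:" + "+".join(sorted(keywords))      # canonical key for the conjunction
          cached = lookup(key)
          if cached is not None:
              return cached, "served from cached result"
          # Fall back to intersecting the per-keyword posting sets from the DHT.
          postings = [lookup(k) or set() for k in keywords]
          result = set.intersection(*postings) if postings else set()
          # Register the fact that this node now caches the conjunction's result.
          publish(key, result)
          return result, f"computed and cached at {node_id}"

      if __name__ == "__main__":
          publish("solar", {1, 2, 3, 5})
          publish("storage", {2, 3, 8})
          print(conjunctive_query(["solar", "storage"]))   # computed, then cached
          print(conjunctive_query(["storage", "solar"]))   # same conjunction: cache hit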

  13. Fault segmentation: New concepts from the Wasatch Fault Zone, Utah, USA

    Science.gov (United States)

    Duross, Christopher; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Hylland, Michael D.; Lund, William R.; Schwartz, David P.

    2016-01-01

    The question of whether structural segment boundaries along multisegment normal faults such as the Wasatch fault zone (WFZ) act as persistent barriers to rupture is critical to seismic hazard analyses. We synthesized late Holocene paleoseismic data from 20 trench sites along the central WFZ to evaluate earthquake rupture length and fault segmentation. For the youngest (segment boundaries, especially for the most recent earthquakes on the north-central WFZ, are consistent with segment-controlled ruptures. However, broadly constrained earthquake times, dissimilar event times along the segments, the presence of smaller-scale (subsegment) boundaries, and areas of complex faulting permit partial-segment and multisegment (e.g., spillover) ruptures that are shorter (~20–40 km) or longer (~60–100 km) than the primary segment lengths (35–59 km). We report a segmented WFZ model that includes 24 earthquakes since ~7 ka and yields mean estimates of recurrence (1.1–1.3 kyr) and vertical slip rate (1.3–2.0 mm/yr) for the segments. However, additional rupture scenarios that include segment boundary spatial uncertainties, floating earthquakes, and multisegment ruptures are necessary to fully address epistemic uncertainties in rupture length. We compare the central WFZ to paleoseismic and historical surface ruptures in the Basin and Range Province and central Italian Apennines and conclude that displacement profiles have limited value for assessing the persistence of segment boundaries but can aid in interpreting prehistoric spillover ruptures. Our comparison also suggests that the probabilities of shorter and longer ruptures on the WFZ need to be investigated.

  14. Big Data Caching for Networking: Moving from Cloud to Edge

    OpenAIRE

    Zeydan, Engin; Baştuğ, Ejder; Bennis, Mehdi; Kader, Manhal Abdel; Karatepe, Alper; Er, Ahmet Salih; Debbah, Mérouane

    2016-01-01

    In order to cope with the relentless data tsunami in 5G wireless networks, current approaches such as acquiring new spectrum, deploying more base stations (BSs) and increasing nodes in mobile packet core networks are becoming ineffective in terms of scalability, cost and flexibility. In this regard, context-aware 5G networks with edge/cloud computing and exploitation of big data analytics can yield significant gains to mobile operators. In this article, proactive content caching in...

  15. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber; Alameer, Alaa; Chaaban, Anas; Sezgin, Aydin; Paulraj, Arogyaswami

    2017-01-01

    the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file

  16. Impacto de la memoria cache en la aceleración de la ejecución de algoritmo de detección de rostros en sistemas empotrados

    Directory of Open Access Journals (Sweden)

    Alejandro Cabrera Aldaya

    2012-06-01

    This work analyzes the impact of the cache memory on the speed-up of the Viola-Jones face detection algorithm on a processing system based on the Microblaze processor embedded in an FPGA. The algorithm is presented, a software implementation of it is described, and its most relevant functions and the locality characteristics of the instructions and data are analyzed. The impact of the instruction and data caches is analyzed, in terms of both their capacities (between 2 and 16 kB) and their line sizes (4 and 8 words). The results obtained using a Spartan3A Starter Kit development board based on a Spartan3A XC3S700A FPGA, with the Microblaze processor at 62.5 MHz and 64 MB of external DDR2 memory at 125 MHz, show a greater impact of the instruction cache than of the data cache, with optimal values of 8 kB for the instruction cache and between 4 and 16 kB for the data cache. With these caches, a speed-up of 17 times is achieved relative to executing the algorithm from external memory. The cache line size has little influence on the speed-up of the algorithm.

  17. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

    Today independent publishers are offering digital libraries with fulltext archives. In an attempt to provide a single user-interface to a large set of archives, the studied Article-Database-Service offers a consolidated interface to a geographically distributed set of archives. While this approach ... as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can ...

  18. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in a cloud-based video surveillance system, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.

  19. dCache, towards Federated Identities & Anonymized Delegation

    Science.gov (United States)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect in regards to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a more fine-granular authorization mechanism that supports anonymized delegation.

  20. Something different - caching applied to calculation of impedance matrix elements

    CSIR Research Space (South Africa)

    Lysko, AA

    2012-09-01

    Full Text Available of the multipliers, the approximating functions are used any required parameters, such as input impedance or gain pattern etc. The method is relatively straightforward but, especially for small to medium matrices, requires spending time on filling... of the computing the impedance matrix for the method of moments, or a similar method, such as boundary element method (BEM) [22], with the help of the flowchart shown in Figure 1. Input Parameters (a) Search the cached data for a match (b) A match found...

  1. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one main problem in this kind of approaches, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, cache and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage pattern, which one fits the user behavior at a given time. The accuracy of the pattern recognition model in distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained since the first deployment moments.

  2. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

    In multitasking real-time systems, the choice of scheduling algorithm is an important factor to ensure that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which then affect the schedulability of the task. In this paper we compare the FP and EDF scheduling algorithms in the presence of CRPD using state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF, thus making the choice of the task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.

  3. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution

  4. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  5. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data-federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching-proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file-system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...
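
    A minimal sketch of the second, on-demand mode described above: reads are served from locally cached blocks, and missing blocks are fetched from the remote source and stored for later reuse. The 1 MiB block size and the fetch_remote_block callback are illustrative assumptions, not XRootd's actual interface.

```python
import os

BLOCK_SIZE = 1 << 20  # 1 MiB blocks (assumed, not XRootd's configuration)

def cached_read(cache_dir, remote_path, offset, length, fetch_remote_block):
    """Serve a read from per-block cache files, fetching missing blocks on demand.
    fetch_remote_block(path, offset, size) is a hypothetical remote-read callback.
    (End-of-file handling is omitted for brevity.)"""
    data = bytearray()
    first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
    for blk in range(first, last + 1):
        blk_file = os.path.join(cache_dir, f"{remote_path.replace('/', '_')}.{blk}")
        if os.path.exists(blk_file):                 # cache hit: read local copy
            with open(blk_file, "rb") as f:
                block = f.read()
        else:                                        # miss: fetch and persist
            block = fetch_remote_block(remote_path, blk * BLOCK_SIZE, BLOCK_SIZE)
            with open(blk_file, "wb") as f:
                f.write(block)
        data += block
    start = offset - first * BLOCK_SIZE
    return bytes(data[start:start + length])
```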

  6. DIFFERENTIAL EFFECT OF NATIONAL VS. REGIONAL CELEBRITIES ON CONSUMER ATTITUDES

    OpenAIRE

    Varsha JAIN; Subhadip ROY; Abhishek KUMAR; Anusha KABRA

    2010-01-01

    The present study explores the differential effects of having a National/Regional celebrity in an advertisement/endorsement. More specifically, the study intends to find out whether a National celebrity would have a more favorable impact on consumer attitudes than a Regional celebrity when endorsing the same product. Experimental design was used as the research methodology. A 3 (National Celebrity/Regional Celebrity/No Celebrity) X 2 (High/Low Involvement Product) design was conducted on stud...

  7. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    Science.gov (United States)

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

    Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraint of estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was

  8. Population genetic structure and its implications for adaptive variation in memory and the hippocampus on a continental scale in food-caching black-capped chickadees.

    Science.gov (United States)

    Pravosudov, V V; Roth, T C; Forister, M L; Ladage, L D; Burg, T M; Braun, M J; Davidson, B S

    2012-09-01

    Food-caching birds rely on stored food to survive the winter, and spatial memory has been shown to be critical in successful cache recovery. Both spatial memory and the hippocampus, an area of the brain involved in spatial memory, exhibit significant geographic variation linked to climate-based environmental harshness and the potential reliance on food caches for survival. Such geographic variation has been suggested to have a heritable basis associated with differential selection. Here, we ask whether population genetic differentiation and potential isolation among multiple populations of food-caching black-capped chickadees is associated with differences in memory and hippocampal morphology by exploring population genetic structure within and among groups of populations that are divergent to different degrees in hippocampal morphology. Using mitochondrial DNA and 583 AFLP loci, we found that population divergence in hippocampal morphology is not significantly associated with neutral genetic divergence or geographic distance, but instead is significantly associated with differences in winter climate. These results are consistent with variation in a history of natural selection on memory and hippocampal morphology that creates and maintains differences in these traits regardless of population genetic structure and likely associated gene flow. Published 2012. This article is a US Government work and is in the public domain in the USA.

  9. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    Science.gov (United States)

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

    Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize them, account for their impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ±12.84; 70% female) recruited from an HF clinic completed the CACHS in 2014, and results were evaluated using classical test theory and item response theory. Items would be deleted for low (.95) endorsement, low (.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels ( .5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance eight items; monitoring seven items; and management five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05), and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.

  10. NATIONAL AND SUB-NATIONAL OFFSHORING IMPACT ON EMPLOYMENT: AN APPLICATION TO MADRID REGION

    Directory of Open Access Journals (Sweden)

    María Ángeles Tobarra Gómez

    2016-07-01

    Full Text Available The effect of delocalization on a national economy has been widely studied; however, subnational delocalization remains an unexplored field for researchers. This paper studies the effects of fragmentation and the subsequent localization outside the region or abroad on the level of industrial and services employment in the Madrid region. We work with Madrid data from regional input-output tables and estimate a labour demand function using panel data. Our results show a significant but small negative effect on regional employment of intra-industrial inputs from the national economy and abroad, while imported inputs from other sectors and origins are complementary to employment, resulting in a positive net effect on employment. The increasing specialization in main activities and the use of external providers by firms have a positive impact on the employment of the Madrid region.

  11. REGIONAL JUSTICE AND NATIONAL UNITY

    Directory of Open Access Journals (Sweden)

    V. P. Loguinov

    2012-01-01

    Full Text Available In regions of Russia that lag behind the national average both in pay and in economic development, the problem of raising living standards should be solved through a nation-wide program trusted by the peoples of Russia regardless of ethnicity. It should include the creation of jobs, the leveling of wages, the abolition of fees for education and for a significant portion of health services, reduced utility tariffs and prices for public transportation, the introduction of an income-based progressive taxation system, improved law enforcement activities, etc. Considerable attention is paid to the analysis of macroeconomic indicators of the Russian economy in recent years.

  12. Strengthening Dairy Cooperative through National Development of Livestock Region

    Directory of Open Access Journals (Sweden)

    Priyono

    2016-02-01

    Full Text Available The establishment of dairy cattle development regions needs to be conducted in accordance with the national dairy industry development plan. Dairy cattle regions have been designed and equipped with infrastructure, supporting facilities, technologies, finance, processing, marketing, institutions and human resources. Dairy cooperatives are one of the marketing channels for milk and milk products and have a strategic role in supporting the national dairy industry. Collaboration between dairy cooperatives and smallholder farmers within a district region should be based on agricultural ecosystems, the agribusiness system, integrated farming and a participatory approach. This may develop dairy cooperatives into independent and competitive institutions. Strengthening dairy cooperatives in national dairy cattle regions is carried out through an inventory of institutions and dairy cooperative performance; provision of access to capital, markets and networks as well as education and managerial training; certification and accreditation feasibility analysis; and information technology utilization. Emerging dairy cooperatives are developed towards small and micro enterprises by directing them to establish cooperatives with legal certainty and business development opportunities. Strengthening dairy cooperatives may support dairy cattle development through increased population and milk production. Sustainable dairy cattle development needs to be supported by regional and national government policies.

  13. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  14. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
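
    A minimal sketch of an eviction policy in this spirit, assuming application-supplied expiration timestamps and importance scores: expired entries are removed first, then the least important and oldest entry. This is only an illustration, not the paper's C-SPARQL-based implementation.

```python
import time

# Hedged sketch: a bounded cache whose eviction considers expiration first,
# then an application-supplied "semantic importance" score, then arrival time.
class StreamCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # key -> (arrival, expiration, importance, value)

    def insert(self, key, value, expiration, importance):
        now = time.time()
        self._evict_expired(now)
        if key not in self.items and len(self.items) >= self.capacity:
            # Evict the lowest-importance entry, breaking ties by age.
            victim = min(self.items,
                         key=lambda k: (self.items[k][2], self.items[k][0]))
            del self.items[victim]
        self.items[key] = (now, expiration, importance, value)

    def get(self, key):
        entry = self.items.get(key)
        if entry is None or entry[1] <= time.time():
            return None  # missing or already expired
        return entry[3]

    def _evict_expired(self, now):
        for k in [k for k, (_, exp, _, _) in self.items.items() if exp <= now]:
            del self.items[k]
```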

  15. USGS Hydro Cached Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Hydrography Dataset (NHD) is a comprehensive set of digital spatial data that encodes information about naturally occurring and constructed bodies of...

  16. False alarms and mine seismicity: An example from the Gentry Mountain mining region, Utah. Los Alamos Source Region Project

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, S.R.

    1992-09-23

    Mining regions are a cause of concern for monitoring of nuclear test ban treaties because they present the opportunity for clandestine nuclear tests (i.e. decoupled explosions). Mining operations are often characterized by high seismicity rates and can provide the cover for excavating voids for decoupling. Chemical explosions (seemingly as part of normal mining activities) can be used to complicate the signals from a simultaneous decoupled nuclear explosion. Thus, most concern about mines has dealt with the issue of missed violations to a test ban treaty. In this study, we raise the diplomatic concern of false alarms associated with mining activities. Numerous reports and papers have been published about anomalous seismicity associated with mining activities. As part of a large discrimination study in the western US (Taylor et al., 1989), we had one earthquake that was consistently classified as an explosion. The magnitude 3.5 disturbance occurred on May 14, 1981 and was conspicuous in its lack of Love waves, relative lack of high-frequency energy, low Lg/Pg ratio, and high m_b - M_s. A moment-tensor solution by Patton and Zandt (1991) indicated the event had a large implosional component. The event occurred in the Gentry Mountain coal mining region in the eastern Wasatch Plateau, Utah. Using a simple source representation, we modeled the event as a tabular excavation collapse that occurred as a result of normal mining activities. This study raises the importance of having a good catalogue of seismic data and information about mining activities from potential proliferant nations.

  17. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

    A parallel distribution sweeping framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained using this framework. The obtained algorithms were efficient but not optimal. In this paper, we improve the framework to obtain algorithms with the optimal I/O complexity of O(sortp(N) + K/PB) for a number of problems on axis-aligned objects; P denotes the number of cores/processors, B denotes the number of elements that fit in a cache line, N and K denote the sizes of the input and output, respectively, and sortp(N) denotes the I/O complexity of sorting N items using P processors in the PEM model.

  18. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

    In this paper we present an implicit dictionary with the working set property, i.e. a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and ...
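
    The working set property itself can be illustrated with a toy move-to-front list, where the cost of a search grows with how long ago the key was last accessed. This is only a sketch of the property, not of the paper's implicit, cache-oblivious structure (which additionally achieves the stated logarithmic bounds in an array with no extra space).

```python
# Toy illustration of the working-set property: recently searched keys are
# cheap to find again, rarely searched keys are expensive. Not the paper's
# data structure.
class MoveToFrontDict:
    def __init__(self):
        self.items = []  # list of (key, value), most recently used first

    def insert(self, key, value):
        self.items.insert(0, (key, value))

    def search(self, key):
        for i, (k, v) in enumerate(self.items):
            if k == key:
                self.items.insert(0, self.items.pop(i))  # move to front
                return v, i + 1  # value and number of probes used
        return None, len(self.items)

d = MoveToFrontDict()
for x in "abcdef":
    d.insert(x, x.upper())
print(d.search("f"))  # ('F', 1): searched for recently (just inserted)
print(d.search("a"))  # ('A', 6): not touched since insertion, full scan
```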

  19. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    Science.gov (United States)

    1979-02-01

    classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which...additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water...project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  20. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents a uniquely designed, interference-minimizing, high-performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers, delivering seamless end-to-end I/O performance. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  1. Temperature and Discharge on a Highly Altered Stream in Utah's Cache Valley

    OpenAIRE

    Pappas, Andy

    2013-01-01

    To study the River Continuum Concept (RCC) and the Serial Discontinuity Hypothesis (SDH), I looked at temperature and discharge changes along 52 km of the Little Bear River in Cache Valley, Utah. The Little Bear River is a fourth order stream with one major reservoir, a number of irrigation diversions, and one major tributary, the East Fork of the Little Bear River. Discharge data was collected at six sites on 29 September 2012 and temperature data was collected hourly at eleven sites from 1 ...

  2. Factors affecting marsh vegetation at the Liberty Island Conservation Bank in the Cache Slough region of the Sacramento–San Joaquin Delta, California

    Science.gov (United States)

    Orlando, James L.; Drexler, Judith Z.

    2017-07-07

    The Liberty Island Conservation Bank (LICB) is a tidal freshwater marsh restored for the purpose of mitigating adverse effects on sensitive fish populations elsewhere in the region. The LICB was completed in 2012 and is in the northern Cache Slough region of the Sacramento–San Joaquin Delta. The wetland vegetation at the LICB is stunted and yellow-green in color (chlorotic) compared to nearby wetlands. A study was done to investigate three potential causes of the stunted and chlorotic vegetation: (1) improper grading of the marsh plain, (2) pesticide contamination from agricultural and urban inputs upstream from the site, (3) nitrogen-deficient soil, or some combination of these. Water samples were collected from channels at five sites, and soil samples were collected from four wetlands, including the LICB, during the summer of 2015. Real-time kinematic global positioning system (RTK-GPS) elevation surveys were completed at the LICB and north Little Holland Tract, a closely situated natural marsh that has similar hydrodynamics to the LICB but contains healthy marsh vegetation. The results showed no significant differences in carbon or nitrogen content in the surface soils or in pesticides in water among the sites. The elevation survey indicated that the mean elevation of the LICB was about 26 centimeters higher than that of the north Little Holland Tract marsh. Because marsh plain elevation largely determines the hydroperiod of a marsh, these results indicated that the LICB has a hydroperiod that differs from that of neighboring north Little Holland Tract marsh. This difference in hydroperiod contributed to the lower stature and decreased vigor of wetland vegetation at the LICB. Although the LICB cannot be regraded without great expense, it could be possible to reduce the sharp angle of the marsh edge to facilitate deeper and more frequent tidal flooding along the marsh periphery. Establishing optimal elevations for restored wetlands is necessary for obtaining

  3. National and regional asthma programmes in Europe

    OpenAIRE

    Olof Selroos; Maciej Kupczyk; Piotr Kuna; Piotr Łacwik; Jean Bousquet; David Brennan; Susanna Palkonen; Javier Contreras; Mark FitzGerald; Gunilla Hedlin; Sebastian L. Johnston; Renaud Louis; Leanne Metcalf; Samantha Walker; Antonio Moreno-Galdó

    2015-01-01

    This review presents seven national asthma programmes to support the European Asthma Research and Innovation Partnership in developing strategies to reduce asthma mortality and morbidity across Europe. From published data it appears that in order to influence asthma care, national/regional asthma programmes are more effective than conventional treatment guidelines. An asthma programme should start with the universal commitments of stakeholders at all levels and the programme has to be endorse...

  4. Using dCache in Archiving Systems oriented to Earth Observation

    Science.gov (United States)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The object of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies that are mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, testing of several archiving solutions was performed in order to evaluate their suitability. In particular, dCache aims to provide a file-system tree view of the data repository, exchanging this data with backend (tertiary) storage systems, as well as space management, pool attraction, dataset replication, hot-spot determination and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space. Data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of big computer centers and universities with large amounts of data, putting their efforts together and founding EMI (European Middleware Initiative). At present, dCache is mature enough to be deployed, being used by several research centers of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capabilities in a simulated environment to meet the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as providing maximum quality for storage and dissemination services at minimum cost.

  5. Efficiently GPU-accelerating long kernel convolutions in 3-D DIRECT TOF PET reconstruction via memory cache optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sungsoo; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing; Matej, Samuel [Pennsylvania Univ., Philadelphia, PA (United States). Dept. of Radiology

    2011-07-01

    The DIRECT represents a novel approach for 3-D Time-of-Flight (TOF) PET reconstruction. Its novelty stems from the fact that it performs all iterative predictor-corrector operations directly in image space. The projection operations now amount to convolutions in image space, using long TOF (resolution) kernels. While for spatially invariant kernels the computational complexity can be algorithmically overcome by replacing spatial convolution with multiplication in Fourier space, spatially variant kernels cannot use this shortcut. Therefore in this paper, we describe a GPU-accelerated approach for this task. However, the intricate parallel architecture of GPUs poses its own challenges, and careful memory and thread management is the key to obtaining optimal results. As convolution is mainly memory-bound we focus on the former, proposing two types of memory caching schemes that warrant best cache memory re-use by the parallel threads. In contrast to our previous two-stage algorithm, the schemes presented here are both single-stage which is more accurate. (orig.)
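
    The cache-blocking idea behind such schemes can be sketched, in plain NumPy and purely for illustration, as processing the image in tiles so that each tile and its kernel segments are reused from fast memory; the real implementation is a CUDA kernel, and the tile size and 1-D setting below are assumptions.

```python
import numpy as np

# Hedged 1-D sketch of tiled convolution with spatially variant kernels:
# the output is computed tile by tile so the working set stays small,
# mimicking the shared-memory/cache staging done on the GPU.
def tiled_variant_convolution(image, kernels, tile=64):
    """image: 1-D array; kernels[i]: odd-length kernel centred at voxel i."""
    out = np.zeros_like(image)
    n = len(image)
    for t0 in range(0, n, tile):
        for i in range(t0, min(t0 + tile, n)):
            k = kernels[i]
            half = len(k) // 2
            lo, hi = max(0, i - half), min(n, i + half + 1)
            out[i] = np.dot(image[lo:hi], k[half - (i - lo):half + (hi - i)])
    return out

image = np.random.rand(256)
kernels = [np.ones(33) / 33.0 for _ in range(256)]  # toy kernels (invariant here)
print(tiled_variant_convolution(image, kernels)[:4])
```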

  6. Behavior characterization of the shared last-level cache in a chip multiprocessor

    OpenAIRE

    Benedicte Illescas, Pedro

    2014-01-01

    This project consists in analyzing different aspects of the memory hierarchy and understanding its influence on overall system performance. The aspects analyzed are cache replacement algorithms, memory mapping schemes, and memory page policies.
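
    Studies of this kind typically replay an address trace through a small cache model. Below is a minimal set-associative, LRU-replacement simulator of the sort that could be used; the geometry (1024 sets, 16 ways, 64-byte lines) is an arbitrary assumption.

```python
from collections import OrderedDict

# Hedged sketch: a set-associative last-level cache model with LRU replacement,
# counting hits and misses for an address trace.
class SetAssociativeCache:
    def __init__(self, n_sets=1024, ways=16, line=64):
        self.n_sets, self.ways, self.line = n_sets, ways, line
        self.sets = [OrderedDict() for _ in range(n_sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        tag = addr // self.line          # cache-line address
        s = self.sets[tag % self.n_sets]
        if tag in s:
            s.move_to_end(tag)           # refresh LRU position on a hit
            self.hits += 1
        else:
            self.misses += 1
            if len(s) >= self.ways:
                s.popitem(last=False)    # evict the least recently used line
            s[tag] = True

cache = SetAssociativeCache()
for _ in range(2):                       # second pass fits entirely: all hits
    for addr in range(0, 1 << 20, 64):
        cache.access(addr)
print(cache.hits, cache.misses)          # 16384 16384
```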

  7. Stratigraphic architecture of a fluvial-lacustrine basin-fill succession at Desolation Canyon, Uinta Basin, Utah: Reference to Walthers’ Law and implications for the petroleum industry

    Science.gov (United States)

    Ford, Grace L.; David R. Pyles,; Dechesne, Marieke

    2016-01-01

    A continuous window into the fluvial-lacustrine basin-fill succession of the Uinta Basin is exposed along a 48-mile (77-kilometer) transect up the modern Green River from Three Fords to Sand Wash in Desolation Canyon, Utah. In ascending order the stratigraphic units are: 1) Flagstaff Limestone, 2) lower Wasatch member of the Wasatch Formation, 3) middle Wasatch member of the Wasatch Formation, 4) upper Wasatch member of the Wasatch Formation, 5) Uteland Butte member of the lower Green River Formation, 6) lower Green River Formation, 7) Renegade Tongue of the lower Green River Formation, 8) middle Green River Formation, and 9) the Mahogany oil shale zone marking the boundary between the middle and upper Green River Formations. This article uses regional field mapping, geologic maps, photographs, and descriptions of each stratigraphic unit, including: 1) bounding surfaces, 2) key upward stratigraphic characteristics within the unit, and 3) longitudinal changes along the river transect. This information is used to create a north-south cross section through the basin-fill succession and a detailed geologic map of Desolation Canyon. The cross section documents stratigraphic relationships previously unreported and contrasts with earlier interpretations in two ways: 1) abrupt upward shifts in the stratigraphy documented herein contrast with the gradual interfingering relationships proposed by Ryder et al. (1976) and Fouch et al. (1994), and 2) we document fluvial deposits of the lower and middle Wasatch to be distinct and more widespread than previously recognized. In addition, we document that the Uteland Butte member of the lower Green River Formation was deposited in a lacustrine environment in Desolation Canyon.

  8. MODERNIZATION OF NATIONAL ECONOMY THROUGH DEVELOPMENT OF REGIONAL PRODUCTION INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    T. G. Guilyadov

    2011-01-01

    Full Text Available Any region’s economy comprises production and non-production spheres, which are interconnected and equally important. A key part of any regional production sphere is its production infrastructure, whose significance is twofold: it defines the level of regional economic development on the one hand, and the interrelation with the whole national economy on the other. The largest and most important regional production infrastructure elements are transportation infrastructure, information/communication infrastructure and communal infrastructure. The analysis and solutions suggested in the article for issues related to the development of these basic regional production infrastructure elements will be very useful for the modernization of the national economy.

  9. Design to monitor trend in abundance and presence of American beaver (Castor canadensis) at the national forest scale.

    Science.gov (United States)

    Beck, Jeffrey L; Dauwalter, Daniel C; Gerow, Kenneth G; Hayward, Gregory D

    2010-05-01

    Wildlife conservationists design monitoring programs to assess population dynamics, project future population states, and evaluate the impacts of management actions on populations. Because agency mandates and conservation laws call for monitoring data to elicit management responses, it is imperative to design programs that match the administrative scale for which management decisions are made. We describe a program to monitor population trends in American beaver (Castor canadensis) on the US Department of Agriculture, Black Hills National Forest (BHNF) in southwestern South Dakota and northeastern Wyoming, USA. Beaver have been designated as a management indicator species on the BHNF because of their association with riparian and aquatic habitats and its status as a keystone species. We designed our program to monitor the density of beaver food caches (abundance) within sampling units with beaver and the proportion of sampling units with beavers present at the scale of a national forest. We designated watersheds as sampling units in a stratified random sampling design that we developed based on habitat modeling results. Habitat modeling indicated that the most suitable beaver habitat was near perennial water, near aspen (Populus tremuloides) and willow (Salix spp.), and in low gradient streams at lower elevations. Results from the initial monitoring period in October 2007 allowed us to assess costs and logistical considerations, validate our habitat model, and conduct power analyses to assess whether our sampling design could detect the level of declines in beaver stated in the monitoring objectives. Beaver food caches were located in 20 of 52 sampled watersheds. Monitoring 20 to 25 watersheds with beaver should provide sufficient power to detect 15-40% declines in the beaver food cache index as well as a twofold decline in the odds of beaver being present in watersheds. Indices of abundance, such as the beaver food cache index, provide a practical measure of

  10. Geological disposal of radioactive wastes: national commitment, local and regional involvement

    International Nuclear Information System (INIS)

    2013-07-01

    Long-term radioactive waste management, including geological disposal, involves the construction of a limited number of facilities and it is therefore a national challenge with a strong local/regional dimension. Public information, consultation and/or participation in environmental or technological decision-making are today's best practice and must take place at the different geographical and political scales. Large-scale technology projects are much more likely to be accepted when stakeholders have been involved in making them possible and have developed a sense of interest in or responsibility for them. In this way, national commitment, and local and regional involvement are two essential dimensions of the complex task of securing continued societal agreement for the deep geological disposal of radioactive wastes. Long-term radioactive waste management, including geological disposal, is a national challenge with a strong local/regional dimension. The national policy frameworks increasingly support participatory, flexible and accountable processes. Radioactive waste management institutions are evolving away from a technocratic stance, demonstrating constructive interest in learning and adapting to societal requirements. Empowerment of the local and regional actors has been growing steadily in the last decade. Regional and local players tend to take an active role concerning the siting and implementation of geological repositories. National commitment and local/regional involvement go hand-in-hand in supporting sustainable decisions for the geological disposal of radioactive waste

  11. Phosphorus in Denmark: national and regional anthropogenic flows

    DEFF Research Database (Denmark)

    Klinglmair, Manfred; Lemming, Camilla; Jensen, Lars Stoumann

    2015-01-01

    Substance flow analyses (SFA) of phosphorus (P) have been examined on a national or supra-national level in various recent studies. SFA studies of P on the country scale or larger can have limited informative value; large differences between P budgets exist within countries and are easily obscured by country-wide average values. To quantify and evaluate these imbalances we integrated a country-scale and regional-scale model of the Danish anthropogenic P flows and stocks. We examine three spatial regions with regard to agriculture, as the main driver for P use, and waste management, the crucial sector for P recovery. The regions are characterised by their differences in agricultural practice, population and industrial density. We show considerable variation in P flows within the country. First, these are driven by agriculture, with mineral fertiliser inputs varying between 3 and 5 kg ha−1 yr−1...

  12. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    Energy Technology Data Exchange (ETDEWEB)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of 18O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  13. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    International Nuclear Information System (INIS)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of 18 O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water

  14. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  15. Caching-Aided Collaborative D2D Operation for Predictive Data Dissemination in Industrial IoT

    OpenAIRE

    Orsino, Antonino; Kovalchukov, Roman; Samuylov, Andrey; Moltchanov, Dmitri; Andreev, Sergey; Koucheryavy, Yevgeni; Valkama, Mikko

    2018-01-01

    Industrial automation deployments constitute challenging environments where moving IoT machines may produce high-definition video and other heavy sensor data during surveying and inspection operations. Transporting massive contents to the edge network infrastructure and then eventually to the remote human operator requires reliable and high-rate radio links supported by intelligent data caching and delivery mechanisms. In this work, we address the challenges of contents dissemination in chara...

  16. Agricultural Influences on Cache Valley, Utah Air Quality During a Wintertime Inversion Episode

    Science.gov (United States)

    Silva, P. J.

    2017-12-01

    Several of northern Utah's intermountain valleys are classified as non-attainment for fine particulate matter. Past data indicate that ammonium nitrate is the major contributor to fine particles and that the gas phase ammonia concentrations are among the highest in the United States. During the 2017 Utah Winter Fine Particulate Study, USDA brought a suite of online and real-time measurement methods to sample particulate matter and potential gaseous precursors from agricultural emissions in the Cache Valley. Instruments were co-located at the State of Utah monitoring site in Smithfield, Utah from January 21st through February 12th, 2017. A Scanning mobility particle sizer (SMPS) and aerodynamic particle sizer (APS) acquired size distributions of particles from 10 nm - 10 μm in 5-min intervals. A URG ambient ion monitor (AIM) gave hourly concentrations for gas and particulate ions and a Chromatotec Trsmedor gas chromatograph obtained 10 minute measurements of gaseous sulfur species. High ammonia concentrations were detected at the Smithfield site with concentrations above 100 ppb at times, indicating a significant influence from agriculture at the sampling site. Ammonia is not the only agricultural emission elevated in Cache Valley during winter, as reduced sulfur gas concentrations of up to 20 ppb were also detected. Dimethylsulfide was the major sulfur-containing gaseous species. Analysis indicates that particle growth and particle nucleation events were both observed by the SMPS. Relationships between gas and particulate concentrations and correlations between the two will be discussed.

  17. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
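
    The C-SCAN ordering referred to above can be sketched in a few lines: pending block requests at or beyond the current head position are served in increasing order, after which the head wraps around to the lowest outstanding request. The paper's buffer-sizing relations and admission-control algorithm are not reproduced here.

```python
# Hedged sketch of C-SCAN (circular SCAN) request ordering.
def c_scan_order(pending_blocks, head):
    ahead = sorted(b for b in pending_blocks if b >= head)
    behind = sorted(b for b in pending_blocks if b < head)
    return ahead + behind  # sweep forward, then wrap to the start

print(c_scan_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [65, 67, 98, 122, 124, 183, 14, 37]
```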

  18. National Priorities List (NPL) Site Polygons, Region 9, 2014, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  19. National Priorities List (NPL) Site Polygons, Region 9, 2013, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  20. National Priorities List (NPL) Site Points, Region 9, 2015, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site point locations for the US EPA, Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  1. National Priorities List (NPL) Site Polygons, Region 9, 2015, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  2. National Priorities List (NPL) Site Polygons, Region 9, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  3. National Priorities List (NPL) Site Polygons, Region 9, 2010, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  4. National Priorities List (NPL) Site Points, Region 9, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site point locations for the US EPA, Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  5. National Priorities List (NPL) Site Points, Region 9, 2014, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site point locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  6. National Priorities List (NPL) Site Polygons, Region 9, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — NPL site POLYGON locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup...

  7. Hydrogeochemical and stream sediment detailed geochemical survey for Thomas Range-Wasatch, Utah. Cottonwood project area

    International Nuclear Information System (INIS)

    Butz, T.R.; Bard, C.S.; Witt, D.A.; Helgerson, R.N.; Grimes, J.G.; Pritz, P.M.

    1980-01-01

    Results of Cottonwood project area of the Thomas Range-Wasatch detailed geochemical survey are reported. Field and laboratory data are presented for 15 groundwater samples, 79 stream sediment samples, and 85 radiometric readings. Statistical and areal distributions of uranium and possible uranium-related variables are given. A generalized geologic map of the project area is provided, and pertinent geologic factors which may be of significance in evaluating the potential for uranium mineralization are briefly discussed. Uranium concentrations in groundwater range from 0.25 to 3.89 ppB. The highest concentrations are from groundwaters from the Little Cottonwood and Ferguson Stocks. Variables that appear to be associated with uranium in groundwater include cobalt, iron, potassium, manganese, nickel, sulfate, and to a lesser extent, molybdenum and strontium. This association is attributed to the Monzonitic Little Cottonwood Stock, granodioritic to granitic and lamprophyric dikes, and known sulfide deposits. Soluble uranium concentrations (U-FL) in stream sediments range from 0.31 to 72.64 ppM. Total uranium concentrations (U-NT) range from 1.80 to 75.20 ppM. Thorium concentrations range from <2 to 48 ppM. Anomalous values for uranium and thorium are concentrated within the area of outcrop of the Little Cottonwood and Ferguson Stocks. Variables which are areally associated with high values of uranium, thorium, and the U-FL:U-NT ratio within the Little Cottonwood Stock are barium, copper, molybdenum, and zinc. High concentrations of these variables are located near sulfide deposits within the Little Cottonwood Stock

  8. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  9. Homeland Security: Protecting Airspace in the National Capital Region

    National Research Council Canada - National Science Library

    Elias, Bart

    2005-01-01

    .... While the administration is currently seeking to make the airspace restrictions in the National Capital Region permanent, Congress has pushed for an easing of restrictions on GA aircraft at Ronald...

  10. Regional climate change and national responsibilities

    Science.gov (United States)

    Hansen, James; Sato, Makiko

    2016-03-01

    Global warming over the past several decades is now large enough that regional climate change is emerging above the noise of natural variability, especially in the summer at middle latitudes and year-round at low latitudes. Despite the small magnitude of warming relative to weather fluctuations, effects of the warming already have notable social and economic impacts. Global warming of 2 °C relative to preindustrial would shift the ‘bell curve’ defining temperature anomalies a factor of three larger than observed changes since the middle of the 20th century, with highly deleterious consequences. There is striking incongruity between the global distribution of nations principally responsible for fossil fuel CO2 emissions, known to be the main cause of climate change, and the regions suffering the greatest consequences from the warming, a fact with substantial implications for global energy and climate policies.

  11. Study, Talk, and Action. A Report of a National Conference on Regionalism and Regionalization in American Postsecondary Education.

    Science.gov (United States)

    Martorana, S. V., Ed.; Nespoli, Lawrence A., Ed.

    This report of a National Conference on Regionalism and Regionalization in American Postsecondary Education contains an overview and summary of the final project report, a keynote address, four papers on the implications of regionalism, some reactor comments, an essay on leadership, and four descriptive accounts of operational regionalization…

  12. Developing Spatial Data Infrastructure in Croatia – Incorporating National and Regional Approach

    Directory of Open Access Journals (Sweden)

    Željko Bačić

    2010-12-01

    part of the development of the regional and European spatial data infrastructure (ESDI). In this context, Croatia has recognized South-Eastern Europe as a region sharing many similarities, whether in terms of historical legacy, degree of development, current development directions, reform activities or the SDI development stage, although it should be pointed out that there are also differences. Given the above-mentioned similarities, Croatia has instigated regional cooperation linked to the development of both national and regional SDIs. Concrete achievements on this road are the establishment of regional cooperation between cadastral organizations, the launching of the annual regional conference on the cadastre and the preparation of the first regional SDI project, entitled INSPIRATION – Spatial Data Infrastructure in the Western Balkans (INSPIRATION project). At the European level, the SGA is a member of EuroGeographics, a European organisation whose purpose is the improvement of ESDI development, including topographic information, cadastre and land information. This paper describes the role and activities of the SGA in SDI establishment at the national, regional and European levels.

  13. The revival and preservation of historical memory: national and regional aspects

    Directory of Open Access Journals (Sweden)

    S. I. Svitlenko

    2017-07-01

    Full Text Available The article reveals the urgency of the problems of the revival and preservation of historical memory in the national and regional contexts, at different stages of the past and in the present. It is shown that the historical memory of our region, like that of most other regions of the country, reflects five main periods: Russian, Cossack, Imperial, Soviet and modern Ukrainian. It is noted that the heterogeneity of historical memory caused rather substantial differences among its individual and collective carriers, despite the fact that the inhabitants of the region formed a territorial community distinguished by social, ethnic, political, religious, linguistic, cultural, professional, age and sex features. Attention is focused on those key features which produced quite stable traits of historical memory inherent in our region in different historical periods. The value of the revival and preservation of historical memory for the development of modern national identity and modern historical consciousness and thinking is defined.

  14. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  15. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. Improved cache performance in Monte Carlo transport calculations using energy banding

    Science.gov (United States)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
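
    A minimal sketch of the banding idea follows (illustrative only, not the authors' code: the band count, table layout, and nearest-row lookup are assumptions). Particles are grouped by energy band so that each pass touches only one band-sized slice of the cross-section table, which is what lets that slice stay resident in cache.

        # Hypothetical sketch of energy banding for a Monte Carlo lookup loop.
        # Particles are bucketed by energy band and processed band by band, so
        # only that band's slice of the cross-section table is hot at a time.
        import numpy as np

        def process_by_band(energies, xs_table, e_min, e_max, n_bands=8):
            edges = np.linspace(e_min, e_max, n_bands + 1)
            band_of = np.clip(np.digitize(energies, edges) - 1, 0, n_bands - 1)
            results = np.empty_like(energies)
            rows_per_band = len(xs_table) // n_bands
            for b in range(n_bands):
                idx = np.where(band_of == b)[0]        # particles in band b
                if idx.size == 0:
                    continue
                lo, hi = b * rows_per_band, (b + 1) * rows_per_band
                band_slice = xs_table[lo:hi]           # small, cache-resident working set
                frac = (energies[idx] - edges[b]) / (edges[b + 1] - edges[b])
                rows = np.clip((frac * (hi - lo)).astype(int), 0, hi - lo - 1)
                results[idx] = band_slice[rows]        # nearest-row lookup within the band
            return results

        sigma = process_by_band(np.random.uniform(0.1, 20.0, 1_000_000),
                                np.random.rand(4096), 0.1, 20.0)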

  18. THE SPECIFICS OF NATIONAL REGIONAL DEVELOPMENT STRATEGY OF THE REPUBLIC OF MOLDOVA

    Directory of Open Access Journals (Sweden)

    Tatiana M. TOFAN

    2014-11-01

    Full Text Available The experience of implementing the National Strategy of Regional Development in the Republic of Moldova during the 2010-2014 period has demonstrated the need to develop a monitoring and evaluation methodology for this strategy, with emphasis on the development of regional statistics, on indicators at the project, regional and national levels, and on a methodology for assessing the impact of projects, and to ensure the dissemination of monitoring and evaluation results by placing the information on the website of the Ministry of Regional Development and Construction and the Regional Development Agencies and in periodic newsletters. Ensuring transparency in monitoring and evaluating the implementation of regional development policy confirms the responsibility of the actors involved in the area, secures the right to take effective measures to correct activities which do not correspond to the policy, and provides the opportunity to examine the dynamics of the processes of socio-economic development of the regions.

  19. False alarms and mine seismicity: An example from the Gentry Mountain mining region, Utah

    International Nuclear Information System (INIS)

    Taylor, S.R.

    1992-01-01

    Mining regions are a cause of concern for monitoring of nuclear test ban treaties because they present the opportunity for clandestine nuclear tests (i.e. decoupled explosions). Mining operations are often characterized by high seismicity rates and can provide the cover for excavating voids for decoupling. Chemical explosions (seemingly as part of normal mining activities) can be used to complicate the signals from a simultaneous decoupled nuclear explosion. Thus, most concern about mines has dealt with the issue of missed violations to a test ban treaty. In this study, we raise the diplomatic concern of false alarms associated with mining activities. Numerous reports and papers have been published about anomalous seismicity associated with mining activities. As part of a large discrimination study in the western US (Taylor et al., 1989), we had one earthquake that was consistently classified as an explosion. The magnitude 3.5 disturbance occurred on May 14, 1981 and was conspicuous in its lack of Love waves, relative lack of high-frequency energy, low Lg/Pg ratio, and high m_b - M_s. A moment-tensor solution by Patton and Zandt (1991) indicated the event had a large implosional component. The event occurred in the Gentry Mountain coal mining region in the eastern Wasatch Plateau, Utah. Using a simple source representation, we modeled the event as a tabular excavation collapse that occurred as a result of normal mining activities. This study raises the importance of having a good catalogue of seismic data and information about mining activities from potential proliferant nations.

  20. Potential of the Kakadu National Park Region

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    The Committee reviewed the potential of the Kakadu National Park region in the Northern Territory with particular reference to the nature of the resources available for exploitation and the impact of utilisation of these resources, particularly mining and tourism. Individual chapters discuss the Park, tourism, mineral resources (particularly the environmental and economic impacts of the Ranger Uranium Mine and the potential impacts of mining the Koongarra and Jabiluka deposits), the town of Jabiru, commercial fishing, other issues (the scientific resource, crocodiles, introduced species and fire), and park management and control (including a review of the role of the Office of the Supervising Scientist for the Alligator Rivers Region). A number of recommendations are made and the dissenting report of three of the Committee's members is included.

  1. National and regional economic impacts of electricity production from energy crops in the Netherlands

    NARCIS (Netherlands)

    Vlasblom, J.; Broek, R. van den; Meeusen-van Onna, M.

    1998-01-01

    Besides the known environmental benefits, national and regional economic impacts may form additional arguments for stimulating government measures in favour of electricity production from energy crops in the Netherlands. Therefore, we compared the economic impacts (at both national and regional

  2. [Model for the regional allocation of the National Health Care Fund].

    Science.gov (United States)

    Loreti, P; Muzzi, A; Bruni, G

    1989-01-01

    In 1978 a National Health Service (Servizio Sanitario Nazionale = SSN) was constituted in Italy which exercises jurisdiction in the sector of health care and is duty bound to assist all citizens. Basically speaking, the NHS is organized on three levels (national, regional and local) with the management of direct operations assigned to the (about 700) Local Health Boards (Unità Sanitaria Locale = USL) each of which covers a well determined territorial area. The Authors indicate that rarely discussed or evaluated are the procedures for the regional allocation of health care funding which is determined by Parliament within the ambit of the National Budget (The National Health Care Fund). The current allocation model distributes the available capital resources for each expense item (e.g. hospitalization, pharmaceutical assistance, etc.) on a per capita basis with respect to the regional populations modified in order to allow for differing degrees of health care requirements. The regional populations are subdivided into broad age groups (e.g. children, intermediary, the elderly) with specific weighting factors expressing the different level of health care requirements. The application of these weighting factors alters the regional populations (with no change in the total population of the country) in order to express them in equivalent units with respect to the health care need. Moreover, standardized death rates are introduced into the model as indicators of the different health risk, and their application leads to a further modification in the level of the regional populations so as to express them in equivalent units with respect to the health risk as well. Once the available financial resources have been subdivided in this "theoretical" way, the following corrective factors are applied: a) hospital mobility correction factor: the regions with a credit admissions balance are assigned an additional cost which is borne by the regions with a debit admissions balance; b
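
    The weighted per-capita step described above can be illustrated with a small sketch; the age groups, weights, and figures below are invented for illustration and are not the values used in the Italian model.

        # Per-capita allocation with age-group weighting: regional populations are
        # converted into "equivalent" units of need before the fund is split.
        # Standardized death rates would further scale these equivalent populations.
        def allocate(fund, regions, weights):
            # regions: {name: {age_group: population}}, weights: {age_group: need weight}
            equivalent = {
                name: sum(pop * weights[group] for group, pop in groups.items())
                for name, groups in regions.items()
            }
            total = sum(equivalent.values())
            return {name: fund * eq / total for name, eq in equivalent.items()}

        regions = {
            "North": {"children": 1.0e6, "intermediate": 4.0e6, "elderly": 1.5e6},
            "South": {"children": 1.4e6, "intermediate": 3.6e6, "elderly": 1.0e6},
        }
        weights = {"children": 1.2, "intermediate": 1.0, "elderly": 1.8}  # assumed weights
        shares = allocate(100e9, regions, weights)   # split a hypothetical fund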

  3. Neighborhoods and mortality in Sweden: Is deprivation best assessed nationally or regionally?

    Directory of Open Access Journals (Sweden)

    Daniel Oudin Åström

    2018-01-01

    Full Text Available Background: The association between neighborhood deprivation and mortality is well established, but knowledge about whether deprivation is best assessed regionally or nationally is scarce. Objective: The present study aims to examine whether there is a difference in results when using national and county-specific neighborhood deprivation indices and whether the level of urbanization modifies the association between neighborhood deprivation and mortality. Methods: We collected data on the entire population aged above 50 residing in the 21 Swedish counties on January 1, 1990, and followed them for mortality due to all causes and for coronary heart disease. The association between neighborhood deprivation and mortality was assessed using Cox regression, assuming proportional hazards with attained age as an underlying variable, comparing the 25% most deprived neighborhoods with the 25% most affluent ones within each region, and using both the national and the county-specific indices. The potential interactions were also assessed. Results: The choice of a national or a county-specific index did not affect the estimates to a large extent. The effect of neighborhood deprivation on mortality in metropolitan regions (hazard ratio: 1.21 [1.20-1.22]) was somewhat higher than that in the more rural southern (HR: 1.16 [1.15-1.17]) and northern regions (HR: 1.11 [1.09-1.12]). Conclusions: Our data indicates that the choice of a national or a county-specific deprivation index does not influence the results to a significant extent, but may be of importance in large metropolitan regions. Furthermore, the strength of the association between neighborhood deprivation and mortality is somewhat greater in metropolitan areas than in more rural southern and northern areas. Contribution: The study contributes to a better understanding of the complex association between neighborhood and mortality.
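
    A minimal sketch of the kind of comparison described, assuming the Python lifelines package and hypothetical column names and data (the real analysis used attained age as the time scale and national versus county-specific deprivation indices):

        # Toy Cox proportional-hazards fit contrasting the most deprived with the
        # most affluent neighbourhoods; data and column names are hypothetical.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "follow_up_years":   [8.2, 12.5, 3.1, 20.0, 15.4, 7.7],
            "died":              [1,   0,    1,   1,    0,    0],
            "deprived_quartile": [1,   0,    1,   0,    1,    0],  # 1 = 25% most deprived
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="follow_up_years", event_col="died")
        print(cph.hazard_ratios_)   # hazard ratio for deprived vs affluent neighbourhoods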

  4. Nation vs. region: tensions in Venezuela’s post-collapse party system

    Directory of Open Access Journals (Sweden)

    Iñaki SAGARZAZU

    2011-11-01

    Full Text Available The collapse of the Venezuelan party system stirred controversy because it was considered one of the most consolidated political systems of Latin America. Several studies have analyzed the causes that contributed to this collapse. None, however, have studied the restructuring process that happened later. Through a study of all the electoral processes since 1958 this article shows the existence of tensions between forces that promote nationalization and regionalization strategies. With this analysis it’s possible to understand that partisan strategy has been essential in the nationalization/regionalization process of the different post-collapse parties.

  5. Security in the Cache and Forward Architecture for the Next Generation Internet

    Science.gov (United States)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

    The future Internet architecture will be composed predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping stone as we build the future Internet.

  6. A study on the effectiveness of lockup-free caches for a Reduced Instruction Set Computer (RISC) processor

    OpenAIRE

    Tharpe, Leonard.

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects on RISC performance of a lockup-free cache interface. RISC architectures achieve high performance by having a small, but sufficient, instruction set with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 CPI. The goal of RISC is to attain a CPI of 1.0. The major hind...
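
    As a back-of-the-envelope illustration of the CPI figures quoted above (all parameters are assumed, not taken from the thesis), the sketch below compares effective CPI with a blocking cache, where every miss stalls the pipeline for the full penalty, against a lockup-free cache that overlaps part of that penalty with useful work:

        # Effective CPI with a blocking vs. a lockup-free (non-blocking) cache.
        base_cpi     = 1.0    # ideal RISC CPI with no memory stalls
        miss_rate    = 0.05   # data-cache misses per instruction (assumed)
        miss_penalty = 20     # cycles to service a miss (assumed)
        overlap      = 0.6    # fraction of the penalty a lockup-free cache hides (assumed)

        cpi_blocking    = base_cpi + miss_rate * miss_penalty
        cpi_lockup_free = base_cpi + miss_rate * miss_penalty * (1 - overlap)

        print(f"blocking cache:    CPI = {cpi_blocking:.2f}")    # 2.00
        print(f"lockup-free cache: CPI = {cpi_lockup_free:.2f}")  # 1.40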

  7. The modelling of national wealth of the Russia’s regions

    Directory of Open Access Journals (Sweden)

    Aleksandr Ivanovich Tatarkin

    2013-12-01

    Full Text Available In the article, the application of an approach based on the methodology of indicative analysis to modeling the condition of the national wealth of Russia's regions is substantiated, and its main advantages are shown. The research purposes are: to define the quality and development level of the national capital components of the constituent territories of the Russian Federation (the natural-resource, physical and human capital); to identify the reasons for the current situation; to define the contribution of each subject of the Russian Federation to the development of the country's national wealth; and to contribute to the development of individual approaches to managing the national capital components of each of Russia's regions. A methodology is given that allows transforming indicative indicators of different measures to a common scale and obtaining differentiated integrated estimates of the national capital components of each territorial subject of the Russian Federation according to the proposed classification. As an example, the assessment results for the human capital of the constituent territories of the Russian Federation in 2011 (in rating form) are presented, together with diagrams of its change over 2000-2011 for the subjects of the Russian Federation in the first and last places of the rating. A mechanism for assessing the contribution of particular components of human capital to its integrated estimate is also given.

  8. The People of Bear Hunter Speak: Oral Histories of the Cache Valley Shoshones Regarding the Bear River Massacre

    OpenAIRE

    Crawford, Aaron L.

    2007-01-01

    The Cache Valley Shoshone are the survivors of the Bear River Massacre, in which a battle between a group of U.S. volunteer troops from California and a Shoshone village degenerated into the worst Indian massacre in U.S. history, resulting in the deaths of over 200 Shoshones. The massacre occurred due to increasing tensions over land use between the Shoshones and the Mormon settlers. Following the massacre, the Shoshones attempted to settle in several different locations in Box Elder County, eventu...

  9. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For more in-depth, scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available at the following website: .

  10. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  11. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows one to freely select the degree to which data locality is provided. It may be used in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during run 2. The system has proven to be easily usable, while providing substantial performance improvements. Since confirming the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.
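
    A toy sketch of the scheduling idea behind such coordinated caches is given below; the worker names, file names, and data structures are invented for illustration and do not reproduce the prototype described above.

        # Locality-aware placement with coordinated worker caches: each worker
        # advertises the files it currently caches, and the coordinator prefers
        # a worker that already holds the job's input, falling back to remote reads.
        def pick_worker(job_inputs, worker_caches):
            best, best_hits = None, -1
            for worker, cached in worker_caches.items():
                hits = len(job_inputs & cached)
                if hits > best_hits:
                    best, best_hits = worker, hits
            return best, best_hits

        worker_caches = {
            "node01": {"run2_jetA.root", "run2_jetB.root"},
            "node02": {"run2_muon.root"},
        }
        job = {"run2_jetA.root", "run2_jetC.root"}
        worker, hits = pick_worker(job, worker_caches)   # node01: one cached input
        worker_caches[worker] |= job   # missing files are fetched and cached for later jobs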

  12. Cooking up the Culinary Nation or Savoring its Regions? Teaching Food Studies in Vietnam

    Directory of Open Access Journals (Sweden)

    Christopher Annear

    2018-05-01

    Full Text Available “In food, as in death, we feel the essential brotherhood of man” (Vietnamese proverb). This paper explores whether or not there is an identifiably Vietnamese national cuisine, one in which the ingredients, recipes, and/or dishes socially, culturally, and politically unite Vietnamese people. It contends that Vietnam, with its long history of foreign invaders, its own appropriation of the middle and southern regions, and its varied regional geographies, provides a critical example for Food Studies of the need to interrogate the idea of a national cuisine and to differentiate it from regional and local cuisines. The paper examines how cookbook authors and cooking schools have more generally sought to represent Vietnamese dishes as national, but that there is a strong argument against the claim of a Vietnamese national cuisine. We advocate a Food Studies methodology that creates an effective pedagogy that explores whether or not national populations are unified as single gastro-states or atomized by a plurality of regional cuisines. Through experiential assignments and student work we illustrate how Food Studies presents the pedagogical opportunity for students to study and learn at the intersection of national politics and the everyday lives of people, providing a framework for understanding connections of labor, gender, class, and, essentially, taste, among many other values. In the case of Vietnamese food, the critical details of ingredients, preparation, and consumption both reveal and conceal truths about the Vietnamese people.

  13. National and regional asthma programmes in Europe.

    Science.gov (United States)

    Selroos, Olof; Kupczyk, Maciej; Kuna, Piotr; Łacwik, Piotr; Bousquet, Jean; Brennan, David; Palkonen, Susanna; Contreras, Javier; FitzGerald, Mark; Hedlin, Gunilla; Johnston, Sebastian L; Louis, Renaud; Metcalf, Leanne; Walker, Samantha; Moreno-Galdó, Antonio; Papadopoulos, Nikolaos G; Rosado-Pinto, José; Powell, Pippa; Haahtela, Tari

    2015-09-01

    This review presents seven national asthma programmes to support the European Asthma Research and Innovation Partnership in developing strategies to reduce asthma mortality and morbidity across Europe. From published data it appears that in order to influence asthma care, national/regional asthma programmes are more effective than conventional treatment guidelines. An asthma programme should start with the universal commitments of stakeholders at all levels and the programme has to be endorsed by political and governmental bodies. When the national problems have been identified, the goals of the programme have to be clearly defined with measures to evaluate progress. An action plan has to be developed, including defined re-allocation of patients and existing resources, if necessary, between primary care and specialised healthcare units or hospital centres. Patients should be involved in guided self-management education and structured follow-up in relation to disease severity. The three evaluated programmes show that, thanks to rigorous efforts, it is possible to improve patients' quality of life and reduce hospitalisation, asthma mortality, sick leave and disability pensions. The direct and indirect costs, both for the individual patient and for society, can be significantly reduced. The results can form the basis for development of further programme activities in Europe. Copyright ©ERS 2015.

  14. National and regional asthma programmes in Europe

    Directory of Open Access Journals (Sweden)

    Olof Selroos

    2015-09-01

    Full Text Available This review presents seven national asthma programmes to support the European Asthma Research and Innovation Partnership in developing strategies to reduce asthma mortality and morbidity across Europe. From published data it appears that in order to influence asthma care, national/regional asthma programmes are more effective than conventional treatment guidelines. An asthma programme should start with the universal commitments of stakeholders at all levels and the programme has to be endorsed by political and governmental bodies. When the national problems have been identified, the goals of the programme have to be clearly defined with measures to evaluate progress. An action plan has to be developed, including defined re-allocation of patients and existing resources, if necessary, between primary care and specialised healthcare units or hospital centres. Patients should be involved in guided self-management education and structured follow-up in relation to disease severity. The three evaluated programmes show that, thanks to rigorous efforts, it is possible to improve patients' quality of life and reduce hospitalisation, asthma mortality, sick leave and disability pensions. The direct and indirect costs, both for the individual patient and for society, can be significantly reduced. The results can form the basis for development of further programme activities in Europe.

  15. Regional and national radiation protection activities in Egypt

    International Nuclear Information System (INIS)

    Gomaa, M.A.M.

    2008-01-01

    Radiation protection activities in Egypt go back to 1957, when the Egyptian Atomic Energy Commission (EAEC) Law was issued. The radiation protection and civil defense department was one of the EAEC's eight departments. The ionizing radiation law was issued in 1960 and its executive regulation in 1962. The main aim of the present work is to throw some light on current radiation protection activities in Egypt. This includes the role not only of governmental organizations but also of non-governmental organizations. Currently a new nuclear safety law is under study. Regional activities include the second all-African IRPA regional radiation protection congress, held in April 2007, while national training courses and workshops are held regularly through EAEA, AAEA and MERRCAC. (author)

  16. The potential of the Kakadu National Park Region

    International Nuclear Information System (INIS)

    1988-11-01

    The Committee reviewed the potential of the Kakadu National Park region in the Northern Territory with particular reference to the nature of the resources available for exploitation and the impact of utilisation of these resources, particularly mining and tourism. Individual chapters discuss the Park, tourism, mineral resources (particularly the environmental and economic impacts of the Ranger Uranium Mine and the potential impacts of mining the Koongarra and Jabiluka deposits), the town of Jabiru, commercial fishing, other issues (the scientific resource, crocodiles, introduced species and fire), and park management and control (including a review of the role of the Office of the Supervising Scientist for the Alligator Rivers Region). A number of recommendations are made and the dissenting report of three of the Committee's members is included

  17. NACP Regional: National Greenhouse Gas Inventories and Aggregated Gridded Model Data

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides two products that were derived from the recently published North American Carbon Program (NACP) Regional Synthesis 1-degree terrestrial...

  18. Coal and energy: a southern perspective. Regional characterization report for the National Coal Utilization Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Boercker, F. D.; Davis, R. M.; Goff, F. G.; Olson, J. S.; Parzyck, D. C.

    1977-08-01

    This publication is the first of several reports to be produced for the National Coal Utilization Assessment, a program sponsored by the Assistant Administrator for Environment and Safety through the Division of Technology Overview of ERDA. The purpose of the report is to present the state and regional perspective on energy-related issues, especially those concerning coal production and utilization for 12 southern states. This report compiles information on the present status of: (1) state government infrastructure that deals with energy problems; (2) the balance between energy consumption and energy production; (3) the distribution of proved reserves of various mineral energy resources; (4) the major characteristics of the population; (5) the important features of the environment; and (6) the major constraints to increased coal production and utilization as perceived by the states and regional agencies. Many energy-related characteristics described vary significantly from state to state within the region. Regional and national generalizations obscure these important local variations. The report provides the state and regional perspective on energy issues so that these issues may be considered objectively and incorporated into the National Coal Utilization Assessment. This Assessment is designed to provide useful outputs for national, regional, and local energy planners.

  19. Receptor model source attributions for Utah’s Salt Lake City airshed and the impacts of wintertime secondary ammonium nitrate and ammonium chloride aerosol.

    Science.gov (United States)

    Communities along Utah’s Wasatch Front are currently developing strategies to reduce daily average PM2.5 levels to below National Ambient Air Quality Standards during wintertime, persistent, multi-day stable atmospheric conditions or cold-air pools. Speciated PM2.5 data from the ...

  20. CONSTRUCTION TECHNOLOGY OF UKRAINIAN NATIONAL HOUSING (PRYDNIPROVSK REGION AS AN EXAMPLE)

    Directory of Open Access Journals (Sweden)

    EYVSEYEVA G. P.

    2016-03-01

    Full Text Available Problem statement. Nowadays it is difficult to see a typical old peasant house or other types of traditional national buildings. It will take only a little time before some monuments of national architecture will be difficult to find. Meanwhile, rural housing was the most massive object of traditional construction. It embodies the best achievements and experience of national builders; it is of great value for the history of Ukrainian culture, the history of Ukrainian art, architecture and ethnography, and sustainable construction. The national art of peasant house construction in the Prydniprovsk region of Ukraine is multidimensional in space and time: within the array of hand-made Ukrainian art it covers the national architecture, its decoration, the clothing and filling of the interior of the house and estate, as well as the plastic and spatial formation determining the ritual loci of family life in the Ukrainian village from ancient times to the present. Analysis of publications. The first publications about Ukrainian national housing appeared in the late nineteenth and early twentieth centuries. These were the works of the ethnographers and historians M.Sumtsova [17] and D. Bagalіya [1-4] and, to a lesser extent, G.Lukomskogo [12]. B. Kharuzin's work [19] is also interesting in the context of our study. Interesting materials were found in a series of publications that appeared at the end of the nineteenth and the beginning of the twentieth centuries, associated with the vital trend of building fire-resistant housing, and the Ukrainian peasant house was such housing. "Nowadays such peasant houses and storehouses are built because they are cheap, strong and good, and, most important, resistant to fire. Rich people try to build houses of brick and stone, while houses of clay and saman are built by poor people; they are elegant, strong, cheap, long-lasting and non-flammable," as stated in the foreword to a small edition by I. Ulashivsky, "Saman building" [18]. A small booklet "Valkovi

  1. Status of national health research systems in ten countries of the WHO African Region

    Directory of Open Access Journals (Sweden)

    Kirigia Joses M

    2006-10-01

    Full Text Available Abstract Background The World Health Organization (WHO) Regional Committee for Africa, in 1998, passed a resolution (AFR/RC48/R4) which urged its Member States in the Region to develop national research policies and strategies and to build national health research capacities, particularly through resource allocation, training of senior officials, strengthening of research institutions and establishment of coordination mechanisms. The purpose of this study was to take stock of some aspects of national resources for health research in the countries of the Region; identify current constraints facing national health research systems; and propose the way forward. Methods A questionnaire was prepared and sent by pouch to all the 46 Member States in the WHO African Region through the WHO Country Representatives for facilitation and follow up. The health research focal person in each country's Ministry of Health (in consultation with other relevant health research bodies in the country) bore the responsibility for completing the questionnaire. The data were entered and analysed in an Excel spreadsheet. Results The key findings were as follows: the response rate was 21.7% (10/46); three countries had a health research policy; one country reported that it had a law relating to health research; two countries had a strategic health research plan; three countries reported that they had a functional national health research system (NHRS); two countries confirmed the existence of a functional national health research management forum (NHRMF); six countries had a functional ethical review committee (ERC); five countries had a scientific review committee (SRC); five countries reported the existence of health institutions with institutional review committees (IRC); two countries had a health research programme; and three countries had a national health research institute (NHRI) and a faculty of health sciences in the national university that conducted health research

  2. National Uranium Resource Evaluation: Newcastle Quadrangle, Wyoming and South Dakota

    International Nuclear Information System (INIS)

    Santos, E.S.; Robinson, K.; Geer, K.A.; Blattspieler, J.G.

    1982-09-01

    Uranium resources of the Newcastle 1° x 2° Quadrangle, Wyoming and South Dakota were evaluated to a depth of 1500 m (5000 ft) using available surface and subsurface geologic information. Many of the uranium occurrences reported in the literature and in reports of the US Atomic Energy Commission were located, sampled and described. Areas of anomalous radioactivity, interpreted from an aerial radiometric survey, were outlined. Areas favorable for uranium deposits in the subsurface were evaluated using gamma-ray logs. Based on surface and subsurface data, two areas have been delineated which are underlain by rocks deemed favorable as hosts for uranium deposits. One of these is underlain by rocks that contain fluvial arkosic facies in the Wasatch and Fort Union Formations of Tertiary age; the other is underlain by rocks containing fluvial quartzose sandstone facies of the Inyan Kara Group of Early Cretaceous age. Unfavorable environments characterize all rock units of Tertiary age above the Wasatch Formation, all rock units of Cretaceous age above the Inyan Kara Group, and most rock units of Mesozoic and Paleozoic age below the Inyan Kara Group. Unfavorable environments characterize all rock units of Cretaceous age above the Inyan Kara Group, and all rock units of Mesozoic and Paleozoic age below the Inyan Kara Group

  3. National Uranium Resource Evaluation: Newcastle Quadrangle, Wyoming and South Dakota

    Energy Technology Data Exchange (ETDEWEB)

    Santos, E S; Robinson, K; Geer, K A; Blattspieler, J G

    1982-09-01

    Uranium resources of the Newcastle 1° x 2° Quadrangle, Wyoming and South Dakota were evaluated to a depth of 1500 m (5000 ft) using available surface and subsurface geologic information. Many of the uranium occurrences reported in the literature and in reports of the US Atomic Energy Commission were located, sampled and described. Areas of anomalous radioactivity, interpreted from an aerial radiometric survey, were outlined. Areas favorable for uranium deposits in the subsurface were evaluated using gamma-ray logs. Based on surface and subsurface data, two areas have been delineated which are underlain by rocks deemed favorable as hosts for uranium deposits. One of these is underlain by rocks that contain fluvial arkosic facies in the Wasatch and Fort Union Formations of Tertiary age; the other is underlain by rocks containing fluvial quartzose sandstone facies of the Inyan Kara Group of Early Cretaceous age. Unfavorable environments characterize all rock units of Tertiary age above the Wasatch Formation, all rock units of Cretaceous age above the Inyan Kara Group, and most rock units of Mesozoic and Paleozoic age below the Inyan Kara Group. Unfavorable environments characterize all rock units of Cretaceous age above the Inyan Kara Group, and all rock units of Mesozoic and Paleozoic age below the Inyan Kara Group.

  4. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Energy Technology Data Exchange (ETDEWEB)

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.

  5. National and Regional Competitiveness in the Crisis Context. Successful Examples

    Directory of Open Access Journals (Sweden)

    Gina Cristina DIMIAN

    2011-11-01

    Full Text Available The paper addresses the issue of national and regional competitiveness in the context of the socio-economic and financial crisis. Competitiveness is a complex concept which can be studied at the firm level as well as at the local and national levels. Thus, in economic terms competitiveness is most often associated with the productivity or efficiency with which inputs are transformed into goods and services. As for regional competitiveness, it should be analyzed in terms of results (revenue, employment) and in relation to its determinants: ranging from the classical production factors (capital, labour, technological progress) to the “soft” factors (human capital, research and development, dissemination of knowledge). The current economic environment has revealed that countries such as China, India, Brazil and also the Czech Republic and Poland, following prudent economic policies, have managed to turn macroeconomic stability and investment in education and research into some of their major drivers of economic growth.

  6. CHINA'S INTERNATIONAL TOURISM UNDER ECONOMIC TRANSITION: NATIONAL TRENDS AND REGIONAL DISPARITIES

    OpenAIRE

    Liang, Chyi-Lyi (Kathleen); Guo, Rong; Wang, Qingbin

    2003-01-01

    China's Tourism industry, especially international tourism, has expanded rapidly since its market-oriented economic reform started in 1978. There has been limited information regarding the trends and regional disparities. This paper examines the national trends of China's international tourism since 1982 and analyzes the changes in regional disparities since 1995. While the trend analysis suggests that China's international tourism is likely to keep growing at a significant rate, the analysis...

  7. Temporal locality optimizations for stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Energy Technology Data Exchange (ETDEWEB)

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.
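
    A hand-written sketch of the kind of cache-blocking transformation referred to above is shown below; in the frameworks discussed it would be generated automatically by the array-class library rather than written by hand, and the 5-point stencil and block sizes are assumptions.

        # Cache blocking (tiling) for a 2-D 5-point stencil sweep: the tiled loops
        # work on BI x BJ blocks so each block stays cache-resident while in use.
        import numpy as np

        def stencil_tiled(u, sweeps=4, BI=64, BJ=64):
            n, m = u.shape
            v = u.copy()
            for _ in range(sweeps):
                for i0 in range(1, n - 1, BI):            # loop over blocks
                    for j0 in range(1, m - 1, BJ):
                        i1, j1 = min(i0 + BI, n - 1), min(j0 + BJ, m - 1)
                        for i in range(i0, i1):           # work inside one block
                            for j in range(j0, j1):
                                v[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                                  + u[i, j - 1] + u[i, j + 1])
                u, v = v, u                               # swap buffers between sweeps
            return u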

  8. Avaliação do compartilhamento das memórias cache no desempenho de arquiteturas multi-core

    OpenAIRE

    Marco Antonio Zanata Alves

    2009-01-01

    In the current context of multi-core innovation, in which new integration technologies provide a growing number of transistors per chip, the study of techniques for increasing data throughput is of paramount importance for current and future multi-core and many-core processors. With the continuing demand for computational performance, cache memories have been widely adopted in the various types of computer architecture designs. The processors currently available on the market...

  9. An integrated GIS/remote sensing data base in North Cache soil conservation district, Utah: A pilot project for the Utah Department of Agriculture's RIMS (Resource Inventory and Monitoring System)

    Science.gov (United States)

    Wheeler, D. J.; Ridd, M. K.; Merola, J. A.

    1984-01-01

    A basic geographic information system (GIS) for the North Cache Soil Conservation District (SCD) was sought for selected resource problems. Since the resource management issues in the North Cache SCD are very complex, it is not feasible in the initial phase to generate all the physical, socioeconomic, and political baseline data needed for resolving all management issues. A selection of critical variables becomes essential. Thus, there are four specific objectives: (1) assess resource management needs and determine which resource factors are most fundamental for building a beginning data base; (2) evaluate the variety of data gathering and analysis techniques for the resource factors selected; (3) incorporate the resulting data into a useful and efficient digital data base; and (4) demonstrate the application of the data base to selected real world resource management issues.

  10. THE NATIONAL ONCOLOGICAL PROGRAM IMPLEMENTATION – EXPERIENCE IN THE VLADIMIR REGION (TO THE 70TH ANNIVERSARY OF THE REGIONAL CLINICAL ONCOLOGICAL DISPENSARY)

    Directory of Open Access Journals (Sweden)

    A. G. Zirin

    2017-01-01

    Full Text Available By 2010, against the background of a steady increase in the incidence of malignant tumors in the Vladimir area, primary oncological care was working inefficiently. The condition of the material and technical base of the Vladimir Regional Clinical Oncological Dispensary was also unsatisfactory. All these problems required a solution in the form of implementing the National Oncology Program in the region. The National Oncology Program has been operating in the Vladimir region since 2011. Target indicators for the program's implementation by 2015 were established: increase the 5-year survival of patients with malignant tumors after the date of diagnosis to 51.4%; increase the share of malignant tumors detected early, at stages I–II, to 51%; decrease the mortality rate of the working-age population to 99 per 100 000; and decrease mortality within one year of the first cancer diagnosis to 27%. To achieve these goals, the following main objectives were pursued: radical improvement of the material and technical base of the oncology dispensary; introduction and improvement of modern methods of prevention, diagnosis and treatment; and creation of a system of population cancer care focused on early detection and the provision of specialized combined antitumor treatment. Implementation of these tasks made it possible to achieve positive dynamics in the cancer care indicators of the Vladimir region population. All the main targets of the National Oncology Program for the Vladimir region were achieved successfully. Implementation of the National Oncology Program has had an extremely positive effect on the development of cancer services, as well as on the health of the entire population of the Vladimir region.

  11. Clock generation and distribution for the 130-nm Itanium® 2 processor with 6-MB on-die L3 cache

    CERN Document Server

    Tam, S; Limaye, R D

    2004-01-01

    The clock generation and distribution system for the 130-nm Itanium 2 processor operates at 1.5 GHz with a skew of 24 ps. The Itanium 2 processor features 6 MB of on-die L3 cache and has a die size of 374 mm². Fuse-based clock de-skew enables post-silicon clock optimization to gain higher frequency. This paper describes the clock generation, global clock distribution, local clocking, and the clock skew optimization feature.

  12. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.
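
    As a purely illustrative companion to the model discussed above, the sketch below fits a stretched exponential of the form f(t) = exp(-(t/tau)**beta) to synthetic data; the data, parameter values, and the authors' exact modified form are not reproduced here.

        # Fit a stretched exponential f(t) = exp(-(t / tau)**beta) to synthetic data.
        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, tau, beta):
            return np.exp(-(t / tau) ** beta)

        t = np.linspace(0.01, 10.0, 200)
        noisy = stretched_exp(t, tau=2.0, beta=0.7) + np.random.normal(scale=0.01, size=t.size)

        popt, _ = curve_fit(stretched_exp, t, noisy, p0=(1.0, 1.0))
        print("estimated tau=%.2f, beta=%.2f" % tuple(popt))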

  13. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  14. Contrasting patterns of survival and dispersal in multiple habitats reveal an ecological trap in a food-caching bird.

    Science.gov (United States)

    Norris, D Ryan; Flockhart, D T Tyler; Strickland, Dan

    2013-11-01

    A comprehensive understanding of how natural and anthropogenic variation in habitat influences populations requires long-term information on how such variation affects survival and dispersal throughout the annual cycle. Gray jays Perisoreus canadensis are widespread boreal resident passerines that use cached food to survive over the winter and to begin breeding during the late winter. Using multistate capture-recapture analysis, we examined apparent survival and dispersal in relation to habitat quality in a gray jay population over 34 years (1977-2010). Prior evidence suggests that natural variation in habitat quality is driven by the proportion of conifers on territories because of their superior ability to preserve cached food. Although neither adults (>1 year) nor juveniles (<1 year) showed higher apparent survival on high-conifer territories, both age classes were less likely to leave high-conifer territories and, when they did move, were more likely to disperse to high-conifer territories. In contrast, survival rates were lower on territories that were adjacent to a major highway compared to territories that did not border the highway, but there was no evidence for directional dispersal towards or away from highway territories. Our results support the notion that natural variation in habitat quality is driven by the proportion of coniferous trees on territories and provide the first evidence that high-mortality highway habitats can act as an equal-preference ecological trap for birds. Reproductive success, as shown in a previous study, but not survival, is sensitive to natural variation in habitat quality, suggesting that gray jays, despite living in harsh winter conditions, likely favor the allocation of limited resources towards self-maintenance over reproduction.

  15. Implementation and integration of regional health care data networks in the Hellenic National Health Service.

    Science.gov (United States)

    Lampsas, Petros; Vidalis, Ioannis; Papanikolaou, Christos; Vagelatos, Aristides

    2002-12-01

    Modern health care is provided with close cooperation among many different institutions and professionals, using their specialized expertise in a common effort to deliver best-quality and, at the same time, cost-effective services. Within this context of the growing need for information exchange, the demand for realization of data networks interconnecting various health care institutions at a regional level, as well as a national level, has become a practical necessity. To present the technical solution that is under consideration for implementing and interconnecting regional health care data networks in the Hellenic National Health System. The most critical requirements for deploying such a regional health care data network were identified as: fast implementation, security, quality of service, availability, performance, and technical support. The solution proposed is the use of proper virtual private network technologies for implementing functionally-interconnected regional health care data networks. The regional health care data network is considered to be a critical infrastructure for further development and penetration of information and communication technologies in the Hellenic National Health System. Therefore, a technical approach was planned, in order to have a fast cost-effective implementation, conforming to certain specifications.

  16. Coordinating across scales: Building a regional marsh bird monitoring program from national and state Initiatives

    Science.gov (United States)

    Shriver, G.W.; Sauer, J.R.

    2008-01-01

    Salt marsh breeding bird populations (rails, bitterns, sparrows, etc.) in eastern North America are high conservation priorities in need of site specific and regional monitoring designed to detect population changes over time. The present status and trends of these species are unknown but anecdotal evidence of declines in many of the species has raised conservation concerns. Most of these species are listed as conservation priorities on comprehensive wildlife plans throughout the eastern U.S. National Wildlife Refuges, National Park Service units, and other wildlife conservation areas provide important salt marsh habitat. To meet management needs for these areas, and to assist regional conservation planning, survey designs are being developed to estimate abundance and population trends for these breeding bird species. The primary purpose of this project is to develop a hierarchical sampling frame for salt marsh birds in Bird Conservation Region (BCR) 30 that will provide the ability to estimate species population abundances on 1) specific sites (i.e. National Parks and National Wildlife Refuges), 2) within states or regions, and 3) within BCR 30. The entire breeding range of Saltmarsh Sharp-tailed and Coastal Plain Swamp sparrows are within BCR 30, providing an opportunity to detect population trends within the entire breeding ranges of two priority species.

  17. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users’ queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is supplied (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). From this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  18. Paleoseismology of the Nephi Segment of the Wasatch Fault Zone, Juab County, Utah - Preliminary Results From Two Large Exploratory Trenches at Willow Creek

    Science.gov (United States)

    Machette, Michael N.; Crone, Anthony J.; Personius, Stephen F.; Mahan, Shannon; Dart, Richard L.; Lidke, David J.; Olig, Susan S.

    2007-01-01

    In 2004, we identified a small parcel of U.S. Forest Service land at the mouth of Willow Creek (about 5 km west of Mona, Utah) that was suitable for trenching. At the Willow Creek site, which is near the middle of the southern strand of the Nephi segment, the WFZ has vertically displaced alluvial-fan deposits >6-7 m, forming large, steep, multiple-event scarps. In May 2005, we dug two 4- to 5-m-deep backhoe trenches at the Willow Creek site, identified three colluvial wedges in each trench, and collected samples of charcoal and A-horizon organic material for AMS (accelerator mass spectrometry) radiocarbon dating, and sampled fine-grained eolian and colluvial sediment for luminescence dating. The trenches yielded a stratigraphic assemblage composed of moderately coarse-grained fluvial and debris-flow deposits and discrete colluvial wedges associated with three faulting events (P1, P2, and P3). About one-half of the net vertical displacement is accommodated by monoclinal tilting of fan deposits on the hanging-wall block, possibly related to massive ductile landslide deposits that are present beneath the Willow Creek fan. The timing of the three surface-faulting events is bracketed by radiocarbon dates and results in a much different fault chronology and higher slip rates than previously considered for this segment of the Wasatch fault zone.

  19. Potential Mechanisms Driving Population Variation in Spatial Memory and the Hippocampus in Food-caching Chickadees.

    Science.gov (United States)

    Croston, Rebecca; Branch, Carrie L; Kozlovsky, Dovid Y; Roth, Timothy C; LaDage, Lara D; Freas, Cody A; Pravosudov, Vladimir V

    2015-09-01

    Harsh environments and severe winters have been hypothesized to favor improvement of the cognitive abilities necessary for successful foraging. Geographic variation in winter climate, then, is likely associated with differences in selection pressures on cognitive ability, which could lead to evolutionary changes in cognition and its neural mechanisms, assuming that variation in these traits is heritable. Here, we focus on two species of food-caching chickadees (genus Poecile), which rely on stored food for survival over winter and require the use of spatial memory to recover their stores. These species also exhibit extensive climate-related population level variation in spatial memory and the hippocampus, including volume, the total number and size of neurons, and adults' rates of neurogenesis. Such variation could be driven by several mechanisms within the context of natural selection, including independent, population-specific selection (local adaptation), environment experience-based plasticity, developmental differences, and/or epigenetic differences. Extensive data on cognition, brain morphology, and behavior in multiple populations of these two species of chickadees along longitudinal, latitudinal, and elevational gradients in winter climate are most consistent with the hypothesis that natural selection drives the evolution of local adaptations associated with spatial memory differences among populations. Conversely, there is little support for the hypotheses that environment-induced plasticity or developmental differences are the main causes of population differences across climatic gradients. Available data on epigenetic modifications of memory ability are also inconsistent with the observed patterns of population variation, with birds living in more stressful and harsher environments having better spatial memory associated with a larger hippocampus and a larger number of hippocampal neurons. Overall, the existing data are most consistent with the

  20. Workshop on assessments of National Carbon Budgets within the Nordic Region

    DEFF Research Database (Denmark)

    Hansen, Kristina; Koyama, Aki; Lansø, Anne Sofie

    The three-day workshop organized by the three Nordic research projects ECOCLIM, LAGGE, and SnowCarbo brought together scientists and other actors from Nordic countries to communicate and discuss research on carbon budget estimations in the Nordic region. Through presentations of the most recent … research in the field and the scientific discussions that followed, the workshop contributed to strengthening the scientific basis for the identification and quantification of major natural carbon sinks in the Nordic region, on which integrated climate change abatement and management strategies and policy decisions … status and knowledge on research on assessments of national carbon budgets, as well as on projections and sensitivity to future changes in e.g. management and climate change in the Nordic Region…

  1. Workshop on assessments of National Carbon Budgets within the Nordic Region

    DEFF Research Database (Denmark)

    Mørk, Eva Thorborg; Lansø, Anne Sofie; Hansen, Kristina

    2013-01-01

    The three-day workshop organized by the three Nordic research projects ECOCLIM, LAGGE, and SnowCarbo brought together scientists and other actors from Nordic countries to communicate and discuss research on carbon budget estimations in the Nordic region. Through presentations of the most recent … research in the field and the scientific discussions that followed, the workshop contributed to strengthening the scientific basis for the identification and quantification of major natural carbon sinks in the Nordic region, on which integrated climate change abatement and management strategies and policy decisions … status and knowledge on research on assessments of national carbon budgets, as well as on projections and sensitivity to future changes in e.g. management and climate change in the Nordic Region…

  2. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Full Text Available Field-programmable gate arrays (FPGAs and other reconfigurable computing (RC devices have been widely shown to have numerous advantages including order of magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures. In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
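    To make the traversal-cache idea concrete, the following minimal sketch (not the paper's implementation; all names are hypothetical) serializes a pointer-based traversal into a flat buffer the first time it runs, so that repeated traversals can be streamed sequentially instead of chasing pointers, which is the property the FPGA framework exploits.

```python
# Illustrative traversal cache: serialize a pointer-chasing traversal once,
# then serve repeated traversals from the flat (streamable) buffer.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class TraversalCache:
    def __init__(self):
        self._cache = {}          # id(head) -> serialized list of values

    def traverse(self, head):
        key = id(head)
        if key in self._cache:    # repeated traversal: stream the flat buffer
            return self._cache[key]
        buffer = []               # first traversal: follow pointers and record
        node = head
        while node is not None:
            buffer.append(node.value)
            node = node.next
        self._cache[key] = buffer
        return buffer

    def invalidate(self, head):
        self._cache.pop(id(head), None)   # call after the structure changes

# Build a small linked list 0 -> 1 -> ... -> 4 and traverse it twice.
head = None
for v in reversed(range(5)):
    head = Node(v, head)

tc = TraversalCache()
print(tc.traverse(head))   # pointer-chasing pass, result cached
print(tc.traverse(head))   # served from the serialized buffer
```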

  3. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

    Full Text Available We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus, which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were near (within 50 m or far (200 m from the railway. Relative to far ones, near transects had 2.4 times more squirrel sightings, but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  4. Geologic Map of the Shenandoah National Park Region, Virginia

    Science.gov (United States)

    Southworth, Scott; Aleinikoff, John N.; Bailey, Christopher M.; Burton, William C.; Crider, E.A.; Hackley, Paul C.; Smoot, Joseph P.; Tollo, Richard P.

    2009-01-01

    The geology of the Shenandoah National Park region of Virginia was studied from 1995 to 2008. The focus of the study was the park and surrounding areas to provide the National Park Service with modern geologic data for resource management. Additional geologic data of the adjacent areas are included to provide regional context. The geologic map can be used to support activities such as ecosystem delineation, land-use planning, soil mapping, groundwater availability and quality studies, aggregate resources assessment, and engineering and environmental studies. The study area is centered on the Shenandoah National Park, which is mostly situated in the western part of the Blue Ridge province. The map covers the central section and western limb of the Blue Ridge-South Mountain anticlinorium. The Skyline Drive and Appalachian National Scenic Trail straddle the drainage divide of the Blue Ridge highlands. Water drains northwestward to the South Fork of the Shenandoah River and southeastward to the James and Rappahannock Rivers. East of the park, the Blue Ridge is an area of low relief similar to the physiography of the Piedmont province. The Great Valley section of the Valley and Ridge province is west of the Blue Ridge and consists of Page Valley and Massanutten Mountain. The distribution and types of surficial deposits and landforms closely correspond to the different physiographic provinces and their respective bedrock. The Shenandoah National Park is underlain by three general groups of rock units: (1) Mesoproterozoic granitic gneisses and granitoids, (2) Neoproterozoic metasedimentary rocks of the Swift Run Formation and metabasalt of the Catoctin Formation, and (3) siliciclastic rocks of the Lower Cambrian Chilhowee Group. The gneisses and granitoids mostly underlie the lowlands east of the Blue Ridge but also underlie rugged peaks such as Old Rag Mountain (996 meters). Metabasalt underlies much of the highlands, such as Stony Man (1,200 meters). The siliciclastic rocks underlie linear

  5. Kinematical Comparison of the 200 m Backstroke Turns between National and Regional Level Swimmers

    Directory of Open Access Journals (Sweden)

    Santiago Veiga

    2013-12-01

    Full Text Available The aims of this investigation were to determine the evolution of selected turn variables during competitive backstroke races and to compare these kinematic variables between two different levels of swimmers. Sixteen national and regional level male swimmers participating in the 200 m backstroke event at the Spanish Swimming Championships in short course (25 m) were selected to analyze their turn performances. The individual distances method with two-dimensional Direct Linear Transformation (2D-DLT) algorithms was used to perform race analyses. National level swimmers presented a shorter “turn time”, a longer “distance in”, a faster “underwater velocity” and “normalized underwater velocity”, and a faster “stroking velocity” than regional level swimmers, whereas no significant differences were detected between levels for the “underwater distance”. National level swimmers maintained similar “turn times” over the event and increased “underwater velocity” and “normalized underwater velocity” in the last (seventh) turn segment, whereas regional level swimmers increased “turn time” in the last half of the race. For both national and regional level swimmers, turn “underwater distance” during the last three turns of the race was significantly shorter, while no significant differences in distance into the wall occurred throughout the race. The skill level of the swimmers has an impact on the competitive backstroke turn segments. In a 200 m event, the underwater velocity should be maximized to maintain turn proficiency, whereas turn distance must be subordinated to the average velocity.

  6. Detecting Precontact Anthropogenic Microtopographic Features in a Forested Landscape with Lidar: A Case Study from the Upper Great Lakes Region, AD 1000-1600.

    Science.gov (United States)

    Howey, Meghan C L; Sullivan, Franklin B; Tallant, Jason; Kopple, Robert Vande; Palace, Michael W

    2016-01-01

    Forested settings present challenges for understanding the full extent of past human landscape modifications. Field-based archaeological reconnaissance in forests is low-efficiency and most remote sensing techniques are of limited utility, and together, this means many past sites and features in forests are unknown. Archaeologists have increasingly used light detection and ranging (lidar), a remote sensing tool that uses pulses of light to measure reflecting surfaces at high spatial resolution, to address these limitations. Archaeology studies using lidar have made significant progress identifying permanent structures built by large-scale complex agriculturalist societies. Largely unaccounted for, however, are numerous small and more practical modifications of landscapes by smaller-scale societies. Here we show these may also be detectable with lidar by identifying remnants of food storage pits (cache pits) created by mobile hunter-gatherers in the upper Great Lakes during Late Precontact (ca. AD 1000-1600) that now only exist as subtle microtopographic features. Years of intensive field survey identified 69 cache pit groups between two inland lakes in northern Michigan, almost all of which were located within ~500 m of a lakeshore. Applying a novel series of image processing techniques and statistical analyses to a high spatial resolution DTM we created from commercial-grade lidar, our detection routine identified 139 high potential cache pit clusters. These included most of the previously known clusters as well as several unknown clusters located >1500 m from either lakeshore, much further from lakeshores than all previously identified cultural sites. Food storage is understood to have emerged regionally as a risk-buffering strategy after AD 1000 but our results indicate the current record of hunter-gatherer cache pit food storage is markedly incomplete and this practice and its associated impact on the landscape may be greater than anticipated. Our study also
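    As a rough illustration of how subtle pit-like microtopography can be flagged in a lidar-derived DTM, the sketch below (not the authors' detection routine; the threshold and synthetic terrain are assumptions) subtracts a smoothed trend surface so that shallow local depressions stand out, then keeps connected components of plausible cache-pit size.

```python
# Generic sketch of flagging small pit-like depressions in a DTM: subtract a
# smoothed copy of the surface so subtle microtopographic lows stand out,
# then threshold and size-filter the connected components.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dtm = np.cumsum(rng.normal(0, 0.02, (200, 200)), axis=1)  # synthetic gentle terrain
# Plant a few artificial "cache pits": shallow depressions ~0.3 m deep.
for r, c in [(50, 60), (120, 80), (160, 150)]:
    yy, xx = np.ogrid[-5:6, -5:6]
    dtm[r-5:r+6, c-5:c+6] -= 0.3 * np.exp(-(xx**2 + yy**2) / 8.0)

smoothed = ndimage.uniform_filter(dtm, size=31)   # local trend surface
residual = dtm - smoothed                          # negative = local depression

pit_mask = residual < -0.15                        # hypothetical depth threshold (m)
labels, n = ndimage.label(pit_mask)
sizes = ndimage.sum(pit_mask, labels, range(1, n + 1))
candidates = [i + 1 for i, s in enumerate(sizes) if 10 <= s <= 400]  # plausible pit sizes
print(f"{len(candidates)} candidate pit features detected")
```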

  7. A Yeast Purification System for Human Translation Initiation Factors eIF2 and eIF2B epsilon and Their Use in the Diagnosis of CACH/VWM Disease

    NARCIS (Netherlands)

    de Almeida, R.A.; Fogli, A.; Gaillard, M.; Scheper, G.C.; Boesflug-Tanguy, O.; Pavitt, G.D.

    2013-01-01

    Recessive inherited mutations in any of five subunits of the general protein synthesis factor eIF2B are responsible for a white mater neurodegenerative disease with a large clinical spectrum. The classical form is called Childhood Ataxia with CNS hypomyelination (CACH) or Vanishing White Matter

  8. 76 FR 38124 - Applications for New Awards; Americans With Disabilities Act (ADA) National Network Regional...

    Science.gov (United States)

    2011-06-29

    ... DEPARTMENT OF EDUCATION Applications for New Awards; Americans With Disabilities Act (ADA) National Network Regional Centers and ADA National Network Collaborative Research Projects AGENCY: Office... Rehabilitation Research Projects and Centers Program--Disability Rehabilitation Research Projects (DRRP)--ADA...

  9. National data centres and other means of regional cooperation in Africa: prospects and benefits

    International Nuclear Information System (INIS)

    Masawi, L.

    2002-01-01

    The participation of the Bulawayo National Data Centre in Zimbabwe in regional cooperation is noted. The East and Southern Africa Working Group (ESAWORG) is given as an example of such cooperation. The entry into force of the CTBT is expected to strengthen this group, and regional cooperation with it. Expected new developments are listed

  10. Regional Information Group (RIG). Energy, environmental, and socioeconomic data bases and associated software at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Loebl, A.S.; Malthouse, N.S.; Shonka, D.B.; Ogle, M.C.; Johnson, M.L.

    1976-10-01

    A machine readable data base has been created by the Regional Information Group, Regional and Urban Studies Section, Energy Division, Oak Ridge National Laboratory, to provide documentation for the energy, environmental, and socioeconomic data bases and associated software maintained at Oak Ridge National Laboratory. This document is produced yearly by the Regional Information Group to describe the contents and organization of this data base.

  11. Regional Information Group (RIG). Energy, environmental, and socioeconomic data bases and associated software at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Loebl, A.S.; Malthouse, N.S.; Shonka, D.B.; Ogle, M.C.; Johnson, M.L.

    1976-10-01

    A machine readable data base has been created by the Regional Information Group, Regional and Urban Studies Section, Energy Division, Oak Ridge National Laboratory, to provide documentation for the energy, environmental, and socioeconomic data bases and associated software maintained at Oak Ridge National Laboratory. This document is produced yearly by the Regional Information Group to describe the contents and organization of this data base

  12. Organizational factors in fire prevention: roles, obstacles, and recommendations

    Science.gov (United States)

    John R. Christiansen; William S. Folkman; Keith W. Warner; Michael L. Woolcott

    1976-01-01

    Problems being encountered in implementing fire prevention programs were explored by studying the organization for fire prevention at the Fish Lake, Uinta, and Wasatch National Forests in Utah. The study focused on role congruency in fire prevention activities and on the social and organizational obstacles to effective programs. The problems identified included lack of...

  13. Beginnings of range management: an anthology of the Sampson-Ellison photo plots (1913 to 2003) and a short history of the Great Basin Experiment Station

    Science.gov (United States)

    David A. Prevedel; E. Durant McArthur; Curtis M. Johnson

    2005-01-01

    High-elevation watersheds on the Wasatch Plateau in central Utah were severely overgrazed in the late 1800s, resulting in catastrophic flooding and mudflows through adjacent communities. Affected citizens petitioned the Federal government to establish a Forest Reserve (1902), and the Manti National Forest was established by the Transfer Act of 1905. The Great Basin...

  14. On the Perception of National Security Issues at Regional Level

    Directory of Open Access Journals (Sweden)

    Ponedelkov Aleksandr Vasilyevich

    2015-12-01

    Full Text Available The article explores the issue on the perception of the concept “national security” areas, models and methods of its maintenance by the population. The author uses materials of the sociological survey conducted by the Laboratory of problems of increasing the efficiency of state and municipal management of the South-Russian Institute of Management – branch of the Russian Presidential Academy of National Economy and Public Administration. The survey was carried out with the participation of leading experts in various aspects of national security, representing 27 Russian higher educational institutions and research centers in Moscow, Astrakhan, Barnaul, Belgorod, Dushanbe, Krasnodar, Nizhny Novgorod, Omsk, Pyatigorsk, Rostov-on-Don, Saint Petersburg, Syktyvkar, Sochi, Ufa. It is noted that as a priority political governance model that implements the basic concept of national security, respondents identified a democratic model. Most respondents believe that a unified security model in the Russian regions is ineffective, and such model should be developed taking into account the specificity of each subject. The study showed that the public’s attention to the issue of national security is not sustainable, as determined by situational factors. It is proved that the motives of anxiety formed in the Russian public mind are not sustainable, and situational. Respondents see the economic cooperation more effective incentive to maintain national interests than by force. Estimation of the population of the priority issues of security shows that most respondents appreciate the organization of work to ensure the safety and anti-terrorism security in the sphere of national relations. The findings give grounds to assert that the focus of public attention to the problem of national security does not yet occupy the leading positions. To a greater extent, respondents focused on the issues of public safety, reducing threats and risks in their daily lives

  15. 75 FR 48986 - Vendor Outreach Workshop for Small Businesses in the National Capitol Region of the United States

    Science.gov (United States)

    2010-08-12

    ... DEPARTMENT OF THE INTERIOR Office of the Secretary Vendor Outreach Workshop for Small Businesses in the National Capitol Region of the United States AGENCY: Office of the Secretary, Interior. ACTION... Interior are hosting a Vendor Outreach Workshop for small businesses in the National Capitol region of the...

  16. Assessing regional groundwater stress for nations using multiple data sources with the groundwater footprint

    International Nuclear Information System (INIS)

    Gleeson, Tom; Wada, Yoshihide

    2013-01-01

    Groundwater is a critical resource for agricultural production, ecosystems, drinking water and industry, yet groundwater depletion is accelerating, especially in a number of agriculturally important regions. Assessing the stress of groundwater resources is crucial for science-based policy and management, yet water stress assessments have often neglected groundwater and used single data sources, which may underestimate the uncertainty of the assessment. We consistently analyze and interpret groundwater stress across whole nations using multiple data sources for the first time. We focus on two nations with the highest national groundwater abstraction rates in the world, the United States and India, and use the recently developed groundwater footprint and multiple datasets of groundwater recharge and withdrawal derived from hydrologic models and data synthesis. A minority of aquifers, mostly with known groundwater depletion, show groundwater stress regardless of the input dataset. The majority of aquifers are not stressed with any input data while less than a third are stressed for some input data. In both countries groundwater stress affects agriculturally important regions. In the United States, groundwater stress impacts a lower proportion of the national area and population, and is focused in regions with lower population and water well density compared to India. Importantly, the results indicate that the uncertainty is generally greater between datasets than within datasets and that much of the uncertainty is due to recharge estimates. Assessment of groundwater stress consistently across a nation and assessment of uncertainty using multiple datasets are critical for the development of a science-based rationale for policy and management, especially with regard to where and to what extent to focus limited research and management resources. (letter)
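    For orientation, a minimal sketch of how a groundwater-footprint-style stress ratio can be computed is given below. It uses one common formulation (footprint = area × abstraction / (recharge − environmental flow), with stress indicated when the footprint exceeds the aquifer area); the function name and all numbers are illustrative assumptions, not data or code from this study.

```python
# Hedged sketch of the groundwater-footprint idea used in stress assessments:
# GF = A * C / (R - E); an aquifer is flagged as "stressed" when GF / A > 1.
# All numbers below are illustrative, not data from the study.

def groundwater_footprint(area_km2, abstraction, recharge, env_flow):
    """Return (footprint in km^2, stress ratio GF/A)."""
    available = recharge - env_flow           # recharge left for human use
    if available <= 0:
        return float("inf"), float("inf")     # any use is unsustainable
    gf = area_km2 * abstraction / available
    return gf, gf / area_km2

# Two hypothetical aquifers (fluxes in the same units, e.g., km^3/yr).
for name, a, c, r, e in [("Aquifer X", 50_000, 4.0, 10.0, 2.0),
                         ("Aquifer Y", 20_000, 6.0, 7.0, 2.0)]:
    gf, ratio = groundwater_footprint(a, c, r, e)
    status = "stressed" if ratio > 1 else "not stressed"
    print(f"{name}: GF = {gf:,.0f} km^2, GF/A = {ratio:.2f} -> {status}")
```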

  17. Regional Centres for Space Science and Technology Education Affiliated to the United Nations

    Science.gov (United States)

    Aquino, A. J. A.; Haubold, H. J.

    2010-05-01

    Based on resolutions of the United Nations General Assembly, Regional Centres for space science and technology education were established in India, Morocco, Nigeria, Brazil and Mexico. Simultaneously, education curricula were developed for the core disciplines of remote sensing, satellite communications, satellite meteorology, and space and atmospheric science. This paper provides a brief report on the status of the operation of the Regional Centres and draws attention to their educational activities.

  18. A Comprehensive Approach to Bi-National Regional Energy Planning in the Pacific Northwest

    Energy Technology Data Exchange (ETDEWEB)

    Matt Morrison

    2007-12-31

    The Pacific NorthWest Economic Region, a statutory organization chartered by the Northwest states of Alaska, Washington, Idaho, Montana, and Oregon, and the western Canadian provinces of British Columbia, Alberta, and the Yukon through its Energy Working Group launched a bi-national energy planning initiative designed to create a Pacific Northwest energy planning council of regional public/private stakeholders from both Canada and the US. There is an urgent need to deal with the comprehensive energy picture now before our hoped for economic recovery results in energy price spikes which are likely to happen because the current supply will not meet predicted demand. Also recent events of August 14th have shown that our bi-national energy grid system is intricately interdependent, and additional planning for future capacity is desperately needed.

  19. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.
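    The general collaborative-caching pattern underlying schemes of this kind can be illustrated with a small sketch (this is not the SCC algorithm itself; node names and the eviction rule are assumptions): a fog node tries its own cache, then its neighbours' caches, and only then the origin server, caching the result locally for subsequent requests.

```python
# Illustrative collaborative caching among fog nodes: local cache, then
# neighbouring caches, then the origin server as a last resort.
class FogNode:
    def __init__(self, name, capacity=4):
        self.name = name
        self.capacity = capacity
        self.cache = {}           # content name -> data
        self.neighbours = []

    def _store(self, key, data):
        if len(self.cache) >= self.capacity:      # naive eviction: drop oldest
            self.cache.pop(next(iter(self.cache)))
        self.cache[key] = data

    def get(self, key, origin):
        if key in self.cache:
            return self.cache[key], f"hit at {self.name}"
        for nb in self.neighbours:                # ask collaborating neighbours
            if key in nb.cache:
                data = nb.cache[key]
                self._store(key, data)            # cache locally for next time
                return data, f"hit at neighbour {nb.name}"
        data = origin[key]                        # last resort: origin server
        self._store(key, data)
        return data, "fetched from origin"

origin = {f"sensor/{i}": f"reading-{i}" for i in range(10)}
a, b = FogNode("fog-A"), FogNode("fog-B")
a.neighbours, b.neighbours = [b], [a]

print(a.get("sensor/3", origin))   # fetched from origin, cached at A
print(b.get("sensor/3", origin))   # hit at neighbour fog-A
print(b.get("sensor/3", origin))   # now a local hit at fog-B
```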

  20. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.

  1. A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries

    Science.gov (United States)

    Santos, Ricardo Jorge; Bernardino, Jorge

    On-line analytical processing against data warehouse databases is a common way of obtaining decision-making information in almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most frequently transacted items, etc. This means that many similar OLAP instructions are repeated periodically, and often simultaneously, by several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data which was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because we only need to process the most recent data. Our tool also minimizes the execution time and resource consumption for similar queries simultaneously executed by different users, putting the most recent ones on hold until the first finishes and then returning the results to all of them. The stored query results are held until they are considered outdated and are then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and use a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
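    The incremental idea described above (cache an aggregate together with a high-water mark of the data it covered, then aggregate only newer rows and merge) can be sketched in a few lines. This is an illustrative toy using SQLite, not the tool's actual code; the table, query key and timestamp column are assumptions.

```python
# Hedged sketch of the incremental query-cache idea: cache an aggregate plus
# the timestamp of the newest row it covered; repeated queries aggregate only
# rows inserted since then and merge with the cached partial result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL, ts INTEGER)")
conn.executemany("INSERT INTO sales (amount, ts) VALUES (?, ?)",
                 [(100.0, 1), (250.0, 2), (75.0, 3)])

cache = {}  # query key -> {"total": float, "last_ts": int}

def total_sales():
    entry = cache.get("total_sales", {"total": 0.0, "last_ts": 0})
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0), COALESCE(MAX(ts), ?) FROM sales WHERE ts > ?",
        (entry["last_ts"], entry["last_ts"])).fetchone()
    entry = {"total": entry["total"] + row[0], "last_ts": row[1]}
    cache["total_sales"] = entry          # refresh the cached partial result
    return entry["total"]

print(total_sales())                      # full scan the first time -> 425.0
conn.executemany("INSERT INTO sales (amount, ts) VALUES (?, ?)", [(60.0, 4)])
print(total_sales())                      # only the new row is aggregated -> 485.0
```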

  2. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies. PMID:29104219

  3. Governance and Regional Variation of Homicide Rates: Evidence From Cross-National Data.

    Science.gov (United States)

    Cao, Liqun; Zhang, Yan

    2017-01-01

    Criminological theories of cross-national studies of homicide have underestimated the effects of quality governance of liberal democracy and region. Data sets from several sources are combined and a comprehensive model of homicide is proposed. Results of the spatial regression model, which controls for the effect of spatial autocorrelation, show that quality governance, human development, economic inequality, and ethnic heterogeneity are statistically significant in predicting homicide. In addition, regions of Latin America and non-Muslim Sub-Saharan Africa have significantly higher rates of homicides ceteris paribus while the effects of East Asian countries and Islamic societies are not statistically significant. These findings are consistent with the expectation of the new modernization and regional theories. © The Author(s) 2015.

  4. Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks.

    Science.gov (United States)

    Yuan, Peiyan; Wu, Honghai; Zhao, Xiaoyan; Dong, Zhengnan

    2017-07-18

    The node buffer size has a large influence on the performance of Mobile Opportunistic Networks (MONs). This is mainly because each node should temporarily cache packets to deal with the intermittently connected links. In this paper, we study fundamental bounds on node buffer size below which the network system cannot achieve the expected performance such as the transmission delay and packet delivery ratio. Given the condition that each link has the same probability p of being active in the next time slot when the link is inactive, and q of being inactive when the link is active, there exists a critical value p_c from a percolation perspective. If p > p_c, the network is in the supercritical case, where we found that there is an achievable upper bound on the buffer size of nodes, independent of the inactive probability q. When p < p_c, the network is in the subcritical case, and there exists a closed-form solution for buffer occupation, which is independent of the size of the network.

  5. Headwaters, Wetlands, and Wildfires: Utilizing Landsat imagery, GIS, and Statistical Models for Mapping Wetlands in Northern Colorado's Cache la Poudre Watershed in the aftermath of the June 2012 High Park Fire

    Science.gov (United States)

    Chignell, S.; Skach, S.; Kessenich, B.; Weimer, A.; Luizza, M.; Birtwistle, A.; Evangelista, P.; Laituri, M.; Young, N.

    2013-12-01

    The June 2012 High Park Fire burned over 87,000 acres of forest and 259 homes to the west of Fort Collins, CO. The fire has had dramatic impacts on forest ecosystems; of particular concern are its effects on the Cache la Poudre watershed, as the Poudre River is one of the most important headwaters of the Colorado Front Range, providing important ecosystem and economic services before flowing into the South Platte, which in turn flows into the Missouri River. Within a week of the fire, the area received several days of torrential rains. This precipitation--in conjunction with steep riverbanks and the loss of vegetation by fire--caused soil and ash runoff to be deposited into the Poudre's channel, resulting in a river of choking mud and black sludge. Monitoring the effects of such wildfires is critical and requires establishing immediate baseline data to assess impacts over time. Of particular concern is the region's wetlands, which not only provide habitat for a rich array of flora and fauna, but help regulate river discharge, improve water quality, and aid in carbon sequestration. However, the high expense of field work and the changing nature of wetlands have left many of the area's wetland maps incomplete and in need of updating. Utilizing Landsat 5 and Landsat 8 imagery, ancillary GIS layers, and boosted regression trees modeling, the NASA DEVELOP team based at the North Central Climate Science Center at Colorado State University developed a methodology for wetland modeling within the Cache la Poudre watershed. These efforts produced a preliminary model of predicted wetlands across the landscape that correctly classified 89% of the withheld validation points and had a kappa value of approximately 0.78. This initial model is currently being refined and validated using the USGS Software for Assisted Habitat Modeling (SAHM) to run multiple models within three elevation-based 'life zones.' The ultimate goal of this ongoing project is to provide important spatial
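    To make the boosted-tree modeling step concrete, the sketch below trains a gradient-boosted classifier on synthetic stand-ins for spectral indices and terrain predictors and reports accuracy and kappa. It is a generic illustration, not the team's SAHM workflow; all variable names, thresholds and data are assumptions.

```python
# Generic sketch of boosted-tree wetland classification on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
ndvi = rng.uniform(-0.1, 0.9, n)        # hypothetical vegetation index
ndwi = rng.uniform(-0.5, 0.6, n)        # hypothetical wetness index
slope = rng.exponential(5.0, n)         # terrain slope in degrees
X = np.column_stack([ndvi, ndwi, slope])
# Synthetic truth: wetter, flatter, moderately vegetated points are wetlands.
y = ((ndwi > 0.1) & (slope < 4) & (ndvi > 0.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                   max_depth=3, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", (pred == y_test).mean())
print("kappa:", cohen_kappa_score(y_test, pred))
```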

  6. Anthropogenic Sulfur Dioxide Emissions, 1850-2005: National and Regional Data Set by Source Category, Version 2.86

    Data.gov (United States)

    National Aeronautics and Space Administration — The Anthropogenic Sulfur Dioxide Emissions, 1850-2005: National and Regional Data Set by Source Category, Version 2.86 provides annual estimates of anthropogenic...

  7. Strengthening Climate Services Capabilities and Regional Engagement at NOAA's National Climatic Data Center

    Science.gov (United States)

    Shea, E.

    2008-12-01

    The demand for sector-based climate information is rapidly expanding. In order to support this demand, it is crucial that climate information is managed in an effective, efficient, and user-conscious manner. NOAA's National Climatic Data Center is working closely with numerous partners to develop a comprehensive interface that is authoritative, accessible, and responsive to a variety of sectors, stakeholders, and other users. This talk will explore these dynamics and activities, with additional perspectives on climate services derived from the regional and global experiences of the NOAA Integrated Data and Environmental Applications (IDEA) Center in the Pacific. The author will explore the importance of engaging partners and customers in the development, implementation and emergence of a national climate service program. The presentation will draw on the author's experience in climate science and risk management programs in the Pacific, development of regional and national climate services programs and insights emerging from climate services development efforts in NCDC. In this context, the author will briefly discuss some of guiding principles for effective climate services and applications including: - Early and continuous dialogue, partnership and collaboration with users/customers; - Establishing and sustaining trust and credibility through a program of shared learning and joint problem- solving; - Understanding the societal context for climate risk management and using a problem-focused approach to the development of products and services; - Addressing information needs along a continuum of timescales from extreme events to long-term change; and - Embedding education, outreach and communications activities as critical program elements in effective climate services. By way of examples, the author will reference lessons learned from: early Pacific Island climate forecast applications and climate assessment activities; the implementation of the Pacific Climate

  8. Assessing the Status and Needs of Children and Youth in the National Capital Region

    Science.gov (United States)

    Murphey, David; Redd, Zakia; Moodie, Shannon; Knewstub, Dylan; Humble, Jill; Bell, Kelly; Cooper, Mae

    2012-01-01

    The National Capital Region (NCR) is home to more than one-and-a-half million children and youth (ages birth through 24 years). Although the NCR is known as a place with a highly transient population, if history is any guide, many of these young people will remain in this region and fundamentally shape the quality of life--not only for themselves,…

  9. Regional Integration of the Association of Southeast Asian Nations Economic Community: An Analysis of Malaysia - Association of Southeast Asian Nations Exports

    OpenAIRE

    Abidin, Irwan Shah Zainal; Haseeb, Muhammad; Islam, Rabiul

    2016-01-01

    Malaysia is a rapidly growing economy, especially within the Association of Southeast Asian Nations (ASEAN) region. Exports to ASEAN countries play a vital role in Malaysia's economic growth and development. Additionally, its current chairmanship of ASEAN makes Malaysia more prominent in the region. Consequently, exploring the determinants of Malaysia's export performance with the ASEAN-5 countries, namely Singapore, Thailand, Indonesia, the Philippines and Vietnam, is a fundamental objective of this study. The...

  10. Internationalization of product-service systems: Global, regional or national strategy?

    OpenAIRE

    Parry, G.; Bustinza, O. F.; Vendrell-Herrero, F.; O'Regan, N.

    2016-01-01

    This paper explores the validity of national, regional or global strategies in the provision of a music industry product service system. Quantitative analysis of cross-section data from over 70,000 respondents from 15 geographically spread countries identified a homogeneous group of so-called ‘Out of Touch’ consumers characterized by a shared attitude: they are interested in and have the money to purchase music, but no longer do so. The analysis ascertains if and how re-engaging this group in...

  11. The national psychological/personality profile of Romanians: An in depth analysis of the regional national psychological/personality profile of Romanians

    Directory of Open Access Journals (Sweden)

    David, D.

    2015-12-01

    Full Text Available In this article we perform an in-depth analysis of the national psychological/personality profile of Romanians. Following recent developments in the field (see Rentfrow et al., 2013; 2015), we study the regional national psychological/personality profile of Romanians, based on the Big Five model (i.e., NEO PI/R). Using a representative sample (N1 = 1000), we performed a cluster analysis and identified two bipolar personality profiles in the population: cluster 1, called “Factor X-”, characterized by high neuroticism and low levels of extraversion, openness, agreeableness, and conscientiousness, and cluster 2, called “Factor X+”, characterized by the opposite configuration in personality traits, low neuroticism and high levels of extraversion, openness, agreeableness, and conscientiousness. The same two-cluster pattern/solution emerged in other samples (N = 2200), with other Big Five-based instruments, and by using various methods of data analysis (e.g., direct vs. reversed item scores, controlling for item desirability) and cluster analysis (i.e., with and without “running means”). These two profiles are quite evenly distributed in the overall population, but also across all geographical regions. Moreover, comparing the distribution of the five personality traits, we found just a few small differences between the eight geographical divisions that we used for our analysis. These results suggest that the regional national psychological/personality profile of Romania is quite homogenous. Directions for harnessing the potential of both personality profiles are presented to the reader. Other implications based on the bipolar and fractal structure of the personality profile are discussed from an interdisciplinary perspective.

  12. Wildland fire, risk, and recovery: results of a national survey with regional and racial perspectives

    Science.gov (United States)

    J. Michael Bowker; Siew Hoon Lim; H. Ken Cordell; Gary T. Green; Sandra Rideout-Hanzak; Cassandra Y. Johnson

    2008-01-01

    We used a national household survey to examine knowledge, attitudes, and preferences pertaining to wildland fire. First, we present nationwide results and trends. Then, we examine opinions across region and race. Despite some regional variation, respondents are fairly consistent in their beliefs about assuming personal responsibility for living in fire-prone areas and...

  13. National uranium resource evaluation, Montrose Quadrangle, Colorado

    International Nuclear Information System (INIS)

    Goodknight, C.S.; Ludlam, J.R.

    1981-06-01

    The Montrose Quadrangle in west-central Colorado was evaluated to identify and delineate areas favorable for the occurrence of uranium deposits according to National Uranium Resource Evaluation program criteria. General surface reconnaissance and geochemical sampling were conducted in all geologic environments in the quadrangle. Preliminary data from aerial radiometric and hydrogeochemical and stream-sediment reconnaissance were analyzed and brief followup studies were performed. Twelve favorable areas were delineated in the quadrangle. Five favorable areas contain environments for magmatic-hydrothermal uranium deposits along fault zones in the Colorado mineral belt. Five areas in parts of the Harding and Entrada Sandstones and Wasatch and Ohio Creek Formations are favorable environments for sandstone-type uranium deposits. The area of late-stage rhyolite bodies related to the Lake City caldera is a favorable environment for hydroauthigenic uranium deposits. One small area is favorable for uranium deposits of uncertain genesis. All near-surface Phanerozoic sedimentary rocks are unfavorable for uranium deposits, except parts of four formations. All near-surface plutonic igneous rocks are unfavorable for uranium deposits, except five areas of vein-type deposits along Tertiary fault zones. All near-surface volcanic rocks, except one area of rhyolite bodies and several unevaluated areas, are unfavorable for uranium. All near-surface Precambrian metamorphic rocks are unfavorable for uranium deposits. Parts of two wilderness areas, two primitive areas, and most of the subsurface environment are unevaluated

  14. [Correlation analysis of sporadic breast cancer and BRCA1 gene polymorphisms in the Han Nationality and the Mongol Nationality of Inner Mongolia Region].

    Science.gov (United States)

    Ma, Jinzhu; Liu, Ming; Zhang, Xinlai; BuRi, Gude

    2015-12-08

    To study the correlation between BRCA1 gene polymorphisms, especially at locus 2731 (rs799917), and sporadic breast cancer in the Han nationality and the Mongol nationality of the Inner Mongolia region. Using the prospective study method, 103 cases of patients with sporadic breast cancer (case group) and 103 cases of normal physical examination people (control group) were enrolled. PCR and direct sequencing methods were used for analyzing the correlation of locus 2731 polymorphisms of BRCA1 and sporadic breast cancer in our zone. In the case group, the age stratification, pathologic stage, immunohistochemistry and the distribution of lymph node metastasis had no significant difference between the two ethnic groups (P > 0.05). The age stratification of the control group also had no significant difference between the two ethnic groups (P > 0.05). There was no statistically significant difference in age stratification between the case group and the control group (P > 0.05). In the Inner Mongolia region, three genotypes were detected at locus 2731 of the BRCA1 gene: TT, CT and CC. The frequencies of genotypes TT, CT, CC in the case group were 13.1%, 26.2%, 60.7% (the Han nationality) and 16.7%, 28.6%, 54.7% (the Mongol nationality), respectively. Meanwhile the frequencies of allele T and allele C were 71.8% and 28.2%. In the control group, the frequencies of genotypes TT, CT, CC were 18.0%, 31.1%, 50.9% (the Han nationality) and 23.8%, 38.1%, 38.1% (the Mongol nationality), respectively, and the frequencies of allele T and allele C were 62.9% and 37.1%. BRCA1 gene locus 2731 polymorphism had no significant difference between the two groups (χ(2)=3.438, P=0.752), but the T allele frequency distribution in the case group was significantly increased (χ(2)=4.185, P=0.041). There is no obvious correlation between BRCA1 gene locus 2731 and sporadic breast cancer in the Han nationality and the Mongol nationality of the Inner Mongolia region. The C allele of BRCA1 gene locus 2731 may be one of the

  15. Regionalism in Educational R/D&I: A Policy Analysis for the National Institute of Education.

    Science.gov (United States)

    Hofler, Durward; And Others

    This analysis examines regionalism in the educational research, development, and innovation (R/D&I) context with particular concern for its meaning and significance for the National Institute of Education. The purpose of the analysis is to provide an understanding of regionalism that would be of help to R/D&I policy makers. It is intended…

  16. Kinematical Comparison of the 200 m Backstroke Turns between National and Regional Level Swimmers

    Science.gov (United States)

    Veiga, Santiago; Cala, Antonio; Frutos, Pablo González; Navarro, Enrique

    2013-01-01

    The aims of this investigation were to determine the evolution of selected turn variables during competitive backstroke races and to compare these kinematic variables between two different levels of swimmers. Sixteen national and regional level male swimmers participating in the 200 m backstroke event at the Spanish Swimming Championships in short course (25 m) were selected to analyze their turn performances. The individual distances method with two-dimensional Direct Linear Transformation (2D-DLT) algorithms was used to perform race analyses. National level swimmers presented a shorter “turn time”, a longer “distance in”, a faster “underwater velocity” and “normalized underwater velocity”, and a faster “stroking velocity” than regional level swimmers, whereas no significant differences were detected between levels for the “underwater distance”. National level swimmers maintained similar “turn times” over the event and increased “underwater velocity” and “normalized underwater velocity” in the last (seventh) turn segment, whereas regional level swimmers increased “turn time” in the last half of the race. For both national and regional level swimmers, turn “underwater distance” during the last three turns of the race was significantly shorter while no significant differences in distance into the wall occurred throughout the race. The skill level of the swimmers has an impact on the competitive backstroke turn segments. In a 200 m event, the underwater velocity should be maximized to maintain turn proficiency, whereas turn distance must be subordinated to the average velocity. Key Points The underwater turn velocity is a critical variable related to the swimmers’ level of skill in a 200 m backstroke event. Best swimmers perform faster but not longer turn segments during a 200 m backstroke event. Best swimmers maintain their turn performance throughout the 200 m backstroke event by increasing the underwater velocity

  17. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    Full Text Available The web is the most widely used communication mechanism today, owing to its flexibility and the almost endless supply of tools for browsing it. As a result, roughly a million pages are added to it every day, making it the largest library of textual and multimedia resources ever seen, albeit a library distributed across all the servers that hold that information. As a source of reference, it is important that data retrieval be efficient. Web caching serves this purpose: a technique whereby some web data are temporarily stored on local servers so that they do not have to be requested from the remote server every time a user asks for them. However, the amount of memory available on local servers for storing that information is limited: it must be decided which web objects are stored and which are not. This gives rise to several replacement policies, which are explored in this article. Using an experiment with real web requests, we compare the performance of these techniques.
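    As a concrete example of the kind of replacement policy such comparisons evaluate, the sketch below implements a classic least-recently-used (LRU) cache and counts hits and misses over a toy request trace. This is illustrative only and is not the article's experiment; the trace and capacity are made up.

```python
# Minimal LRU replacement policy: evict the least recently used object when
# the cache is full, and count hits/misses over a request trace.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()     # insertion order tracks recency
        self.hits = self.misses = 0

    def get(self, url, fetch):
        if url in self.store:
            self.store.move_to_end(url)          # mark as most recently used
            self.hits += 1
            return self.store[url]
        self.misses += 1
        obj = fetch(url)                          # simulate a remote fetch
        self.store[url] = obj
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)        # evict least recently used
        return obj

cache = LRUCache(capacity=3)
trace = ["/a", "/b", "/a", "/c", "/d", "/a", "/b"]   # hypothetical request trace
for url in trace:
    cache.get(url, fetch=lambda u: f"<page {u}>")
print(f"hits={cache.hits} misses={cache.misses}")    # hit ratio of the policy
```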

  18. Regional Variation in Acute Kidney Injury Requiring Dialysis in the English National Health Service from 2000 to 2015 - A National Epidemiological Study.

    Directory of Open Access Journals (Sweden)

    Nitin V Kolhe

    Full Text Available The absence of effective interventions in the presence of increasing national incidence and case-fatality in acute kidney injury requiring dialysis (AKI-D) warrants a study of regional variation to explore any potential for improvement. We therefore studied regional variation in the epidemiology of AKI-D in the English National Health Service over a period of 15 years. We analysed Hospital Episode Statistics data for all patients with a diagnosis of AKI-D, using ICD-10-CM codes, in English regions between 2000 and 2015 to study temporal changes in regional incidence and case-fatality. Of 203,758,879 completed discharges between 1st April 2000 and 31st March 2015, we identified 54,252 patients who had AKI-D in the nine regions of England. The population incidence of AKI-D increased variably in all regions over 15 years; however, the regional variation decreased from 3·3-fold to 1·3-fold (p < 0·01). In a multivariable adjusted model, using London as the reference, in the period of 2000-2005, the North East (odds ratio (OR) 1·38; 95% CI 1·01, 1·90), East Midlands (OR 1·38; 95% CI 1·01, 1·90) and West Midlands (OR 1·38; 95% CI 1·01, 1·90) had higher odds of death, while the East of England had lower odds of death (OR 0·66; 95% CI 0·49, 0·90). The North East had a higher OR in all three five-year periods as compared to the other eight regions. Adjusted case-fatality showed significant variability with temporary improvement in some regions, but overall there was no significant improvement in any region over 15 years. We observed considerable regional variation in the epidemiology of AKI-D that was not entirely attributable to variations in demographic or other identifiable clinical factors. These observations make a compelling case for further research to elucidate the reasons and identify interventions to reduce the incidence and case-fatality in all regions.

  19. Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs

    Directory of Open Access Journals (Sweden)

    M. Komar

    2015-01-01

    Full Text Available In this paper, the principal architecture of a general-purpose CPU and its main components are discussed, CPU evolution is considered, and drawbacks that prevent future CPU development are mentioned. Further, solutions proposed so far are addressed and a new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables reliable interaction between cores in multicore CPUs using the terahertz band, 0.1-10 THz. The presented architecture addresses the scalability problem of existing processors and may potentially allow them to scale to tens of cores. As an in-depth analysis of the applicability of the suggested architecture requires accurate prediction of traffic in current and next generations of processors, we consider a set of approaches for traffic estimation in modern CPUs, discussing their benefits and drawbacks. The authors identify traffic measurement using existing software tools as the most promising approach for traffic estimation, and they use the Intel Performance Counter Monitor for this purpose. Three types of CPU loads are considered, including two artificial tests and background system load. For each load type, the amount of data transmitted through the L2-L3 interface is reported for various input parameters, including the number of active cores and its dependence on the number of cores and operational frequency.
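    As a small illustration of the post-processing step such measurements imply, the sketch below converts per-core byte counts collected over a sampling interval into an aggregate core-to-cache data rate. The byte counts are hypothetical placeholders and do not represent actual Intel Performance Counter Monitor output or its file format.

```python
# Hedged post-processing sketch: per-core L2<->L3 traffic counts (bytes over a
# sampling interval) aggregated into a total volume and data rate in Gbit/s.
def core_to_cache_rate(bytes_per_core, interval_s):
    """Aggregate per-core traffic (bytes) over a sampling interval into Gbit/s."""
    total_bytes = sum(bytes_per_core)
    gbit_per_s = total_bytes * 8 / interval_s / 1e9
    return total_bytes, gbit_per_s

# Hypothetical example: 8 active cores sampled over a 1-second interval.
sample = [3.2e9, 2.9e9, 3.1e9, 3.0e9, 2.8e9, 3.3e9, 2.7e9, 3.0e9]
total, rate = core_to_cache_rate(sample, interval_s=1.0)
print(f"total traffic: {total / 1e9:.1f} GB, aggregate rate: {rate:.1f} Gbit/s")
```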

  20. The One Plan Project: A cooperative effort of the National Response Team and the Region 6 Regional Response Team to simplify facility emergency response planning

    International Nuclear Information System (INIS)

    Staves, J.; McCormick, K.

    1997-01-01

    The National Response Team (NRT) in coordination with the Region 6 Response Team (RRT) have developed a facility contingency plan format which would integrate all existing regulatory requirements for contingency planning. This format was developed by a multi-agency team, chaired by the USEPA Region 6, in conjunction with various industry, labor, and public interest groups. The impetus for this project came through the USEPA Office of Chemical Emergency Preparedness and Prevention (CEPPO). The current national oil and hazardous material emergency preparedness and response system is an amalgam of federal, state, local, and industrial programs which are often poorly coordinated. In a cooperative effort with the NRT, the CEPPO conducted a Presidential Review of federal agency authorities and coordination responsibilities regarding release prevention, mitigation, and response. Review recommendations led to a Pilot Project in USEPA Region 6. The Region 6 Pilot Project targeted end users in the intensely industrialized Houston Ship Channel (HSC) area, which is comprised of petroleum and petrochemical companies

  1. Region 3 - National Remedial Action Contracts / Multiple Award Competition (SOL-R3-13-00006)

    Science.gov (United States)

    Region 3 - EPA is performing market research to determine if industry has the capability and capacity to perform the work, on a national level, as described in the attached draft Statement of Work/Performance Work Statement (SOW/PWS).

  2. NATIONAL-REGIONAL COMPONENT IN THE TRAINING OF SPECIALISTS FOR THE BILINGUAL EDUCATION OF PRESCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    Neonila Vyacheslavovna Ivanova

    2015-02-01

    Full Text Available Purpose: The article deals with some aspects of the implementation of the content of the national-regional component in the training of future specialists of preschool education. Methodology: The study draws on the following methodological principles: systematic, personality-based, activity-based, polysubject (dialogical), cultural, and ethnopedagogical approaches. In accordance with the logic of the research, a set of theoretical and empirical methods was used, the combination of which allows the object of study to be explored with confidence (methods for studying teaching experience, methods of theoretical research). Results: A study of research and teaching experience in the implementation of the national-regional content component of professional training for the communicative language development of preschool children in a multilingual context and the dialogue of cultures. Practical implications: The educational system of higher education.

  3. Sub-national entities’ participation in Brazil’s foreign policy and in regional integration processes

    Directory of Open Access Journals (Sweden)

    Deisy Ventura

    2012-09-01

    Full Text Available This article focuses on how sub-national entities’ gradual participation in Brazilian foreign policy has come about, with reference to a decentralised scenario of the decision-making process in Itamaraty, where the ministries and presidential organs have a voice on many strategic themes, mainly concerning development. The article examines the insertion of sub-national entities into the decision-making process in the Southern Common Market (Mercosur, and concludes that in spite of the incipient participation, relevant contributions to the process of regional integration have arisen. Regarding the hypothesis that the participation of the federative entities in the decision-making process generates local and regional development, we argue that this is an alternative to increasing state efficiency. In conclusion, and despite the incipient institutionalisation that does not guarantee their vote in the decision-making process, at least their voice is heard.

  4. Building "Nuestra América:" national sovereignty and regional integration in the americas

    Directory of Open Access Journals (Sweden)

    Renata Keller

    2013-12-01

    Full Text Available This article explores the history of regional integration in the Americas, drawing lessons from the diverse ways that people have sought to unite the hemisphere. It begins at the point when most of the modern nation-states of Latin America came into being: the nineteenth-century wars for independence. From there, it traces various attempts at regional integration, keeping in mind three fundamental questions: How does regional integration compromise sovereignty? Does it have to? Is it worth sacrificing sovereignty to increase integration? The article concludes that while every attempt at regional integration in the Americas has required the participants to voluntarily sacrifice some measure of their sovereignty, the most successful efforts have been those that either kept the sacrifice to a minimum or offered significant enough rewards to offset the loss of sovereignty.

  5. Student teams practice for regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    Student teams (right and left) behind protective walls maneuver their robots on the playing field during practice rounds of the 1999 Southeastern Regional robotic competition at Kennedy Space Center Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, as well as climb onto the platform (foreground) and raise the cache of pillows to a height of eight feet. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  6. Internal Controls over the Department of Defense Transit Subsidy Program within the National Capital Region

    National Research Council Canada - National Science Library

    Granetto, Paul J; Marsh, Patricia A; Pfeil, Lorin T; Gaich, Walter J; Lawrence, Demetria; Hart, Marcia T; Dickison, Ralph W; Varner, Pamela; Foth, Suellen

    2007-01-01

    DoD personnel with oversight responsibility and personnel working within the DoD transit subsidy program for the National Capital Region should read this report to obtain information about internal...

  7. SECURITY RISKS, MYTHS IN A TRANSITIONING SUB-NATIONAL REGIONAL ECONOMY (CROSS RIVER STATE AND IMAGINATIVE GEOGRAPHIES OF NIGERIA

    Directory of Open Access Journals (Sweden)

    J. K. UKWAYI

    2015-03-01

    Full Text Available The emergence of an “international community” through the accumulation of perceived risks that contrast with actually experienced risks (of considerably lower seriousness than those perceived) constitutes one of the interesting (or intriguing) subjects of risk and disaster studies surrounding the 9/11 era. The construction of “imaginative geographies” has frequently been biased: the practices that underlie the mapping of foreign places tend to put down the affected regions in their “paintings” for the global community. The latter are subsequently “demonized” in their ratings of competence for participating in world trade, tourism, and travel, among other social/cultural, economic and political activities. The objective of this article is to highlight how the exaggeration of risks (contrasted with actually existing, lived risks), a practice frequently associated with such adverse “imaginative geographies”, poses a sub-national regional development dilemma in Nigeria’s Niger Delta. We trace the roots of adverse “imaginative geographies” of Nigeria to the Abacha dictatorship (1993-1997). We then highlight the mixed characteristics of Niger Delta conditions during the “return of positive image recapture” by Nigeria’s federal government (re-democratisation of the Fourth Republic, 1999-present, and re-branding campaigns), as well as the adverse conditions that remain. Most significantly, we show that despite these adversities, a combination of favorable geographical size, differentiation, and sub-national regional security programme formulation and management aimed at diversification has created “large oases” of peace and security in Cross River State, a part of the Niger Delta that has been completely unscathed by insurgencies of nearby sub-national regional and more distant national origin. Apart from identifying sub-national regions qualifying for delisting from “adverse imaginative geographies” due to

  8. The Regional Integrated Sciences and Assessments (RISA) Program, Climate Services, and Meeting the National Climate Change Adaptation Challenge

    Science.gov (United States)

    Overpeck, J. T.; Udall, B.; Miles, E.; Dow, K.; Anderson, C.; Cayan, D.; Dettinger, M.; Hartmann, H.; Jones, J.; Mote, P.; Ray, A.; Shafer, M.; White, D.

    2008-12-01

    The NOAA-led RISA Program has grown steadily to nine regions and a focus that includes both natural climate variability and human-driven climate change. The RISAs are, at their core, university-based and heavily invested in partnerships, particularly with stakeholders, NOAA, and other federal agencies. RISA research, assessment and partnerships have led to new operational climate services within NOAA and other agencies, and have become important foundations in the development of local, state and regional climate change adaptation initiatives. The RISA experience indicates that a national climate service is needed, and must include: (1) services prioritized based on stakeholder needs; (2) sustained, ongoing regional interactions with users; (3) a commitment to improve climate literacy; (4) support for assessment as an ongoing, iterative process; (5) full recognition that stakeholder decisions are seldom made using climate information alone; (6) strong interagency partnership; (7) national in implementation and regional in focus; (8) capability spanning local, state, tribal, regional, national and international space scales, and weeks to millennia time scales; and (9) institutional design and scientific support flexible enough to respond to rapidly-changing stakeholder needs. The RISA experience also highlights the central role that universities must play in national climate change adaptation programs. Universities have a tradition of trusted regional stakeholder partnerships, as well as the interdisciplinary expertise - including social science, ecosystem science, law, and economics - required to meet stakeholder climate-related needs; project workforce can also shift rapidly in universities. Universities have a proven ability to build and sustain interagency partnerships. Universities excel in most forms of education and training. And universities often have proven entrepreneurship, technology transfer and private sector

  9. The water balance of the urban Salt Lake Valley: a multiple-box model validated by observations

    Science.gov (United States)

    Stwertka, C.; Strong, C.

    2012-12-01

    A main focus of the recently awarded National Science Foundation (NSF) EPSCoR Track-1 research project "innovative Urban Transitions and Arid-region Hydro-sustainability (iUTAH)" is to quantify the primary components of the water balance for the Wasatch region, and to evaluate their sensitivity to climate change and projected urban development. Building on the multiple-box model that we developed and validated for carbon dioxide (Strong et al 2011), mass balance equations for water in the atmosphere and surface are incorporated into the modeling framework. The model is used to determine how surface fluxes, ground-water transport, biological fluxes, and meteorological processes regulate water cycling within and around the urban Salt Lake Valley. The model is used to evaluate the hypotheses that increased water demand associated with urban growth in Salt Lake Valley will (1) elevate sensitivity to projected climate variability and (2) motivate more attentive management of urban water use and evaporative fluxes.
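
    A minimal single-box sketch of the mass-balance bookkeeping such a model performs is given below; the flux and storage values are placeholders, not iUTAH data, and the function name is hypothetical.

        # Minimal single-box water balance sketch (illustrative only; the iUTAH
        # multiple-box model couples several such boxes for atmosphere and surface).
        # All values below are hypothetical placeholders, in mm per day.

        def step_storage(storage_mm, precip_mm, evap_mm, runoff_mm, import_mm=0.0):
            """Advance stored water by one day: dS = P + imports - ET - R."""
            return storage_mm + precip_mm + import_mm - evap_mm - runoff_mm

        storage = 100.0           # initial surface storage (hypothetical)
        daily_fluxes = [          # (precipitation, evapotranspiration, runoff)
            (2.0, 3.5, 0.4),
            (0.0, 4.0, 0.3),
            (5.0, 3.0, 1.0),
        ]
        for p, et, r in daily_fluxes:
            storage = step_storage(storage, p, et, r)
        print(f"storage after 3 days: {storage:.1f} mm")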

  10. Regional homogeneity of electoral space: comparative analysis (on the material of 100 national cases

    Directory of Open Access Journals (Sweden)

    A. O. Avksentiev

    2015-12-01

    Full Text Available In the article the author examines the dependence of electoral behavior on territorial belonging. The categories of «regional homogeneity» and «electoral space» are conceptualized. It is argued that regional homogeneity is a characteristic of electoral space and can be quantified. Quantitative measurement of a state's regional homogeneity has a direct connection with the risk of separatism, civil conflict, or a legitimacy crisis in deviant territories. A formula is proposed for the quantitative evaluation of regional homogeneity based on a standard statistical instrument, the coefficient of variation. Possible directions of study with the use of this index are defined, both for individual political subjects and for the whole political space (state, region, electoral district). Appropriate indexes are calculated for the Ukrainian electoral space (returns of the 1991-2015 elections) and 100 other national cases. The dynamics of Ukraine's regional homogeneity is analyzed on the material of the 1991-2015 electoral statistics.
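
    A small sketch of such an index follows, under the assumption that it is the coefficient of variation of a party's vote share across regions; the shares are invented, not actual election returns.

        # Coefficient-of-variation homogeneity index (assumed form of the index;
        # the regional vote shares below are made-up numbers).
        from statistics import mean, pstdev

        def variation_coefficient(shares):
            """Population standard deviation divided by the mean, in percent."""
            m = mean(shares)
            return 100.0 * pstdev(shares) / m

        regional_shares = [0.42, 0.38, 0.55, 0.61, 0.33]   # hypothetical regional vote shares
        cv = variation_coefficient(regional_shares)
        print(f"coefficient of variation: {cv:.1f}%  (lower = more homogeneous)")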

  11. La face cachée de l’ancestralité. Masques et affinité chez les Matis d’Amazonie brésilienne

    OpenAIRE

    Erikson, Philippe

    2009-01-01

    The hidden face of ancestrality. Masks and affinity among the Matis of Brazilian Amazonia. This article shows how Matis masks reveal western Amazonian conceptions of temporality and of the succession of generations. After a discussion of the ceremonial aspect of the masquerades and of the ontological characteristics attributed to the mariwin spirits, the text argues that the latter, although associated with the group's dead and with endogenous values, represent aff...

  12. Geologic map of the west-central Buffalo National River region, northern Arkansas

    Science.gov (United States)

    Hudson, Mark R.; Turner, Kenzie J.

    2014-01-01

    This map summarizes the geology of the west-central Buffalo National River region in the Ozark Plateaus region of northern Arkansas. Geologically, the region lies on the southern flank of the Ozark dome, an uplift that exposes oldest rocks at its center in Missouri. Physiographically, the map area spans the Springfield Plateau, a topographic surface generally held up by Mississippian cherty limestone and the higher Boston Mountains to the south, held up by Pennsylvanian rocks. The Buffalo River flows eastward through the map area, enhancing bedrock erosion of an approximately 1,600-ft- (490-m-) thick sequence of Ordovician, Mississippian, and Pennsylvanian carbonate and clastic sedimentary rocks that have been mildly deformed by a series of faults and folds. Quaternary surficial units are present as alluvial deposits along major streams, including a series of terrace deposits from the Buffalo River, as well as colluvium and landslide deposits mantling bedrock on hillslopes.

  13. POTTERY IN CULTURE UKRAINIAN NATIONAL HOUSING OF PRIDNEPROVSKYI REGION

    Directory of Open Access Journals (Sweden)

    YEVSEEVA G. P.

    2016-07-01

    Full Text Available Formulation of the problem. Housing is a prerequisite for human existence, so it holds an important place in the material culture of every nation. In the exterior, ethnic identity is expressed mainly in the design of certain elements of the dwelling and street facilities. The interior of the dwelling carries the most ethnic specificity, depending on external conditions: the character of planning, construction, furniture, decorative items, dishes and more. The residential interior is similar throughout the area of Ukrainian settlement. Such similarity is natural: in the internal space of the dwelling a particular people expresses its understanding of feasibility, utility and beauty. The beauty and similarity of the pottery that the Ukrainian nation has used from its beginnings until today confirm the unity of the aesthetic preferences of the people and the conveniences of daily life. Analysis of publications. A number of scientific papers are devoted to the study of Ukrainian national dishes, their specific features and the means of their artistic design. Information on the national pottery of the central and southern regions of Ukraine is contained in the writings of scholars of the nineteenth century and of scholars of the period of independence. Some issues of Ukrainian pottery and its typology were considered in the work of scientists of the Soviet period. The purpose of the article is to analyze the types of pottery that were in use in the Ukrainian national housing of the Prydniprovia region. Conclusions. Each peasant house, today as a hundred years ago, saturating domestic objects with a dual reality of hidden and ancient meanings, is a kind of unique personal world, closely intertwined with the general social commonplace, responding to its effects and actively influencing it, forming the harmonious environment in which modern man lives. It is therefore important for us to know, for example, not the evolution of the Ukrainian house as such, but its structure and the nature of the technology of hut building even a

  14. U.S. Environmental Protection Agency, Region 6 National Priorities List (NPL) Sites - 05/12/2014

    Data.gov (United States)

    U.S. Environmental Protection Agency — Point locations for sites in U.S. Environmental Protection Agency, Region 6 which are documented as being part of the National Priorities List as of May 12, 2014....

  15. 77 FR 9698 - Notice of Continuation of Visitor Services

    Science.gov (United States)

    2012-02-17

    ... National Parks. AMIS002-89 Lake Amistad Resort Amistad National and Marina, LLC. Recreation Area. AMIS003-87 Rough Canyon Amistad National Marina, LLC. Recreation Area. CACH001-84 White Dove, Inc., Canyon de...

  16. USGS Topo Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS Topo is a topographic tile cache base map that combines the most current data (Boundaries, Names, Transportation, Elevation, Hydrography, Land Cover, and other...

  17. Criteria and application methodology of physical protection of nuclear materials within the national and regional boundaries

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Cesario, R.H.; Giustina, D.H.; Canibano, J.

    1998-01-01

    Full text: Physical protection against robbery and diversion of nuclear materials and against sabotage of nuclear installations by individuals or groups has long been a matter of national and international concern. Although the obligation to create and implement an effective physical protection system for nuclear materials and installations in the territory of a given State falls entirely on that State's Government, whether this obligation is fulfilled, and to what extent, also concerns the other States. Physical protection has therefore become a reason for regional co-operation. The need for co-operation is evident in those cases where the efficiency of physical protection within the territory of a given State also depends on the appropriate measures taken by other States, especially when dealing with materials transported across national borders. The above constitutes an important framework for regional co-operation on the physical protection of nuclear materials. For that reason, the Nuclear Regulatory Authority established criteria and conditions aimed at mitigating diversion, robbery and sabotage of nuclear installations. As a working philosophy, a simplified physical protection model was established for application in Argentina which, through the ARCAL No. 23 project, will be extrapolated to the whole Latin-American region, concluding that the application of appropriate physical protection systems at the regional level will lead to their strengthening at the national level. (author) [es

  18. Analysis of sheltering and evacuation strategies for a national capital region nuclear detonation scenario.

    Energy Technology Data Exchange (ETDEWEB)

    Yoshimura, Ann S.; Brandt, Larry D.

    2011-12-01

    Development of an effective strategy for shelter and evacuation is among the most important planning tasks in preparation for response to a low-yield nuclear detonation in an urban area. Extensive studies have been performed and guidance published that highlight the key principles for saving lives following such an event. However, region-specific data are important in the planning process as well. This study examines some of the unique regional factors that impact planning for a 10 kT detonation in the National Capital Region. The work utilizes a single scenario to examine regional impacts as well as the shelter-evacuate decision alternatives at one exemplary point. For most Washington, DC neighborhoods, the excellent assessed quality of available shelter makes shelter-in-place or selective transit to a nearby shelter a compelling post-detonation strategy.

  19. [Leather dust and systematic research on occupational tumors: the national and regional registry TUNS].

    Science.gov (United States)

    Mensi, Carolina; Sieno, Claudia; Consonni, Dario; Riboldi, Luciano

    2012-01-01

    Sinonasal cancers (SNC) are rare tumors characterized by a high occupational etiologic fraction. For this reason their incidence and etiology can be actively monitored by a dedicated cancer registry. The National Registry of these tumours is situated at the Italian Institute for Occupational Safety and Prevention (ISPESL) and is based on Regional Operating Centres (ROCs). In Lombardy Region the ROC was established at the end of 2007 with the purpose of carrying out systematic surveillance and thereby supporting, in the most suitable way, scientific research and prevention actions in high-risk working sectors. The aims of this surveillance are: to estimate the regional incidence of SNC, and to define the different sources of occupational and environmental exposure, both known (wood, leather, nickel, chromium) and unknown. The registry collects all new incident cases of epithelial SNC occurring in residents of Lombardy Region since 01.01.2008. The regional Registry is managed according to National Guidelines. Up to January 2010 we received 596 cases of suspected SNC; only 91 (15%) of these were actually incident cases according to the inclusion criteria of the Registry, and they were predominantly adenocarcinoma and squamous carcinoma. In 2008 the regional age-standardized incidence rate of SNC was 0.8 per 100,000 for males and 0.5 per 100,000 for females. Occupational or environmental exposure to wood or leather dust was ascertained in over 50% of cases. The occupational exposure to leather dust was due to work in shoe factories. Our preliminary findings confirm that occupational exposures to wood and leather dusts are the most relevant risk factors for SNC. The study of occupational sectors and job activity in cases without such exposure could suggest new etiologic hypotheses.

  20. Regional not-for-profit systems: can they compete with national investor-owned firms?

    Science.gov (United States)

    Hernandez, R; Hill, D B

    1984-01-01

    The relative competitive advantages of regional and national systems are summarized in Figure One. As illustrated, each type of system has unique competitive advantages at the corporate level. While it is difficult to state that either system has distinct advantages that place it in a superior position relative to the other, it seems that in the short run investor-owned systems have operating characteristics that may result in more efficient internal functioning because of more centralized control over resource allocation and performance systems, greater possibilities for economies of scale, and greater access to capital. However, it was previously noted that growing pressures from government and the business community will lead to tighter constraints on the profitability of investments in the health care sector. The possibility of this shift suggests that the access-to-capital advantage enjoyed by investor-owned systems may not continue. Additionally, regional systems that are part of larger affiliated organizations such as the Sun Alliance and the Voluntary Hospitals of America are developing means to pool their access to debt funds, thus reducing the cost of capital for member institutions. The group purchasing contracts developed by these large systems have also resulted in significant savings. The distinction between regional and national systems on centralized control is becoming less pronounced. Investor-owned systems are seeking to determine how they might best decentralize selected decisions to be more responsive to local markets, while not-for-profit regional systems are recognizing that they must centralize selected decisions to obtain more efficient, rational operation. The long-run outlook suggests that the competitive advantages that have been identified will become less pronounced and that both systems will survive in the marketplace.

  1. Impacts of Climate Change on Colombia’s National and Regional Security

    Science.gov (United States)

    2009-10-01

    Peru and Chile have relatively diverse, industrialized economies and greater domestic resources and state capacities for adaptation. Bolivia...

  2. Heterogeneity in regional notification patterns and its impact on aggregate national case notification data: the example of measles in Italy

    Directory of Open Access Journals (Sweden)

    Butler Alisa R

    2003-07-01

    Full Text Available Abstract Background A monthly time series of measles case notifications exists for Italy from 1949 onwards, although its usefulness is seriously undermined by extensive under-reporting which varies strikingly between regions, giving rise to the possibility of significant distortions in epidemic patterns seen in aggregated national data. Results A corrected national time series is calculated using an algorithm based upon the approximate equality between births and measles cases; under-reporting estimates are presented for each Italian region, and poor levels of reporting in Southern Italy are confirmed. Conclusion Although an order of magnitude larger, despite great heterogeneity between regions in under-reporting and in epidemic patterns, the shape of the corrected national time series remains close to that of the aggregated uncorrected data. This suggests such aggregate data may be quite robust to great heterogeneity in reporting and epidemic patterns at the regional level. The corrected data set maintains an epidemic pattern distinct from that of England and Wales.
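
    A minimal sketch of the births-versus-cases correction idea described above follows; the regional birth and case totals are invented numbers, not the Italian data.

        # If nearly everyone eventually catches measles (pre-vaccination), long-run
        # reported cases should roughly equal births, so the ratio births/cases gives
        # a per-region under-reporting correction factor.

        regions = {
            # region: (total births, total reported cases) over the same long period
            "North": (1_000_000, 400_000),
            "South": (1_200_000, 150_000),
        }

        factors = {r: births / cases for r, (births, cases) in regions.items()}

        monthly_reports = {"North": 2_500, "South": 900}    # one month's notifications (hypothetical)
        corrected_total = sum(monthly_reports[r] * factors[r] for r in regions)
        print({r: round(f, 2) for r, f in factors.items()}, int(corrected_total))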

  3. We are the opposite of you! Mirroring of national, regional and ethnic stereotypes

    Czech Academy of Sciences Publication Activity Database

    Hřebíčková, Martina; Graf, Sylvie; Tegdes, T.; Brezina, I.

    2017-01-01

    Roč. 157, č. 6 (2017), s. 703-719 ISSN 0022-4545 R&D Projects: GA ČR GA17-14387S; GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : ethnic stereotypes * five-factor model * mirroring * national stereotypes * regional stereotypes Subject RIV: AN - Psychology OBOR OECD: Psychology (including human - machine relations) Impact factor: 0.844, year: 2016

  4. Assessing global, regional, national and sub-national capacity for public health research: a bibliometric analysis of the Web of Science(TM) in 1996-2010.

    Science.gov (United States)

    Badenhorst, Anna; Mansoori, Parisa; Chan, Kit Yee

    2016-06-01

    The past two decades have seen a large increase in investment in global public health research. There is a need for increased coordination and accountability, particularly in understanding where funding is being allocated and who has capacity to perform research. In this paper, we aim to assess global, regional, national and sub-national capacity for public health research and how it is changing over time in different parts of the world. To allow comparisons of regions, countries and universities/research institutes over time, we relied on Web of Science(TM) database and used Hirsch (h) index based on 5-year-periods (h5). We defined articles relevant to public health research with 98% specificity using the combination of search terms relevant to public health, epidemiology or meta-analysis. Based on those selected papers, we computed h5 for each country of the world and their main universities/research institutes for these 5-year time periods: 1996-2000, 2001-2005 and 2006-2010. We computed h5 with a 3-year-window after each time period, to allow citations from more recent years to accumulate. Among the papers contributing to h5-core, we explored a topic/disease under investigation, "instrument" of health research used (eg, descriptive, discovery, development or delivery research); and universities/research institutes contributing to h5-core. Globally, the majority of public health research has been conducted in North America and Europe, but other regions (particularly Eastern Mediterranean and South-East Asia) are showing greater improvement rate and are rapidly gaining capacity. Moreover, several African nations performed particularly well when their research output is adjusted by their gross domestic product (GDP). In the regions gaining capacity, universities are contributing more substantially to the h-core publications than other research institutions. In all regions of the world, the topics of articles in h-core are shifting from communicable to non
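
    The h5 metric used here is the standard Hirsch index restricted to papers from a 5-year window; the sketch below computes it from a hypothetical citation list.

        # h-index: the largest h such that h papers each received at least h citations.
        def h_index(citations):
            ranked = sorted(citations, reverse=True)
            h = 0
            for i, c in enumerate(ranked, start=1):
                if c >= i:
                    h = i
                else:
                    break
            return h

        papers_2006_2010 = [45, 30, 12, 9, 9, 7, 4, 2, 0]   # citations per paper (invented)
        print("h5 =", h_index(papers_2006_2010))            # -> 6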

  5. Relevance of PLUREL's results to policies at EU, national, regional and local level

    DEFF Research Database (Denmark)

    Fertner, Christian; Nielsen, Thomas Alexander Sick

    and results to policies and policy development at the EU-level, as well as the national and regional level. PLUREL has peri-urban land use relationships as its main focus. This includes analysis of drivers, consequences, policies and scenarios for the future. Even though PLUREL aims for pan-European coverage...... of natural resources as well as an attractive development in general. Besides these spatially relevant sector policies, the EU enforces legislation which is translated into spatially explicit instruments at the sub-regional level. E.g. the Habitat and Birds Directive caused the development of Natura 2000 areas......, an EU-wide network of nature protection areas. The implementation of Trans-European Networks through funding programmes is another sector policy having an impact on land-use change and rural-urban relations. On the sub-regional scale the perception of overall goals like sustainability can be very...

  6. Journal Publication in Chile, Colombia, and Venezuela: University Responses to Global, Regional, and National Pressures and Trends

    Science.gov (United States)

    Delgado, Jorge Enrique

    2011-01-01

    Background. This project was motivated by the impressive growth that scholarly/scientific journals in Latin America have shown in recent decades. That advance is attributed to global, regional, and national pressures and trends, as well as a response to obstacles that scholars/researchers from the region face to be published in prestigious…

  7. USGS Imagery Topo Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS Imagery Topo is a topographic tile cache base map with orthoimagery as a backdrop, and combines the most current data (Boundaries, Names, Transportation,...

  8. National, Regional and Global Certification Bodies for Polio Eradication: A Framework for Verifying Measles Elimination.

    Science.gov (United States)

    Deblina Datta, S; Tangermann, Rudolf H; Reef, Susan; William Schluter, W; Adams, Anthony

    2017-07-01

    The Global Certification Commission (GCC), Regional Certification Commissions (RCCs), and National Certification Committees (NCCs) provide a framework of independent bodies to assist the Global Polio Eradication Initiative (GPEI) in certifying and maintaining polio eradication in a standardized, ongoing, and credible manner. Their members meet regularly to comprehensively review population immunity, surveillance, laboratory, and other data to assess polio status in the country (NCC), World Health Organization (WHO) region (RCC), or globally (GCC). These highly visible bodies provide a framework to be replicated to independently verify measles and rubella elimination in the regions and globally. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  9. Geographic Information for Analysis of Highway Runoff-Quality Data on a National or Regional Scale in the Conterminous United States

    Science.gov (United States)

    Smieszek, Tomas W.; Granato, Gregory E.

    2000-01-01

    Spatial data are important for interpretation of water-quality information on a regional or national scale. Geographic information systems (GIS) facilitate interpretation and integration of spatial data. The geographic information and data compiled for the conterminous United States during the National Highway Runoff Water-Quality Data and Methodology Synthesis project is described in this document, which also includes information on the structure, file types, and the geographic information in the data files. This 'geodata' directory contains two subdirectories, labeled 'gisdata' and 'gisimage.' The 'gisdata' directory contains ArcInfo coverages, ArcInfo export files, shapefiles (used in ArcView), Spatial Data Transfer Standard Topological Vector Profile format files, and meta files in subdirectories organized by file type. The 'gisimage' directory contains the GIS data in common image-file formats. The spatial geodata includes two rain-zone region maps and a map of national ecosystems originally published by the U.S. Environmental Protection Agency; regional estimates of mean annual streamflow, and water hardness published by the Federal Highway Administration; and mean monthly temperature, mean annual precipitation, and mean monthly snowfall modified from data published by the National Climatic Data Center and made available to the public by the Oregon Climate Service at Oregon State University. These GIS files were compiled for qualitative spatial analysis of available data on a national and(or) regional scale and therefore should be considered as qualitative representations, not precise geographic location information.

  10. A millennium-length reconstruction of Bear River stream flow, Utah

    Science.gov (United States)

    R. J. DeRose; M. F. Bekker; S.-Y. Wang; B. M. Buckley; R. K. Kjelgren; T. Bardsley; T. M. Rittenour; E. B. Allen

    2015-01-01

    The Bear River contributes more water to the eastern Great Basin than any other river system. It is also the most significant source of water for the burgeoning Wasatch Front metropolitan area in northern Utah. Despite its importance for water resources for the region’s agricultural, urban, and wildlife needs, our understanding of the variability of Bear River’s stream...

  11. The vulnerability of tourism and recreation in the National Capital Region to climate change

    Energy Technology Data Exchange (ETDEWEB)

    Scott, D; Jones, B. [Waterloo Univ., ON (Canada). Faculty of Environmental Studies; Khaled, H.A. [National Capital Commission, Ottawa, ON (Canada)

    2005-03-15

    The potential impact of climate change on recreation and tourism in Canada's National Capital Region was assessed. The objectives of the study were to examine two important issues, including how climate change will influence the seasonality of major recreation and tourism segments in the winter and summer months. The study analysed the disparate vulnerability of recreation and tourism segments to climate variability and change, explored risks and opportunities for recreation and tourism in the region, and examined management adaptation strategies. The study was conducted in several phases involving consultation meetings with National Capital Commission staff, data compilation and development of climate change scenarios. This was followed by a climate change impact assessment. The report also provided information on the methodology used for the study and on climate change impact indicators. It was concluded that as a result of climate change, the Winterlude season would become shorter and that the timeframe for skating on the Rideau Canal was projected to be shortened. 61 refs., 23 tabs., 20 figs., 2 appendices.

  12. Assessing the hydrologic response to wildfires in mountainous regions

    Science.gov (United States)

    Havel, Aaron; Tasdighi, Ali; Arabi, Mazdak

    2018-04-01

    This study aims to understand the hydrologic responses to wildfires in mountainous regions at various spatial scales. The Soil and Water Assessment Tool (SWAT) was used to evaluate the hydrologic responses of the upper Cache la Poudre Watershed in Colorado to the 2012 High Park and Hewlett wildfire events. A baseline SWAT model was established to simulate the hydrology of the study area between the years 2000 and 2014. A procedure involving land use and curve number updating was implemented to assess the effects of wildfires. Application of the proposed procedure provides the ability to simulate the hydrologic response to wildfires seamlessly through mimicking the dynamics of the changes due to wildfires. The wildfire effects on curve numbers were determined by comparing the probability distribution of curve numbers after calibrating the model for pre- and post-wildfire conditions. Daily calibration and testing of the model produced very good results. No-wildfire and wildfire scenarios were created and compared to quantify changes in average annual total runoff volume, water budgets, and full streamflow statistics at different spatial scales. At the watershed scale, wildfire conditions showed little impact on the hydrologic responses. However, a runoff increase of up to 75% was observed between the scenarios in sub-watersheds with high burn intensity. Generally, higher surface runoff and decreased subsurface flow were observed under post-wildfire conditions. Flow duration curves developed for burned sub-watersheds using full streamflow statistics showed that less frequent streamflows become greater in magnitude. A linear regression model was developed to assess the relationship between percent burned area and runoff increase in Cache la Poudre Watershed. A strong (R2 > 0.8) and statistically significant relationship was found. Evaluation of the full streamflow statistics through application of flow duration curves revealed that the wildfires had a higher effect on peak flows, which may increase the risk of flash floods in post
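
    A sketch of how a flow duration curve of the kind described can be computed from daily flows is given below; the synthetic series and the 1.3 scaling factor are placeholders, not SWAT output.

        # Rank daily flows and compute exceedance probability for two scenarios.
        import numpy as np

        def flow_duration_curve(flows):
            """Return (exceedance probability in %, flows sorted descending)."""
            q = np.sort(np.asarray(flows))[::-1]
            prob = 100.0 * np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting position
            return prob, q

        pre_fire = np.random.lognormal(mean=1.0, sigma=0.8, size=365)   # synthetic daily flows
        post_fire = pre_fire * 1.3                                      # crude stand-in for a burned scenario
        p_pre, q_pre = flow_duration_curve(pre_fire)
        p_post, q_post = flow_duration_curve(post_fire)
        print("Flow exceeded 5% of the time:", q_pre[int(0.05 * 365)], "vs", q_post[int(0.05 * 365)])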

  13. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    Science.gov (United States)

    Haruna, J.; Murai, K.; Itoh, J.; Yamada, N.; Hirano, Y.; Fujimori, T.; Homma, T.

    2011-03-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, a FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective, but also the heaviest, Mg-alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system, and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype FW was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.
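
    The 3 MJ figure can be sanity-checked with the standard rotational kinetic energy formula, as in the sketch below; the mass and radius are assumptions for illustration, not the authors' design values.

        # Kinetic energy of a solid-disc flywheel: E = 1/2 * I * w^2, with I = 1/2 * m * r^2.
        import math

        def rpm_for_energy(energy_j, mass_kg, radius_m):
            inertia = 0.5 * mass_kg * radius_m ** 2          # solid disc moment of inertia
            omega = math.sqrt(2.0 * energy_j / inertia)      # angular speed, rad/s
            return omega * 60.0 / (2.0 * math.pi)            # convert to rpm

        # 50 kg, 0.25 m radius disc (assumed) storing 3 MJ
        print(f"{rpm_for_energy(3e6, 50.0, 0.25):,.0f} rpm")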

  14. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    International Nuclear Information System (INIS)

    Haruna, J; Itoh, J; Murai, K; Yamada, N; Hirano, Y; Homma, T; Fujimori, T

    2011-01-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, a FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective, but also the heaviest, Mg-alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system, and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype FW was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.

  15. Financial Management: Processing General Services Administration Rent Bills for DoD Customers in the National Capital Region

    National Research Council Canada - National Science Library

    Granetto, Paul

    2003-01-01

    .... The Washington Headquarters Services (WHS) is responsible for the oversight and management of administrative space occupied by DoD agencies and Military departments in the National Capital Region...

  16. A multicriteria framework for producing local, regional, and national insect and disease risk maps

    Science.gov (United States)

    Frank J. Jr. Krist; Frank J. Sapio

    2010-01-01

    The construction of the 2006 National Insect and Disease Risk Map, compiled by the USDA Forest Service, State and Private Forestry Area, Forest Health Protection Unit, resulted in the development of a GIS-based, multicriteria approach for insect and disease risk mapping that can account for regional variations in forest health concerns and threats. This risk mapping...

  17. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  18. National survey of crystalline rocks and recommendations of regions to be explored for high-level radioactive waste repository sites

    International Nuclear Information System (INIS)

    Smedes, H.W.

    1983-04-01

    A reconnaissance of the geological literature on large regions of exposed crystalline rocks in the United States provides the basis for evaluating if any of those regions warrant further exploration toward identifying potential sites for development of a high-level radioactive waste repository. The reconnaissance does not serve as a detailed evaluation of regions or of any smaller subunits within the regions. Site performance criteria were selected and applied insofar as a national data base exists, and guidelines were adopted that relate the data to those criteria. The criteria include consideration of size, vertical movements, faulting, earthquakes, seismically induced ground motion, Quaternary volcanic rocks, mineral deposits, high-temperature convective ground-water systems, hydraulic gradients, and erosion. Brief summaries of each major region of exposed crystalline rock, and national maps of relevant data provided the means for applying the guidelines and for recommending regions for further study. It is concluded that there is a reasonable likelihood that geologically suitable repository sites exist in each of the major regions of crystalline rocks. The recommendation is made that further studies first be conducted of the Lake Superior, Northern Appalachian and Adirondack, and the Southern Appalachian Regions. It is believed that those regions could be explored more effectively and suitable sites probably could be found, characterized, verified, and licensed more readily there than in the other regions

  19. National survey of crystalline rocks and recommendations of regions to be explored for high-level radioactive waste repository sites

    Energy Technology Data Exchange (ETDEWEB)

    Smedes, H.W.

    1983-04-01

    A reconnaissance of the geological literature on large regions of exposed crystalline rocks in the United States provides the basis for evaluating if any of those regions warrant further exploration toward identifying potential sites for development of a high-level radioactive waste repository. The reconnaissance does not serve as a detailed evaluation of regions or of any smaller subunits within the regions. Site performance criteria were selected and applied insofar as a national data base exists, and guidelines were adopted that relate the data to those criteria. The criteria include consideration of size, vertical movements, faulting, earthquakes, seismically induced ground motion, Quaternary volcanic rocks, mineral deposits, high-temperature convective ground-water systems, hydraulic gradients, and erosion. Brief summaries of each major region of exposed crystalline rock, and national maps of relevant data provided the means for applying the guidelines and for recommending regions for further study. It is concluded that there is a reasonable likelihood that geologically suitable repository sites exist in each of the major regions of crystalline rocks. The recommendation is made that further studies first be conducted of the Lake Superior, Northern Appalachian and Adirondack, and the Southern Appalachian Regions. It is believed that those regions could be explored more effectively and suitable sites probably could be found, characterized, verified, and licensed more readily there than in the other regions.

  20. Leveraging KVM Events to Detect Cache-Based Side Channel Attacks in a Virtualization Environment

    Directory of Open Access Journals (Sweden)

    Ady Wahyudi Paundu

    2018-01-01

    Full Text Available Cache-based side channel attack (CSCa) techniques in virtualization systems are becoming more advanced, while defense methods against them are still perceived as impractical. The most recent CSCa variant, called Flush + Flush, has shown that the current detection methods can be easily bypassed. Within this work, we introduce a novel monitoring approach to detect CSCa operations inside a virtualization environment. We utilize the Kernel Virtual Machine (KVM) event data in the kernel and process this data using a machine learning technique to identify any CSCa operation in the guest Virtual Machine (VM). We evaluate our approach using Receiver Operating Characteristic (ROC) diagrams of multiple attack and benign operation scenarios. Our method successfully separates the CSCa datasets from the non-CSCa datasets, in both trained and non-trained data scenarios. The successful classification also includes the Flush + Flush attack scenario. We are also able to explain the classification results by extracting the set of most important features that separate both classes using their Fisher scores, and show that our monitoring approach can work to detect CSCa in general. Finally, we evaluate the overhead impact of our CSCa monitoring method and show that it has a negligible computational overhead on the host and the guest VM.
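
    A sketch of the general detection pipeline described above follows: per-interval event counts fed to a classifier and scored with ROC AUC. The event types, counts, and the random-forest choice are assumptions for illustration, not the paper's actual features or model.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic per-interval event-count features for benign and attack-like guests.
        rng = np.random.default_rng(0)
        benign = rng.poisson(lam=[50, 5, 20], size=(500, 3))
        attack = rng.poisson(lam=[80, 25, 22], size=(500, 3))
        X = np.vstack([benign, attack])
        y = np.r_[np.zeros(500), np.ones(500)]               # 0 = benign, 1 = attack

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))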

  1. Referees check robots after qualifying match at regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    Referees check the robots on the floor of the playing field after a qualifying match of the 1999 Southeastern Regional robotic competition at Kennedy Space Center Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, as well as climb onto the platform (with flags) and raise the cache of pillows to a height of eight feet. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  2. MIDWESTERN REGIONAL CENTER OF THE DOE NATIONAL INSTITUTE FOR CLIMATIC CHANGE RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    Burton, Andrew J. [Michigan Technological University

    2014-02-28

    The goal of NICCR (National Institute for Climatic Change Research) was to mobilize university researchers, from all regions of the country, in support of the climatic change research objectives of DOE/BER. The NICCR Midwestern Regional Center (MRC) supported work in the following states: North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, Minnesota, Iowa, Missouri, Wisconsin, Illinois, Michigan, Indiana, and Ohio. The MRC of NICCR was able to support nearly $8 million in climatic change research, including $6,671,303 for twenty projects solicited and selected by the MRC over five requests for proposals (RFPs) and $1,051,666 for the final year of ten projects from the discontinued DOE NIGEC (National Institute for Global Environmental Change) program. The projects selected and funded by the MRC resulted in 135 peer-reviewed publications and supported the training of 25 PhD students and 23 Masters students. Another 36 publications were generated by the final year of continuing NIGEC projects supported by the MRC. The projects funded by the MRC used a variety of approaches to answer questions relevant to the DOE’s climate change research program. These included experiments that manipulated temperature, moisture and other global change factors; studies that sought to understand how the distribution of species and ecosystems might change under future climates; studies that used measurements and modeling to examine current ecosystem fluxes of energy and mass and those that would exist under future conditions; and studies that synthesized existing data sets to improve our understanding of the effects of climatic change on terrestrial ecosystems. In all of these efforts, the MRC specifically sought to identify and quantify responses of terrestrial ecosystems that were not well understood or not well modeled by current efforts. The MRC also sought to better understand and model important feedbacks between terrestrial ecosystems, atmospheric chemistry, and regional

  3. National Centers for Environmental Prediction (NCEP) Regional Ocean Forecast System (ROFS) model output from 1997-01-01 to 2007-09-05

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Regional Ocean Forecast System (ROFS) was developed jointly by the Ocean Modeling Branch of the National Weather Service's Environmental Modeling Center, the...

  4. [National and regional market penetration rates of generic's high dosage buprenorphine: its evolution from 2006 to 2008, using reimbursed drug database].

    Science.gov (United States)

    Boczek, Christelle; Frauger, Elisabeth; Micallef, Joëlle; Allaria-Lapierre, Véronique; Reggio, Patrick; Sciortino, Vincent

    2012-01-01

    To assess the national market penetration rate (PR) of generic high-dosage buprenorphine (HDB) in 2008 and its evolution since marketing began (2006), for each dosage and at the regional level. Retrospective study using the national and regional health reimbursement databases over three years (2006-2008). In 2008, the national PR of generic HDB was 31%. The PR for each dosage was 45% for 0.4 mg, 36% for 2 mg and 19% for 8 mg. The PR based on the Defined Daily Dose (DDD) was 23% in 2008, 15% in 2007 and 4% in 2006. In 2008, at the regional level, disparities were observed in the adjusted penetration rate, from 15% in Île de France to 39% in Champagne Ardennes Lorraine. The national PR of generic HDB has increased. There are differences in PR in terms of dosage and area. However, this PR is still low (in 2008, 82% of the delivered drugs were generics). © 2012 Société Française de Pharmacologie et de Thérapeutique.

  5. Regional development and regional policy

    OpenAIRE

    Šabić, Dejan; Vujadinović, Snežana

    2017-01-01

    Economic polarization is a process that is present at the global, national and regional levels. Economic activity is extremely spatially concentrated. Cities and developed regions use the agglomeration effect to attract labor and capital, thus achieving more favorable economic conditions than agrarian regions. Scientific research and European experience over the past decades have contributed to the disagreement among theorists about the causes and consequences of regional inequalities. Regional...

  6. Geology of Glacier National Park and the Flathead Region, Northwestern Montana

    Science.gov (United States)

    Ross, Clyde P.

    1959-01-01

    This report summarizes available data on two adjacent and partly overlapping regions in northwestern Montana. The first of these is Glacier National Park plus small areas east and west of the park. The second is here called, for convenience, the Flathead region; it embraces the mountains from the southern tip of Glacier Park to latitude 48 deg north and between the Great Plains on the east and Flathead Valley on the west. The fieldwork under the direction of the writer was done in 1948, 1949, 1950, and 1951, with some work in 1952 and 1953. The two regions together include parts of the Swan, Flathead, Livingstone, and Lewis Ranges. They are drained largely by branches of the Flathead River. On the east and north, however, they are penetrated by tributaries of the Missouri River and in addition by streams that flow into Canada. Roads and highways reach the borders of the regions; but there are few roads in the regions and only two highways cross them. The principal economic value of the assemblage of mountains described in the present report is as a collecting ground for snow to furnish the water used in the surrounding lowlands and as a scenic and wildlife recreation area. A few metallic deposits and lignitic coal beds are known, but these have not proved to be important and cannot, as far as can now be judged, be expected to become so. No oil except minor seeps has yet been found, and most parts of the two regions covered do not appear geologically favorable to the presence of oil in commercial quantities. The high, Hungry Horse Dam on which construction was in progress during the fieldwork now floods part of the Flathead region and will greatly influence the future of that region. The rocks range in age from Precambrian to Recent. The thickest units belong to the Belt series of Precambrian age, and special attention was paid to them. As a result, it is clear that at least the upper part of the series shows marked lateral changes within short distances. This fact

  7. [Ecological regionalization of national cotton fiber quality in China using GGE biplot analysis method].

    Science.gov (United States)

    Xu, Nai Yin; Jin, Shi Qiao; Li, Jian

    2017-01-01

    The distinctive regional characteristics of cotton fiber quality in the major cotton-producing areas in China enhance the textile use efficiency of raw cotton yarn by improving fiber quality through ecological regionalization. The "environment vs. trait" GGE biplot analysis method was adopted to explore the interaction between conventional cotton sub-regions and cotton fiber quality traits based on the datasets collected from the national cotton regional trials from 2011 to 2015. The results showed that the major cotton-producing area in China were divided into four fiber quality ecological regions, namely, the "high fiber quality ecological region", the "low micronaire ecological region", the "high fiber strength and micronaire ecological region", and the "moderate fiber quality ecological region". The high fiber quality ecological region was characterized by harmonious development of cotton fiber length, strength, micronaire value and the highest spinning consistency index, and located in the conventional cotton regions in the upper and lower reaches of Yangtze River Valley. The low micronaire value ecological region composed of the northern and south Xinjiang cotton regions was characterized by low micronaire value, relatively lower fiber strength, and relatively high spinning consistency index performance. The high fiber strength and micronaire value ecological region covered the middle reaches of Yangtze River Valley, Nanxiang Basin and Huaibei Plain, and was prominently characterized by high strength and micronaire value, and moderate performance of other traits. The moderate fiber quality ecological region included North China Plain and Loess Plateau cotton growing regions in the Yellow River Valley, and was characterized by moderate or lower performances of all fiber quality traits. This study effectively applied "environment vs. trait" GGE biplot to regionalize cotton fiber quality, which provided a helpful reference for the regiona-lized cotton growing

  8. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices

    Science.gov (United States)

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2017-01-01

    In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport. PMID:28744238

  9. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices

    Directory of Open Access Journals (Sweden)

    Jörg Schorer

    2017-07-01

    Full Text Available In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport.
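
    The core comparison in this study — human selections versus a regression fitted to motor-test scores, both judged against performance measured years later — can be sketched as a rank-correlation exercise. The snippet below is a hedged illustration with entirely invented data and a plain least-squares fit; the study's actual tests, scoring and model selection procedure are not reproduced.

    # Hypothetical sketch of scoring the prognostic validity of different "selectors"
    # (coach ranking, layperson ranking, motor-test regression) against later
    # performance. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    motor_tests = rng.normal(size=(n, 3))     # e.g. sprint, throw, agility (z-scores)
    later_performance = motor_tests @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, n)

    coach_rank = rng.permutation(n)           # stand-in selections; rank 0 = judged most talented
    layperson_rank = rng.permutation(n)

    # "Best model-fit": ordinary least squares from motor tests to later performance
    # (fit in-sample here purely for brevity).
    design = np.column_stack([np.ones(n), motor_tests])
    beta, *_ = np.linalg.lstsq(design, later_performance, rcond=None)
    model_pred = design @ beta

    def spearman(a, b):
        """Spearman rank correlation without SciPy (assumes no ties)."""
        ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
        return np.corrcoef(ra, rb)[0, 1]

    print("coach vs outcome:     ", round(spearman(-coach_rank, later_performance), 2))
    print("layperson vs outcome: ", round(spearman(-layperson_rank, later_performance), 2))
    print("model vs outcome:     ", round(spearman(model_pred, later_performance), 2))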

  11. New Regional and Global HFC Projections and Effects of National Regulations and Montreal Protocol Amendment Proposals

    Science.gov (United States)

    Velders, G. J. M.

    2015-12-01

    Hydrofluorocarbons (HFCs) are used as substitutes for ozone-depleting substances that are being phased out globally under Montreal Protocol regulations. New global scenarios of HFC emissions reach 4.0-5.3 GtCO2-eq yr-1 in 2050; the projected growth from 2015 to 2050 corresponds to 9% to 29% of that projected for CO2 over the same period. New baseline scenarios are formulated for 10 HFC compounds, 11 geographic regions, and 13 use categories. These projections are the first to comprehensively assess production and consumption of individual HFCs in multiple use sectors and geographic regions with emission estimates constrained by atmospheric observations. In 2050, as a percentage of global HFC emissions, China (~30%), India and the rest of Asia (~25%), the Middle East and northern Africa (~10%), and the USA (~10%) are the principal source regions, and refrigeration and stationary air conditioning are the major use sectors. National regulations to limit HFC use have been adopted recently in the European Union, Japan and USA, and four proposals were submitted in 2015 to amend the Montreal Protocol to substantially reduce growth in HFC use. Calculated baseline emissions are reduced by 90% in 2050 by implementing the North America Montreal Protocol amendment proposal. Global adoption of technologies required to meet national regulations would be sufficient to reduce 2050 baseline HFC consumption by more than 50% of that achieved with the North America proposal for most developed and developing countries. The new HFC scenarios and effects of national regulations and Montreal Protocol amendment proposals will be presented.

  12. National and Regional Representativeness of Hospital Emergency Department Visit Data in the National Syndromic Surveillance Program, United States, 2014

    Science.gov (United States)

    Coates, Ralph J.; Pérez, Alejandro; Baer, Atar; Zhou, Hong; English, Roseanne; Coletta, Michael; Dey, Achintya

    2016-01-01

    Objective We examined the representativeness of the nonfederal hospital emergency department (ED) visit data in the National Syndromic Surveillance Program (NSSP). Methods We used the 2012 American Hospital Association Annual Survey Database, other databases, and information from state and local health departments participating in the NSSP about which hospitals submitted data to the NSSP in October 2014. We compared ED visits for hospitals submitting data with all ED visits in all 50 states and Washington, DC. Results Approximately 60.4 million of 134.6 million ED visits nationwide (~45%) were reported to have been submitted to the NSSP. ED visits in 5 of 10 regions and the majority of the states were substantially underrepresented in the NSSP. The NSSP ED visits were similar to national ED visits in terms of many of the characteristics of hospitals and their service areas. However, visits in hospitals with the fewest annual ED visits, in rural trauma centers, and in hospitals serving populations with high percentages of Hispanics and Asians were underrepresented. Conclusions NSSP nonfederal hospital ED visit data were representative for many hospital characteristics and in some geographic areas but were not very representative nationally and in many locations. Representativeness could be improved by increasing participation in more states and among specific types of hospitals. PMID:26883318

  13. Using caching and optimization techniques to improve performance of the Ensembl website

    Directory of Open Access Journals (Sweden)

    Smith James A

    2010-05-01

    Full Text Available Abstract Background The Ensembl web site has provided access to genomic information for almost 10 years. During this time the amount of data available through Ensembl has grown dramatically. At the same time, the World Wide Web itself has become a dramatically more important component of the scientific workflow and the way that scientists share and access data and scientific information. Since 2000, the Ensembl web interface has had three major updates and numerous smaller updates. These have largely been in response to expanding data types and valuable representations of existing data types. In 2007 it was realised that a radical new approach would be required in order to serve the project's future requirements, and development therefore focused on identifying suitable web technologies for implementation in the 2008 site redesign. Results By comparing the Ensembl website to well-known "Web 2.0" sites, we were able to identify two main areas in which cutting-edge technologies could be advantageously deployed: server efficiency and interface latency. We then evaluated the performance of the existing site using browser-based tools and Apache benchmarking, and selected appropriate technologies to overcome any issues found. Solutions included optimization of the Apache web server, introduction of caching technologies and widespread implementation of AJAX code. These improvements were successfully deployed on the Ensembl website in late 2008 and early 2009. Conclusions Web 2.0 technologies provide a flexible and efficient way to access the terabytes of data now available from Ensembl, enhancing the user experience through improved website responsiveness and a rich, interactive interface.
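
    Part of the reported speed-up comes from caching the results of expensive server-side work so that repeated requests can skip it. The sketch below shows that general idea as a small time-limited cache keyed by request parameters; it is a hedged Python illustration only, not Ensembl's actual Perl/Apache/memcached stack, and the render function, timings and the example gene identifier are merely stand-ins.

    # Minimal sketch of time-limited server-side caching of expensive page fragments.
    import time

    class TTLCache:
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self._store = {}              # key -> (expiry_timestamp, value)

        def get_or_compute(self, key, compute):
            now = time.time()
            hit = self._store.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]             # cache hit: skip the expensive work
            value = compute()
            self._store[key] = (now + self.ttl, value)
            return value

    def render_gene_view(gene_id):
        """Stand-in for an expensive render (database queries, drawing code)."""
        time.sleep(0.1)
        return f"<div>gene report for {gene_id}</div>"

    cache = TTLCache(ttl_seconds=60)
    for _ in range(3):
        start = time.time()
        html = cache.get_or_compute(("gene_view", "ENSG00000139618"),
                                    lambda: render_gene_view("ENSG00000139618"))
        print(f"{time.time() - start:.3f}s -> {html[:34]}...")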

  14. Student teams maneuver robots in qualifying match at regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    All four robots, maneuvered by student teams behind protective walls, converge on a corner of the playing field during qualifying matches of the 1999 Southeastern Regional robotic competition at Kennedy Space Center Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, as well as climb onto the platform (with flags) and raise the cache of pillows to a height of eight feet. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  15. INTEGRATION OF BUSINESS, EDUCATION AND SCIENCE AT THE REGIONAL LEVEL FOR IMPLEMENTING THE NATIONAL TECHNOLOGICAL INITIATIVE

    Directory of Open Access Journals (Sweden)

    Innara Lyapina

    2018-01-01

    Full Text Available Current world affairs show that the post-industrial stage of development of all mature world powers’ economies is followed by creation of a new development paradigm, which is based on the economy of knowledge, science achievements, innovations, global information and communication systems, and which leads to innovative economy formation. In the context of the national innovation economy formation in the Russian Federation, prerequisites are created for integrating the efforts of business, science and education representatives to develop, produce and market high-tech products which have significant economic or social potential. And this is not only the task announced by the Russian government, but also a natural process in the country’s economy, which contributes to the increase in the integration participants’ efficiency. The result of such integrated interaction of education, science and business consists in a synergistic effect through formation of an interactive cooperation model that involves the active use of combined knowledge, ideas, technologies and other resources during innovative projects implementation. At the same time, integration processes are diverse, complex and occur in each case taking into account the integrating parties’ activity specifics. Within this framework, the goal of the research is to characterize the impact of the education, science and business integration process, on the national technological initiative implementation in the country on the whole and to study the integrating experience of these entities at the regional level. In the course of the research, the stages of the Russian national innovation economy formation process have been studied; the role of education, science and business in the National Technological Initiative implementation has been characterized; it’s been proved that educational institutions are the key link in the integration process in the chain “education – science

  16. 75 FR 16635 - Refuge Specific Regulations; Public Use; Kodiak National Wildlife Refuge

    Science.gov (United States)

    2010-04-01

    ...) calls for us, in cooperation with the Alaska Department of Fish and Game, to develop and implement a... associated facilities (outhouse, meat cache). Tent camping is unrestricted on most of the Refuge. Camping in...

  17. Making the Business Case for Regional and National Water Data Collection

    Science.gov (United States)

    Pinero, E.

    2017-12-01

    Water-related risks are becoming more and more of a concern for organizations that either depend on water use or are responsible for water services provision. Yet this concern does not always translate into a business case to support large scale water data collection. One reason is that water demand varies across sectors and physical settings. There is typically no single parameter or reason for which a given entity would be interested in national or even regional scale data. Even for public sector entities, water issues are local and their jurisdiction does not span regional scale coverage. Therefore, to make the case for adequate data collection not only are technology and web platforms necessary, but one also needs a compelling business case. One way to make the case will involve raising awareness of the critical cross-cutting role of water such that sectors see the need for water data to support sustainability of other systems, such as energy, food, and resilience. Another factor will be understanding the full life cycle role of water, especially in the supply chain, and that there are many variables that drive water demand. Such an understanding will make clearer the need for more regional-scale understanding. This will begin to address the apparent catch-22 that there is a need for data to understand the scope of the challenge, but until the scope of the challenge is understood, there is no compelling business case to collect data. Examples such as the Alliance for Water Stewardship standard and the CEO Water Mandate Water Action Hub will be discussed to illustrate recent innovations in making a case for efficient collection of watershed scale and regional data.

  18. National Geo-Database for Biofuel Simulations and Regional Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    The goal of this project undertaken by GLBRC (Great Lakes Bioenergy Research Center) Area 4 (Sustainability) modelers is to develop a national capability to model feedstock supply, ethanol production, and biogeochemical impacts of cellulosic biofuels. The results of this project contribute to sustainability goals of the GLBRC; i.e. to contribute to developing a sustainable bioenergy economy: one that is profitable to farmers and refiners, acceptable to society, and environmentally sound. A sustainable bioenergy economy will also contribute, in a fundamental way, to meeting national objectives on energy security and climate mitigation. The specific objectives of this study are to: (1) develop a spatially explicit national geodatabase for conducting biofuel simulation studies; (2) model biomass productivity and associated environmental impacts of annual cellulosic feedstocks; (3) simulate production of perennial biomass feedstocks grown on marginal lands; and (4) locate possible sites for the establishment of cellulosic ethanol biorefineries. To address the first objective, we developed SENGBEM (Spatially Explicit National Geodatabase for Biofuel and Environmental Modeling), a 60-m resolution geodatabase of the conterminous USA containing data on: (1) climate, (2) soils, (3) topography, (4) hydrography, (5) land cover/ land use (LCLU), and (6) ancillary data (e.g., road networks, federal and state lands, national and state parks, etc.). A unique feature of SENGBEM is its 2008-2010 crop rotation data, a crucially important component for simulating productivity and biogeochemical cycles as well as land-use changes associated with biofuel cropping. We used the EPIC (Environmental Policy Integrated Climate) model to simulate biomass productivity and environmental impacts of annual and perennial cellulosic feedstocks across much of the USA on both croplands and marginal lands. We used data from LTER and eddy-covariance experiments within the study region to test the

  19. Preparation of the National Programme for the Spent Fuel and Radioactive Waste Management Taking Into Account Possibility of Potential Multinational/Regional Disposal Facilities Development

    International Nuclear Information System (INIS)

    Kegel, L.

    2016-01-01

    Conclusions: • Final disposal in a deep geological repository (national, regional or multinational) is planned: → Implementation of disposal after NPP closure (>2065). • The strategy principle of international cooperation: → National responsibility for radioactive waste and spent fuel management is considered in parallel with active participation in international regional efforts to make progress in connection to joint regional programmes on disposal. • Implementation is challenging but technically feasible. • Timely and appropriate “nesting” of multinational solutions into national plans. • Although a multinational repository is likely not ripe for development today, actions taken now can be important to increase the likelihood of its future development

  20. Innovation – a national priority, supported by the regional development agencies

    Directory of Open Access Journals (Sweden)

    Elena ENACHE

    2013-09-01

    Full Text Available The European Union is interested in the overall performance of the group of 27 and in national contributions to innovation. The target is to create an "Innovation Union" that aims to give entrepreneurs the support needed to transform innovative ideas into products and services, because the current pace has proved insufficient to reduce the gap between Europe and its main competitors. Nor can the competition with emerging countries be won without carrying out the provisions of the Europe 2020 Strategy. This paper addresses the Romanian vision of innovation as supported by the Regional Development Agencies, whose experience can be considered a best-practice model.

  1. Questions and Answers SOL-R3-13-00006: Region 3 - National Remedial Action Contracts / Multiple Award Competition

    Science.gov (United States)

    Region 3 - EPA is performing market research to determine if industry has the capability and capacity to perform the work, on a national level, as described in the attached draft Statement of Work/Performance Work Statement (SOW/PWS).

  2. Revised preliminary geologic map of the Rifle Quadrangle, Garfield County, Colorado

    Science.gov (United States)

    Shroba, R.R.; Scott, R.B.

    1997-01-01

    The Rifle quadrangle extends from the Grand Hogback monocline into the southeastern part of the Piceance basin. In the northeastern part of the map area, the Wasatch Formation is nearly vertical, and over a distance of about 1 km, the dip decreases sharply from about 70-85° to about 15-30° toward the southwest. No evidence of a fault in this zone of sharp change in dip is observed, but exposures in the Shire Member of the Wasatch Formation are poor, and few marker horizons that might demonstrate offset are distinct. In the central part of the map area, the Shire Member is essentially flat lying. In the south and southwest part of the map area, the dominant dip is slightly to the north, forming an open syncline that plunges gently to the northwest. Evidence for this fold also exists in the subsurface from drill-hole data. According to Tweto (1975), folding of the early Eocene to Paleocene Wasatch Formation along the Grand Hogback required an early Eocene age for the last phase of Laramide compression. We find the attitude of the Wasatch Formation to be nearly horizontal, essentially parallel to the overlying Anvil Points Member of the Eocene Green River Formation; therefore, we have no information that either confirms or disputes that early Eocene was the time of the last Laramide event. Near Rifle Gap in the northeast part of the map area, the Mesaverde Group locally dips about 10° less steeply than the overlying Wasatch Formation, indicating that not only had the formation of the Hogback monocline not begun by the time the Wasatch was deposited at this locality, but the underlying Mesaverde Group was locally tilted slightly toward the present White River uplift. Also, the basal part of the Atwell Gulch Member of the Wasatch Formation consists of fine-grained mudstones and siltstones containing sparse sandstone and rare conglomerates, indicating that the source of sediment was not from erosion of the adjacent Upper Cretaceous Mesaverde Group. The most likely source of

  3. Regional supply of outreach service and length of stay in psychiatric hospital among patients with schizophrenia: National case mix data analysis in Japan.

    Science.gov (United States)

    Niimura, Junko; Nakanishi, Miharu; Yamasaki, Syudo; Nishida, Atsushi

    2017-12-01

    Several clinical trials have demonstrated that linkage to an outreach service can prevent prolonged length of stay of patients at psychiatric hospitals. However, there has been no investigation of the association between length of stay in psychiatric hospital and regional supply of outreach services using national case mix data. The aim of this study was to clarify the relationship between length of stay in psychiatric hospital and regional supply of outreach services. We used data from the National Patient Survey in Japan, a nationally representative cross-sectional survey of inpatient care conducted every three years from 1996 to 2014. Data from 42,268 patients with schizophrenia who had been admitted to psychiatric hospitals were analyzed. After controlling for patient and regional characteristics, patients in regions with fewer visits for psychiatric nursing care at home had significantly longer length of stay in psychiatric hospitals. This finding implies that enhancement of the regional supply of outreach services would prevent prolonged length of stay in psychiatric hospitals. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The South American energy policies: regional problems and national logics; As politicas energeticas Sul-Americanas: problemas regionais e logicas nacionais

    Energy Technology Data Exchange (ETDEWEB)

    Le Prioux, Bruna [Centro de Pesquisa e Documentacao da America Latina (CREDAL) (France)

    2010-07-01

    The international energy context in the first decade of the 21st century can be described by the following points. First, the growing concerns with climatic changes and the greenhouse effect, whose main cause is the massive use of fossil fuels. Second, energy vulnerability, due to mistrust of the main hydrocarbon producers, to the increasing consumption of the so-called developing countries and to the idea of a possible end of oil reserves. And third, as a consequence of the last factor, intense speculation in the international market has increased oil and gas prices since 2005. In this context, each country tries to adapt to such changes in its own way. Beyond local solutions, South American countries have historic attempts at regional integration through energy, which can be presented as a complement to national policies. This research focuses on the study of the gas energy policy of some producer and consumer countries in South America, their choices and procedures at the national and international levels. Thus, the main goal of this article is to analyze how national energy policies affect the regional energy action of these South American countries. In order to answer this question, our goals are: (1) diagnosing the energy potentialities and disadvantages of each country; (2) identifying concepts related to energy questions; and (3) relating the two past steps to analyze the energy interaction in South America. The countries selected for this research are: Brazil, Argentina and Chile, due to their economic magnitude in South America and their intense energy consumption; and Bolivia and Venezuela, due to their energy reserves and surplus. The study of national energy systems was made through SWOT analysis (Strengths, Weaknesses, Opportunities and Threats), in order to obtain a synthetic diagnosis of the energy potentials and disadvantages of each country. Thereafter, we intersect this data with concepts as

  5. National Marine Fisheries Service Regions

    Data.gov (United States)

    Department of Homeland Security — The NOAA Coastal Services Center's Legislative Atlas is a regional geographic information system (GIS) that provides spatial data for state and federal coastal and...

  6. Use of survey data to define regional and local priorities for management on national wildlife refuges

    Science.gov (United States)

    John R. Sauer; Jennifer Casey; Harold Laskowski; Jan D. Taylor; Jane Fallon

    2005-01-01

    National Wildlife Refuges must manage habitats to support a variety of species that often have conflicting needs. To make reasonable management decisions, managers must know what species are priorities for their refuges and the relative importance of the species. Unfortunately, species priorities are often set regionally, but refuges must develop local priorities that...

  7. Twenty Years of Regional Safeguards: the ABACC System and the Synergy with the National Nuclear Material Control Systems

    International Nuclear Information System (INIS)

    Dias, Fabio C.; Palhares, Lilia C.; De Mello, Luiz A.; Vicens, Hugo E.; Maceiras, Elena; Terigi, Gabriel

    2011-01-01

    As a result of the nuclear integration between Brazil and Argentina, in July 1991 the Agreement for Peaceful Uses of the Nuclear Energy (Bilateral Agreement) was signed and the Brazilian Argentine Agency for Accountancy and Control of Nuclear Material (ABACC) was created [1]. The main role assigned to ABACC was the implementation and administration of the regional control system and the coordination with the International Atomic Energy Agency (IAEA) in order to apply safeguards to all nuclear material in all nuclear activities of Argentina and Brazil. In December 1991 the IAEA, ABACC, Argentina and Brazil signed the Quadripartite Agreement (INFCIRC/435) [2]. The agreement establishes obligations similar to those established by model INFCIRC/153 comprehensive agreements. The Bilateral Agreement establishes that the Parties should make available financial and technical capabilities to support ABACC activities. To meet this challenge, the National Systems had to improve their structure and capabilities. Through close interaction with the IAEA and ABACC, the national systems have been enriched by adopting new methodologies, implementing innovative safeguards approaches and providing specialized training to the regional inspectors. All of this also resulted in relevant technical improvements to the regional system as a whole. The approach of the two neighbors controlling each other increased confidence between the partners and permitted better knowledge of each other's capabilities. The recognized performance of the regional system in the implementation of innovative, efficient and credible safeguards measures increased the confidence of the international community in the implementation of nuclear safeguards in Argentina and Brazil. In this paper, twenty years after the creation of the ABACC System, the view of the Brazilian and Argentine National Authorities is presented. (authors)

  8. Proceedings of the workshop on national/regional energy-environmental modeling concepts, May 30-June 1, 1979

    Energy Technology Data Exchange (ETDEWEB)

    Ritschard, R.L.; Haven, K.F.; Ruderman, H.; Sathaye, J.

    1980-05-01

    The purpose of the workshop was to identify and evaluate approaches to regional economic and energy supply/demand forecasting that are best suited to assisting DOE in the assessment of environmental impacts of national energy policies. Specifically the DOE Office of Technology Impacts uses models to assess the impacts of technology change; to analyze differential impacts of various energy policies; and to provide an early-warning system of possible environmental constraints to energy development. Currently, OTI employs both a top-down model system to analyze national scenarios and a bottom-up assessment conducted from a regional perspective. A central theme of the workshop was to address the problem of how OTI should integrate the top-down and bottom-up approaches. The workshop was structured to use the experience of many fields of regional analysis toward resolving that problem. For the short-term, recommendations were suggested for improving the current OTI models, but most of the comments were directed toward the development of a new methodology. It was recommended that a core set of related models be developed that are modular, dynamic and consistent: they would require an inter-industry accounting framework; inter-regional linkages; and adequate documentation. Further, it was suggested that an advisory group be formed to establish the appropriate methodological framework of the model system. With regard to data used in any policy analysis model, it was recommended that OTI develop and maintain an integrated system of economic, environmental, and energy accounts that is coordinated with the statistical agencies that collect the data.

  9. Final matches of the FIRST regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    Four robots vie for position on the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Student teams, shown behind protective walls, play defense by taking away competitors' pillows and generally harassing opposing machines. Two of the robots have lifted their caches of pillows above the field, a movement which earns them points. Along with the volunteer referees, at the edge of the playing field, judges at right watch the action. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  10. Mutual Impacts of Geocaching and Natural Environment

    Directory of Open Access Journals (Sweden)

    Jiří Schneider

    2016-01-01

    Full Text Available The rising popularity of geocaching is linked to an increased risk of negative impacts on the natural environment. Based on that, this paper presents a possible approach to evaluating these impacts in the Landscape Protected Area Moravian Karst (Czech Republic) and in the Vrátna dolina valley (National Park Malá Fatra, Slovak Republic). Recreation and nature conservation have long been managed together in these areas, and geocaching has been steadily extending the range of recreational activities on offer. Therefore, it seems desirable to examine how geocaching affects the environment and, simultaneously, how topography or land cover influences the availability or difficulty of caches. 57 caches (i.e. one third of the total) were analyzed in the Moravian Karst and 11 caches in the Vrátna dolina valley. To assess impacts, our own classification of indicators has been suggested, such as cache attendance, environment attractiveness or visually detected impacts of geocaching on the natural environment. Our study revealed that the major risk lies primarily in geo-highways which – with respect to soil type, land cover and intensity of cache attendance – grow rather fast. Despite the local nature of the detected impacts, increased attention should be devoted to environmental care and specifically to regulation of attendance.

  11. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    International Nuclear Information System (INIS)

    2013-01-01

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results

  12. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-12-15

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results.

  13. Superfund Remedial Acquisition Framework SOL-R3-13-00006: Region 3 - National Remedial Action Contracts / Multiple Award Competition

    Science.gov (United States)

    Region 3 - EPA is performing market research to determine if industry has the capability and capacity to perform the work, on a national level, as described in the attached draft Statement of Work/Performance Work Statement (SOW/PWS).

  14. Initiatives for regional dialogue consideration of regional disarmament guidelines

    International Nuclear Information System (INIS)

    Marschik, R.

    1994-01-01

    The General Assembly of the United Nations adopted guidelines and recommendations for regional approaches to disarmament within the context of global security. The guidelines contain 52 principles on: the relationship between regional disarmament, arms limitation and global security; general guidelines and recommendations for regional disarmament efforts; possible ways and means to assist and implement these efforts; and the possible role of the United Nations in aiding these efforts. Experiences gained in Europe and the Near East are analysed in the framework of the situation in Northeast, South and Southeast Asia

  15. New Local, National and Regional Cereal Price Indices for Improved Identification of Food Insecurity

    Science.gov (United States)

    Brown, Molly E.; Tondel, Fabien; Thorne, Jennifer A.; Essam, Timothy; Mann, Bristol F.; Stabler, Blake; Eilerts, Gary

    2011-01-01

    Large price increases over a short time period can be indicative of a deteriorating food security situation. Food price indices developed by the United Nations Food and Agriculture Organization (FAO) are used to monitor food price trends at a global level, but largely reflect supply and demand conditions in export markets. However, reporting by the United States Agency for International Development (USAID)'s Famine Early Warning Systems Network (FEWS NET) indicates that staple cereal prices in many markets of the developing world, especially in surplus-producing areas, often have a delayed and variable response to international export market price trends. Here we present new price indices compiled for improved food security monitoring and assessment, and specifically for monitoring conditions of food access across diverse food insecure regions. We found that cereal price indices constructed using market prices within a food insecure region showed significant differences from the international cereals price, and had a variable price dispersion across markets within each marketshed. Using satellite-derived remote sensing information that estimates local production and the FAO Cereals Index as predictors, we were able to forecast movements of the local or national price indices in the remote, arid and semi-arid countries among the 38 examined. This work supports the need for improved decision-making about targeted aid and humanitarian relief by providing earlier warning of food security crises.
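
    The forecasting idea described here — predicting a local price index from the FAO Cereals Index and a satellite-derived local production estimate — can be illustrated with a simple lagged regression. The sketch below uses synthetic series and an ordinary least-squares fit; the three-month lag, the coefficients and the data are assumptions for illustration and do not reproduce the FEWS NET models.

    # Hedged sketch: forecast a local cereal price index from the FAO Cereals Index
    # (lagged) and a remote-sensing production proxy, using plain least squares.
    import numpy as np

    rng = np.random.default_rng(1)
    months = 60
    fao_index = 100 + np.cumsum(rng.normal(0.5, 2.0, months))        # export-market prices
    local_production = 100 + 10 * np.sin(np.arange(months) / 6.0)    # seasonal, NDVI-like proxy

    # Hypothetical local index: delayed response to FAO prices, dampened by good harvests.
    local_index = (80 + 0.6 * np.roll(fao_index, 3)
                   - 0.3 * local_production + rng.normal(0, 2.0, months))

    X = np.column_stack([np.ones(months), np.roll(fao_index, 3), local_production])
    train, test = slice(3, 48), slice(48, months)    # skip wrapped lag values, hold out 12 months
    beta, *_ = np.linalg.lstsq(X[train], local_index[train], rcond=None)
    forecast = X[test] @ beta

    rmse = np.sqrt(np.mean((forecast - local_index[test]) ** 2))
    print(f"out-of-sample RMSE over the last 12 months: {rmse:.2f} index points")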

  16. Party Organizational Change: Formal Distribution of Power between National and Regional Levels in Italian Political Parties (1991-2012

    Directory of Open Access Journals (Sweden)

    Enrico Calossi

    2015-03-01

    Full Text Available In the last 20 years an increasing number of scholars have centred their attention on the relationships between party national structures and party sub-national branches. A relevant part of the specialized literature has interpreted party change as the by-product of the denationalization of party politics. The aim of this contribution is to investigate to what extent eight relevant Italian parties have followed patterns of organizational change, after the reforms of the municipal, provincial and regional election systems; and the process of devolution of administrative powers begun during the Nineties. By focusing on two analytical dimensions (the level of involvement and the level of autonomy of party regional units), we analyse diachronically continuity and change in party formal organization, through an in-depth analysis of the statutes adopted from 1992 to 2012

  17. Forecasting Areas Vulnerable to Forest Conversion in the Tam Dao National Park Region, Vietnam

    Directory of Open Access Journals (Sweden)

    Duong Dang Khoi

    2010-04-01

    Full Text Available Tam Dao National Park (TDNP) is a remaining primary forest that supports some of the highest levels of biodiversity in Vietnam. Forest conversion due to illegal logging and agricultural expansion is a major problem that is hampering biodiversity conservation efforts in the TDNP region. Yet, areas vulnerable to forest conversion are unknown. In this paper, we predicted areas vulnerable to forest changes in the TDNP region using multi-temporal remote sensing data and a multi-layer perceptron neural network (MLPNN) with a Markov chain model (MLPNN-M). The MLPNN-M model predicted increasing pressure in the remaining primary forest within the park as well as on the secondary forest in the surrounding areas. The primary forest is predicted to decrease from 18.03% in 2007 to 15.10% in 2014 and 12.66% in 2021. Our results can be used to prioritize locations for future biodiversity conservation and forest management efforts. The combined use of remote sensing and spatial modeling techniques provides an effective tool for monitoring the remaining forests in the TDNP region.
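
    The "M" in MLPNN-M supplies the quantity of change: class shares at a future date are obtained by multiplying the current shares by a transition matrix estimated between two observed land-cover maps, while the neural network allocates that change spatially. The sketch below shows only the Markov step; the class list, the 2007 shares and all transition probabilities are invented for illustration and are not the TDNP estimates.

    # Markov-chain projection of land-cover class shares (the quantity-of-change
    # component of an MLPNN-M style model). All numbers are hypothetical.
    import numpy as np

    classes = ["primary_forest", "secondary_forest", "agriculture", "other"]

    # P[i, j] = probability that a pixel in class i at time t is in class j one step later.
    P = np.array([
        [0.92, 0.05, 0.02, 0.01],
        [0.01, 0.90, 0.07, 0.02],
        [0.00, 0.03, 0.95, 0.02],
        [0.00, 0.01, 0.02, 0.97],
    ])
    assert np.allclose(P.sum(axis=1), 1.0)

    shares_2007 = np.array([0.18, 0.32, 0.40, 0.10])   # hypothetical area fractions
    shares_2014 = shares_2007 @ P                      # one 7-year step
    shares_2021 = shares_2014 @ P                      # two steps

    for name, a, b, c in zip(classes, shares_2007, shares_2014, shares_2021):
        print(f"{name:>16}: 2007={a:.1%}  2014={b:.1%}  2021={c:.1%}")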

  18. Deep Sea Coral National Observation Database, Northeast Region

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The national database of deep sea coral observations. Northeast version 1.0. * This database was developed by the NOAA NOS NCCOS CCMA Biogeography office as part of...

  19. National and Regional Impacts of Increasing Non-Agricultural Market Access by Developing Countries – the Case of Pakistan

    OpenAIRE

    Butt, Muhammad Shoaib; Bandara, Jayatilleke S.

    2008-01-01

    The US, the EU, Brazil and India met in Germany in June 2007 with a view to bridging differences between developed and developing countries on the Doha Round of trade negotiations. However, the talks broke down because of disagreement on the intertwined issues of agricultural protection and Non-Agricultural Market Access (NAMA). This study uses the first regional computable general equilibrium (CGE) model of Pakistan to evaluate the national and regional impacts of increasing NAMA as per two ...

  20. Regional Centres for Space Science and Technology Education and ICG Information Centres affiliated to the United Nations

    Science.gov (United States)

    Gadimova, S.; Haubold, H. J.

    2009-06-01

    Based on resolutions of the United Nations General Assembly, Regional Centres for Space Science and Technology Education were established in India, Morocco, Nigeria, Brazil and Mexico. Simultaneously, education curricula were developed for the core disciplines of remote sensing, satellite communications, satellite meteorology, and space and atmospheric science. This paper provides a brief summary of the status of the operation of the regional centres with a view to using them as information centres of the International Committee on Global Navigation Satellite Systems (ICG), and draws attention to their educational activities.

  1. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  2. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
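
    The key data structure in these two records is the per-pixel histogram of surface normals (the S-NDF) and its screen-space mipmap (S-MIP), which lets coarser zoom levels be shaded consistently without touching the particle data again. The sketch below is a hedged CPU-side illustration in NumPy: normals are binned into six coarse direction bins per pixel and one mip level is built by averaging 2x2 pixel blocks. The bin layout, resolution and particle normals are invented; this is not the paper's GPU implementation.

    # Toy S-NDF / S-MIP construction: per-pixel normal histograms plus one mip level.
    import numpy as np

    H, W, BINS = 8, 8, 6                        # tiny framebuffer, 6 coarse direction bins
    axes = np.array([[ 1, 0, 0], [-1, 0, 0],    # +x, -x
                     [ 0, 1, 0], [ 0, -1, 0],   # +y, -y
                     [ 0, 0, 1], [ 0, 0, -1]], dtype=float)   # +z, -z

    def bin_index(normal):
        """Assign a unit normal to the axis bin it is most aligned with."""
        return int(np.argmax(axes @ normal))

    rng = np.random.default_rng(2)
    sndf = np.zeros((H, W, BINS))
    for _ in range(5000):                       # splat synthetic particle normals
        px, py = rng.integers(0, W), rng.integers(0, H)
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        sndf[py, px, bin_index(n)] += 1.0

    # Normalize to per-pixel distributions, then average 2x2 blocks for mip level 1.
    sndf /= np.clip(sndf.sum(axis=2, keepdims=True), 1e-9, None)
    smip1 = sndf.reshape(H // 2, 2, W // 2, 2, BINS).mean(axis=(1, 3))

    # Re-lighting a pixel now reduces to a weighted sum of per-bin BRDF responses.
    print("full-res S-NDF:", sndf.shape, " mip level 1:", smip1.shape)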

  3. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and remarkably science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments to successfully overcome the problems associated to the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)
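
    The Task Queue described here lets execution agents share a distributed hash table (DHT) so that any agent can locate a cached input or a matching task without a central matchmaker. The placement idea underneath that — hashing task or dataset keys onto a ring of agents — is sketched below as a hedged illustration; the agent names and dataset keys are invented and this is not the Task Queue source code.

    # Consistent-hashing sketch: map dataset/task keys onto execution agents so that
    # every agent can compute, locally, who should cache or run what.
    import bisect
    import hashlib

    def h(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, agents, replicas=50):
            self._ring = sorted((h(f"{a}#{i}"), a) for a in agents for i in range(replicas))
            self._keys = [k for k, _ in self._ring]

        def owner(self, key: str) -> str:
            """Agent responsible for the task or cached dataset identified by `key`."""
            idx = bisect.bisect(self._keys, h(key)) % len(self._keys)
            return self._ring[idx][1]

    ring = HashRing(["agent-01", "agent-02", "agent-03", "agent-04"])
    for dataset in ["run2012/block-0001", "run2012/block-0002", "mc/sample-17"]:
        print(f"{dataset:>22} -> cached on {ring.owner(dataset)}")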

  4. Reconstruction of national distribution of indoor radon concentration in Russia using results of regional indoor radon measurement programs

    International Nuclear Information System (INIS)

    Yarmoshenko, I.; Malinovsky, G.; Vasilyev, A.; Zhukovsky, M.

    2015-01-01

    The aim of the paper is a reconstruction of the national distribution and estimation of the arithmetic average indoor radon concentration in Russia using the data of official annual 4-DOZ reports. Annual 4-DOZ reports summarize results of radiation measurements in 83 regions of the Russian Federation. Information on more than 400 000 indoor radon measurements includes the average indoor radon isotopes equilibrium equivalent concentration (EEC) and number of measurements by regions and by three main types of houses: wooden, one-storey non-wooden, and multi-storey non-wooden houses. To reconstruct the national distribution, an all-Russian model sample was generated by integration of sub-samples created using the results of each annual regional program of indoor radon measurements in each type of buildings. According to the indoor radon concentration distribution reconstruction, the all-Russian average indoor radon concentration is 48 Bq/m³. Average indoor radon concentration by region ranges from 12 to 207 Bq/m³. The 95th percentile of the distribution is reached at an indoor radon concentration of 160 Bq/m³. - Highlights: • Reconstruction of indoor radon concentration distribution in Russia was carried out. • Data of official annual 4-DOZ reports were used. • All-Russian average indoor radon concentration is 48 Bq/m³. • The 95th percentile is 160 Bq/m³.
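
    The reconstruction step described here amounts to pooling regional sub-samples into one all-Russian model sample, with each region contributing in proportion to its number of measurements, and then reading the mean and percentiles off the pooled sample. The sketch below illustrates that mechanic with three invented regions drawn from lognormal distributions; the parameters and region names are assumptions, and the house-type stratification used in the paper is omitted.

    # Hedged sketch: pool simulated regional sub-samples (weighted by measurement
    # counts) and summarize the pooled national distribution.
    import numpy as np

    rng = np.random.default_rng(3)

    # region: (geometric mean Bq/m^3, geometric standard deviation, number of measurements)
    regions = {
        "region_A": (30.0, 2.1, 8000),
        "region_B": (55.0, 2.4, 3000),
        "region_C": (110.0, 2.0, 1200),
    }

    samples = [rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n)
               for gm, gsd, n in regions.values()]
    pooled = np.concatenate(samples)

    print(f"arithmetic mean: {pooled.mean():.0f} Bq/m^3")
    print(f"95th percentile: {np.percentile(pooled, 95):.0f} Bq/m^3")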

  5. SCONES: Secure Content-Oriented Networking for Exploring Space, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We envision a secure content-oriented internetwork as a natural generalization of the cache-and-forward architecture inherent in delay-tolerant networks. Using our...

  6. The profitability of energy investments. Impact studies made on regional and national economies; Energiainvestointien alue- ja kansantaloudellinen kannattavuustarkastelu

    Energy Technology Data Exchange (ETDEWEB)

    Karttunen, V.; Vanhanen, J.; Vehvilainen, I.; Pesola, A.; Oja, L. [Gaia Consulting Oy, Helsinki (Finland)

    2013-11-01

    This study has shed light on the question of whether it is economically justifiable to accelerate the replacement of imported fossil fuels with domestic fuels. This question has been evaluated through assessing the impacts of different energy production solutions on regional and national economies. The results are based on case studies where three types of energy investments have been examined: a 140 MW biomass gasification plant operated by Vaskiluodon Voima, a 162 MW multi-fuel combined heat and power (CHP) plant operated by Kuopio Energy, and a 0.8 MW wood chip boiler investment planned for the local heating network in Kaemmenniemi, Tampere. The calculations are based on a cash flow analysis where the aim is to assess the cash flows resulting from the investments to different stakeholders. The cash flow impacts are reported on four different levels: companies, private persons, municipalities and the government. In addition, impacts on the national current account have been assessed. The analysis of the biomass gasification plant in Vaskiluoto included a comparison of the actual realised investment and a situation where the investment would not have been made and where coal-fired energy production would have continued. With the current extremely low prices of carbon credits, the traditional company-specific analysis model, where only the impacts on the cash flows of the power plant company and its owners are considered, would lead to a situation where it would have been more profitable to continue the use of coal. However, from the regional and national economy points of view, this would be an unfavourable solution: the companies and private persons that form the domestic fuel value chain (wage earners and forest owners) would miss out on a 6.6 million EUR profit. Regarding the new multi-fuel CHP plant in Haapaniemi (operated by Kuopio Energy), the analysis has compared two extremes: energy production with a maximum share (70 %) of domestic wood-based fuel and the

  7. Human Factors in the Large: Experiences from Denmark, Finland and Canada in Moving Towards Regional and National Evaluations of Health Information System Usability

    Science.gov (United States)

    Kaipio, J.; Nieminen, M.; Hyppönen, H.; Lääveri, T.; Nohr, C.; Kanstrup, A. M.; Berg Christiansen, M.; Kuo, M.-H.; Borycki, E.

    2014-01-01

    Summary Objectives The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Methods Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. Results It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. Conclusion As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems. PMID:25123725

  8. Optimizing the Wood Value Chain in Northern Norway Taking Into Account National and Regional Economic Trade-Offs

    Directory of Open Access Journals (Sweden)

    Ulf Johansen

    2017-05-01

    Full Text Available As a consequence of past decades of extensive afforestation in Norway, mature forest volumes are increasing. National forestry policies call for sustainable and efficient resource usage and for increased regional processing. Regional policies seek to provide good conditions for such industries to be competitive and to improve regional value creation. We demonstrate how methods from operations research and regional macro-economics may complement each other to support decision makers in this process. The operations research perspective is concerned with finding an optimally designed wood value chain and an aggregated planning of its operations, taking a holistic perspective at the strategic-tactical level. Using Input-Output analysis methods based on statistics and survey data, regional macro-economics helps to estimate each industry actor's value creation and impact on society beyond immediate value chain activities. Combining these approaches in a common mathematical optimization model, a balance can be struck between industry/business and regional political interests. For a realistic case study from the northern part of coastal Norway, we explore this balance from several perspectives, investigating value chain profits, economic ripple effects and regional resource usage.
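
    The combined model described here trades off chain profit against regional value creation (estimated from Input-Output multipliers) inside one optimization. A toy version of that trade-off can be written as a small linear program, as sketched below; the outlets, profit figures, multipliers, capacities and the weight lambda are all invented, and SciPy is assumed to be available. This illustrates the balancing idea only, not the authors' model.

    # Toy LP: allocate roundwood to outlets, maximizing profit + lambda * regional value.
    from scipy.optimize import linprog

    # Decision variables: m3 of roundwood sent to [export, local sawmill, bioenergy].
    profit = [22.0, 30.0, 18.0]             # chain profit per m3 (EUR), invented
    regional_multiplier = [0.2, 1.4, 0.9]   # regional value added per m3 (EUR), invented
    lam = 0.5                               # weight on the regional objective

    # Maximize profit + lam * regional value  ==  minimize its negative.
    c = [-(p + lam * r) for p, r in zip(profit, regional_multiplier)]

    A_ub = [[1.0, 1.0, 1.0]]                # total harvest cannot exceed supply
    b_ub = [100_000.0]                      # m3 of available roundwood
    bounds = [(0, 60_000), (0, 50_000), (0, 40_000)]   # capacity limit per outlet

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    alloc = dict(zip(["export", "sawmill", "bioenergy"], res.x))
    print("allocation (m3):", {k: round(v) for k, v in alloc.items()})
    print("objective value (EUR):", round(-res.fun))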

  9. A province of many eyes – Rear window and caché: when the city discloses secrets through the cinema

    Directory of Open Access Journals (Sweden)

    Eliana Kuster

    2009-06-01

    Full Text Available In the city, all people see. In the city, all people are seen. The look and its related questions – what to see, how to see, how to interpret what is seen – has been one of the central questions of urban space since the nineteenth century, with the growth of cities and the phenomenon of the multitude. The look therefore becomes crucial to the urban man, who seeks to recognize in this other – the stranger – the signals of friendship or danger. This importance of the look in the city is investigated in this essay through two films: Rear Window, Alfred Hitchcock (1954), and Caché, Michael Haneke (2005). In the first film, the characters look at the city. In the other, they are seen by the city. In the two films, we have the extremes of the same process: social life transformed into spectacle. And the cinema plays one of its main functions: the construction of representations of human lives in the city.

  10. The National and Regional Prevalence Rates of Disability, Type, of Disability and Severity in Saudi Arabia-Analysis of 2016 Demographic Survey Data.

    Science.gov (United States)

    Bindawas, Saad M; Vennu, Vishal

    2018-02-28

    The prevalence of disability varies between countries, ranging from less than 1% to as much as 30%; the estimated global disability prevalence is about 15%. However, the current estimates of disability, its types, and its severity in Saudi Arabia are unknown. The objective of this study is therefore to estimate national and regional prevalence rates of any disability, types of disability, and their severity among the Saudi population. Data on disability status were extracted from the national demographic survey conducted in 2016, as reported by the General Authority for Statistics, Saudi Arabia (N = 20,064,970). Prevalence rates per 100,000 population of any disability, type of disability, and its severity were calculated at the national level and in all 13 regions. Of the 20,064,970 Saudi citizens surveyed, 667,280 reported disabilities, corresponding to a prevalence rate of 3,326 per 100,000 population (3.3%). Individuals aged 60 years and above (11,014 per 100,000) and males (3,818 per 100,000) had higher prevalence rates of disability than females (2,813 per 100,000). The Tabuk region had the highest rate of reported disability, at 4.3%. The prevalence rates of extreme disabilities in mobility and sight were highest in the Madinah (57,343) and Northern Border (41,236) regions, respectively. In Saudi Arabia, more than half a million citizens (1 out of every 30 individuals) reported a disability during 2016. Higher prevalence rates of disability were seen among those aged 60 years and above and among males. Targeted efforts are required at the national and regional levels to expand and improve rehabilitation and social services for all people with disabilities.
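
    The rates reported above are simple prevalence rates per 100,000 population. As a sanity check, the short Python sketch below reproduces the national figure from the counts quoted in the abstract (667,280 reported disabilities out of 20,064,970 surveyed citizens); the helper function is a hypothetical illustration, not part of the survey's methodology.

    ```python
    def prevalence_per_100k(cases: int, population: int) -> float:
        """Prevalence rate per 100,000 population."""
        return cases / population * 100_000

    # Counts quoted in the abstract.
    reported_disabilities = 667_280
    surveyed_population = 20_064_970

    rate = prevalence_per_100k(reported_disabilities, surveyed_population)
    print(f"{rate:.0f} per 100,000 ({rate / 1000:.1f}%)")
    # -> roughly 3326 per 100,000, i.e. about 3.3%, matching the reported national figure.
    ```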

  11. U.S. National and regional impacts nuclear plant life extension

    International Nuclear Information System (INIS)

    Makovick, L.; Fletcher, T.; Harrison, D.L.

    1987-01-01

    The purpose of this study was to evaluate the economic impacts of nuclear plant life extension at the national and regional levels. Nuclear generating capacity is expected to reach 104 gigawatts (119 units) in the 1994-1995 period. Nuclear units of the 1970 to 1980 vintage are expected to account for 96% of nuclear capacity. As operating licenses expire, a precipitous decline in nuclear capacity results, with an average of 5 gigawatts of capacity lost each year from 2010 to 2030. Without life extension, 95% of all nuclear capacity is retired between the years 2010 and 2030. Even with historically slow growth in electric demand and extensive fossil plant life extension, the need for new generating capacity in the 2010-2030 period is eight times greater than installed nuclear capacity. Nuclear plant life extension costs and benefits were quantified under numerous scenarios using the DRI Electricity Market Model. Under a wide range of economic assumptions and investment requirements, nuclear plant life extension resulted in a net benefit to electricity consumers. The major source of net benefits from nuclear plant life extension is the displacement of fossil-fired generating sources. In the most likely case, nuclear plant life extension provides a net savings of $200 billion through the year 2030. Regions with a large nuclear capacity share, newer nuclear units and relatively higher costs of alternative fuels benefit the most from life extension. This paper also discusses the importance of regulatory policies for nuclear plant life extension.
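
    The study's benefit estimates come from the DRI Electricity Market Model, which is not reproduced here. Purely as an illustration of the kind of fuel-displacement arithmetic involved, the sketch below compares the discounted running cost of an extended nuclear unit against the fossil generation it would displace; every number and name in it is an invented placeholder, not a figure from the study.

    ```python
    def npv(cashflows, rate):
        """Net present value of a list of annual cashflows (year 0 first)."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # Hypothetical single-unit example (all values are illustrative placeholders).
    capacity_mw = 1000
    capacity_factor = 0.80
    mwh_per_year = capacity_mw * capacity_factor * 8760

    nuclear_cost_per_mwh = 25.0   # running cost of the extended nuclear unit
    fossil_cost_per_mwh = 45.0    # cost of the displaced fossil-fired generation
    extension_years = 20
    refurbishment_cost = 500e6    # up-front life-extension investment
    discount_rate = 0.05

    annual_saving = mwh_per_year * (fossil_cost_per_mwh - nuclear_cost_per_mwh)
    cashflows = [-refurbishment_cost] + [annual_saving] * extension_years

    print(f"annual fuel-displacement saving: ${annual_saving / 1e6:.0f}M")
    print(f"net benefit of life extension:   ${npv(cashflows, discount_rate) / 1e6:.0f}M")
    ```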

  12. Peculiarities of Perception of Nationalism at the Regional Level (Based on the Materials of Empirical Research in the Saratov Region

    Directory of Open Access Journals (Sweden)

    Vilkov Aleksandr A.

    2016-06-01

    Full Text Available The article analyses data obtained through a mass questionnaire survey of residents of the Saratov region in 2015. The authors examined citizens’ perceptions of nationalism, the links between respondents’ political views, political engagement and political participation, and the influence of political parties on the expression of nationalist sentiment in society. Special attention is paid to the spread of nationalist attitudes among different population groups in connection with the negative attitude of some foreign countries to the actions of Russia in Ukraine and the reunification of Crimea with Russia; with the aggravation of the economic crisis and the sanctions imposed by the European Union and the United States, which have lowered the standard of living of the majority of Russian citizens; and with the continuing influx of migrants, which increases the likelihood of xenophobia and radicalism. The authors conclude that an ambiguous interpretation of nationalism dominates in the minds of Russian citizens. On the one hand, nationalism is interpreted as a negative and dangerous phenomenon associated with the superiority of one ethnic group over others. On the other hand, it is understood as the strengthening of state nationalism, largely defined by patriotic feelings of love for one’s country in a globalized world, by opposition to any reduction of sovereignty, and by Russia’s active position in the fight against ISIL. There is a trend of decreasing ethnic nationalist and xenophobic attitudes in the face of external challenges and threats. However, this does not mean that the values of tolerance have been consciously established in the political culture of those groups of Russians who tend to manifest xenophobia; rather, there is a situational transfer of the rejection of the “other” onto the external environment.

  13. Public health and health promotion capacity at national and regional level: a review of conceptual frameworks

    Directory of Open Access Journals (Sweden)

    Christoph Aluttis

    2014-04-01

    Full Text Available The concept of capacity building for public health has gained much attention during the last decade. National as well as international organizations increasingly focus their efforts on capacity building to improve performance in the health sector. During the past two decades, a variety of conceptual frameworks have been developed that describe relevant dimensions of public health capacity. Notably, these frameworks differ in design and conceptualization. This paper therefore reviews the existing conceptual frameworks and integrates them into one framework containing the most relevant dimensions of public health capacity at the country or regional level. A comprehensive literature search was performed to identify frameworks addressing public health capacity building at the national or regional level. We content-analysed these frameworks to identify the core dimensions of public health capacity. The dimensions were subsequently synthesized into a set of thematic areas to construct a conceptual framework describing the most relevant dimensions of capacity at the national or regional level. The systematic review resulted in the identification of seven core domains of public health capacity: resources, organizational structures, workforce, partnerships, leadership and governance, knowledge development, and country-specific context. These dimensions were used to construct a framework that describes the core domains in more detail. Our research shows that although there is no generally agreed-upon model of public health capacity, a number of key domains of public health and health promotion capacity recur consistently in existing frameworks, regardless of their geographical location or thematic area. As little work on the core concepts of public health capacity has yet taken place, this study adds value to the discourse by identifying these consistencies across existing frameworks and by synthesising them into a single overarching framework.

  14. Sources Sought / Request For Information SOL-R3-13-00006: Region 3 - National Remedial Action Contracts / Multiple Award Competition

    Science.gov (United States)

    Region 3 - EPA is performing market research to determine if industry has the capability and capacity to perform the work, on a national level, as described in the attached draft Statement of Work/Performance Work Statement (SOW/PWS).

  15. 32 CFR 1605.7 - Region Manager.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Region Manager. 1605.7 Section 1605.7 National... ORGANIZATION Region Administration § 1605.7 Region Manager. (a) Subject to the direction and control of the Director of Selective Service, the Region Manager of Selective Service for each region shall be in...

  16. The Regional Dimension

    DEFF Research Database (Denmark)

    Eskjær, Mikkel Fugl

    2013-01-01

    Global perspectives and national approaches have dominated studies of climate-change communication, reflecting the global nature of climate change as well as the traditional research focus on national media systems. In the absence of a global public sphere, however, transnational issue attention is largely dependent on regional media systems, yet the role this regional dimension plays has been largely overlooked. This article presents a comparative study of climate-change coverage in three geo-cultural regions, the Middle East, Scandinavia, and North America, and explores the link between global climate-change communication and regional media systems. It finds that regional variations in climate-change communication carry important communicative implications concerning perceptions of climate change's relevance and urgency.

  17. Successful integration efforts in water quality from the integrated Ocean Observing System Regional Associations and the National Water Quality Monitoring Network

    Science.gov (United States)

    Ragsdale, R.; Vowinkel, E.; Porter, D.; Hamilton, P.; Morrison, R.; Kohut, J.; Connell, B.; Kelsey, H.; Trowbridge, P.

    2011-01-01

    The Integrated Ocean Observing System (IOOS®) Regional Associations and Interagency Partners hosted a water quality workshop in January 2010 to discuss issues of nutrient enrichment and dissolved oxygen depletion (hypoxia), harmful algal blooms (HABs), and beach water quality. In 2007, the National Water Quality Monitoring Council piloted demonstration projects as part of the National Water Quality Monitoring Network (Network) for U.S. Coastal Waters and their Tributaries in three IOOS Regional Associations, and these projects are ongoing. Examples of integrated science-based solutions to water quality issues of major concern from the IOOS regions and Network demonstration projects are explored in this article. These examples illustrate instances where management decisions have benefited from decision-support tools that make use of interoperable data. Gaps, challenges, and outcomes are identified, and a proposal is made for future work toward a multiregional water quality project for beach water quality.

  18. Global, Regional, and National Burden of Rheumatic Heart Disease, 1990-2015.

    Science.gov (United States)

    Watkins, David A; Johnson, Catherine O; Colquhoun, Samantha M; Karthikeyan, Ganesan; Beaton, Andrea; Bukhman, Gene; Forouzanfar, Mohammed H; Longenecker, Christopher T; Mayosi, Bongani M; Mensah, George A; Nascimento, Bruno R; Ribeiro, Antonio L P; Sable, Craig A; Steer, Andrew C; Naghavi, Mohsen; Mokdad, Ali H; Murray, Christopher J L; Vos, Theo; Carapetis, Jonathan R; Roth, Gregory A

    2017-08-24

    Rheumatic heart disease remains an important preventable cause of cardiovascular death and disability, particularly in low-income and middle-income countries. We estimated global, regional, and national trends in the prevalence of and mortality due to rheumatic heart disease as part of the 2015 Global Burden of Disease study. We systematically reviewed data on fatal and nonfatal rheumatic heart disease for the period from 1990 through 2015. Two Global Burden of Disease analytic tools, the Cause of Death Ensemble model and DisMod-MR 2.1, were used to produce estimates of mortality and prevalence, including estimates of uncertainty. We estimated that there were 319,400 (95% uncertainty interval, 297,300 to 337,300) deaths due to rheumatic heart disease in 2015. Global age-standardized mortality due to rheumatic heart disease decreased by 47.8% (95% uncertainty interval, 44.7 to 50.9) from 1990 to 2015, but large differences were observed across regions. In 2015, the highest age-standardized mortality due to and prevalence of rheumatic heart disease were observed in Oceania, South Asia, and central sub-Saharan Africa. We estimated that in 2015 there were 33.4 million (95% uncertainty interval, 29.7 million to 43.1 million) cases of rheumatic heart disease and 10.5 million (95% uncertainty interval, 9.6 million to 11.5 million) disability-adjusted life-years due to rheumatic heart disease globally. We estimated the global disease prevalence of and mortality due to rheumatic heart disease over a 25-year period. The health-related burden of rheumatic heart disease has declined worldwide, but high rates of disease persist in some of the poorest regions in the world. (Funded by the Bill and Melinda Gates Foundation and the Medtronic Foundation.).

  19. Development of regional growth centres and impact on regional growth: A case study of Thailand’s Northeastern region

    Directory of Open Access Journals (Sweden)

    Nattapon Sang-arun

    2013-01-01

    Full Text Available This study investigates the spatial economic structure and inequality in Thailand at the national and regional levels, with a particular focus on the Northeastern region in the period from 1987 to 2007. The study has three main points: (1) examination of the economic structure and inequality at the national level and in the Northeastern region using the Theil index, (2) identification of regional growth centres and satellite towns using growth pole theory as a conceptual framework together with spatial interaction analysis, and (3) analysis of the relationship between regional growth centres and satellite towns with regard to the impact on growth and inequality. The results show that the Northeastern region is clearly the lagging region in the nation, by both gross domestic product (GDP) and gross regional product (GRP) per capita; it was therefore selected for the case study. Spatial analysis identified Nakhon Ratchasima, Khon Kaen, Udon Thani and Ubon Ratchathani as regional growth centres. Each of them has its own sphere of influence (satellite towns), and the combined areas of regional growth centres and their satellite towns are classified as sub-regions. The development of regional growth centres has a direct impact on sub-regional economic growth through economic and social relationships: urbanisation, industrial development, per capita growth, the number of higher educational institutes and so on. However, such growth correlates negatively with economic equality among the provinces in a sub-region, and the inequality trend is clearly on an upswing. This study suggests that industrial links between regional growth centres and their satellite towns should be improved so that regional growth centre development has a consistently desirable effect on both economic growth and equality, allowing the growth of regional growth centres to spread and lead to the development of their surrounding areas.
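
    The record above measures spatial inequality with the Theil index. The following minimal Python sketch shows the population-weighted (between-region) form of the index commonly used for this kind of regional GDP comparison; the province names and figures are invented for illustration and are not the study's data.

    ```python
    import math

    # Hypothetical provinces: (population, gross regional product) -- illustrative only.
    regions = {
        "province_a": (2.6e6, 9.0e9),
        "province_b": (1.8e6, 5.5e9),
        "province_c": (1.5e6, 3.2e9),
        "province_d": (1.1e6, 1.9e9),
    }

    total_pop = sum(p for p, _ in regions.values())
    total_grp = sum(y for _, y in regions.values())

    # Between-region Theil index: T = sum_i (Y_i / Y) * ln( (Y_i / Y) / (N_i / N) ).
    # T = 0 means every region has the same GRP per capita; larger T means more inequality.
    theil = sum(
        (y / total_grp) * math.log((y / total_grp) / (p / total_pop))
        for p, y in regions.values()
    )
    print(f"between-region Theil index: {theil:.4f}")
    ```

    Computed over provinces grouped into sub-regions, an index of this form can be tracked over time to show whether inequality among provinces is rising or falling, which is the kind of trend the abstract reports.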

  20. Human dynamics and forest management: a baseline assessment of the socioeconomic characteristics of the region surrounding the El Yunque National Forest

    Science.gov (United States)

    Kathleen McGinley

    2016-01-01

    In this paper, I examine the socioeconomic dynamics and human–environment interactions in the region surrounding the El Yunque National Forest (EYNF) in northeastern Puerto Rico and their implications for policy development and sustainable resource use. As part of a larger, comprehensive assessment of the conditions and trends of the EYNF and broader region, I...