WorldWideScience

Sample records for vm image caching

  1. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using a plugin for cache management. To investigate the capabilities and limits of this approach, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also produced. The tests revealed that the plugin interface introduces no significant performance overhead; however, the XRootD plugin performed worse than both the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server disk speed.

  2. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  3. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D.; Blomer, J. [CERN

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
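
    Both records above hinge on content hashes guaranteeing integrity. As a rough, hypothetical sketch in Python (not the actual CVMFS client logic, whose trust chain also involves digitally signed catalogs and whitelist checks), content addressing makes tampering detectable by simply rehashing:

```python
import hashlib

def verify_object(data: bytes, expected_hex: str) -> bool:
    # Content addressing: an object's name is its own hash, so any
    # modification in transit is detected by rehashing the payload.
    return hashlib.sha1(data).hexdigest() == expected_hex

blob = b"experiment software release"
object_name = hashlib.sha1(blob).hexdigest()  # content-addressed name

assert verify_object(blob, object_name)
assert not verify_object(blob + b"tampered", object_name)
```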

  4. A Multiresolution Image Cache for Volume Rendering

    Energy Technology Data Exchange (ETDEWEB)

    LaMar, E; Pascucci, V

    2003-02-27

    The authors discuss the techniques and implementation details of a shared-memory image caching system for volume visualization and iso-surface rendering. One goal of the system is to decouple image generation from image display: a set of impostors is maintained for interactive display, while the impostor imagery is produced by a set of parallel background processes. The system introduces a caching basis that is free of the gap/overlap artifacts of earlier caching techniques. Instead of placing impostors at fixed, pre-defined positions in world space, the technique adaptively places impostors relative to the camera viewpoint. The positions translate with the camera but stay aligned to the data; i.e., the positions translate, but do not rotate, with the camera. The viewing transformation is factored into a translation transformation and a rotation transformation. The impostor imagery is generated using just the translation transformation, and visible impostors are displayed using just the rotation transformation. Displayed image quality is improved by increasing the number of impostors, and the frequency at which impostors are re-rendered is improved by decreasing the number of impostors.
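
    The factorization described above can be sketched in a few lines of linear algebra. The following NumPy fragment is an illustrative reconstruction (names and matrix conventions are assumptions, not the paper's code): the view transform V splits into a translation T, used when rendering impostor imagery, and a rotation R, applied at display time.

```python
import numpy as np

def factor_view(rotation: np.ndarray, camera_pos: np.ndarray):
    """Build V = R @ T from a 3x3 rotation and a camera position."""
    T = np.eye(4)
    T[:3, 3] = -camera_pos   # translate world so the camera sits at the origin
    R = np.eye(4)
    R[:3, :3] = rotation     # rotate into the camera frame
    return R, T

theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
R, T = factor_view(Rz, np.array([1.0, 2.0, 3.0]))
V = R @ T                    # full viewing transformation

# Impostors are generated with T only (they translate with the camera)
# and displayed with R only (rotation is applied at display time).
p = np.array([4.0, 5.0, 6.0, 1.0])
assert np.allclose(V @ p, R @ (T @ p))
```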

  5. Optimal network proxy caching for image-rich contents

    Science.gov (United States)

    Yang, Xuguang; Ramchandran, Kannan

    1999-12-01

    This paper addresses optimizing cache allocation in a distributed image database system over computer networks. We consider progressive image file formats, and `soft' caching strategies, in which each image is allocated a variable amount of cache memory, in an effort to minimize the expected image transmission delay time. A simple and efficient optimization algorithm is proposed, and is generalized to include multiple proxies in a network scenario. With optimality proven, our algorithms are surprisingly simple, and are based on sorting the images according to a special priority index. We also present an adaptive cache allocation/replacement strategy that can be incorporated into web browsers with little computational overhead. Simulation results are presented.
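
    A minimal sketch of such a priority-index allocator: candidate layers of progressive images are ranked by expected benefit per cached byte and admitted greedily. The exact index used here (access probability divided by size) and the names are illustrative assumptions, not the paper's formula.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    image_id: str
    size: int           # bytes this layer occupies in cache
    access_prob: float  # probability the image is requested

def allocate(layers, capacity):
    # Sort by an assumed priority index (benefit per cached byte),
    # then admit greedily until the cache budget is exhausted.
    ranked = sorted(layers, key=lambda l: l.access_prob / l.size, reverse=True)
    chosen, used = [], 0
    for layer in ranked:
        if used + layer.size <= capacity:
            chosen.append(layer)
            used += layer.size
    return chosen

layers = [Layer("a", 100, 0.9), Layer("b", 100, 0.1), Layer("c", 50, 0.5)]
picked = allocate(layers, 150)   # "soft" allocation within a 150-byte budget
```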

  6. Enhanced Image Analysis Using Cached Mobile Robots

    Directory of Open Access Journals (Sweden)

    Kabeer Mohammed

    2012-11-01

    Full Text Available In the field of artificial intelligence, image processing plays a vital role in decision making. Nowadays mobile robots work as a network sharing a centralized database: all image inputs are compared against this database and a decision is made. In some cases the centralized database is on the other side of the globe, and mobile robots compare input images over a satellite link; the resulting delays in decision making can lead to catastrophe. This paper examines how to make image processing in mobile robots less time consuming and decision making faster, comparing currently employed search techniques with the optimum search method we propose. Mobile robots are extensively used in environments that are dangerous to human beings; in such situations quick decision making makes the difference between hit and miss, and can determine whether the day-to-day tasks performed by mobile robots succeed or fail.

  7. Authenticating cache.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Tyler Barratt; Urrea, Jorge Mario

    2012-06-01

    The aim of the Authenticating Cache architecture is to ensure that machine instructions in a Read Only Memory (ROM) are legitimate from the time the ROM image is signed (immediately after compilation) to the time they are placed in the cache for the processor to consume. The proposed architecture allows the detection of ROM image modifications during distribution or when it is loaded into memory. It also ensures that modified instructions will not execute in the processor-as the cache will not be loaded with a page that fails an integrity check. The authenticity of the instruction stream can also be verified in this architecture. The combination of integrity and authenticity assurance greatly improves the security profile of a system.
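
    A toy illustration of the fill-time check described above, in Python rather than hardware, with an HMAC standing in for the build-time digital signature; all names and the key are hypothetical:

```python
import hashlib
import hmac

SIGNING_KEY = b"build-time-secret"  # hypothetical signing key

def sign_page(page: bytes) -> bytes:
    """Tag computed at build time over the page's secure hash."""
    digest = hashlib.sha256(page).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def load_into_cache(page: bytes, tag: bytes, cache: list) -> bool:
    """Refuse the cache fill if the page fails its integrity check."""
    if not hmac.compare_digest(sign_page(page), tag):
        return False                # a modified page never reaches the processor
    cache.append(page)
    return True

cache = []
page = b"\x90\x90\xc3"              # some ROM instruction bytes
tag = sign_page(page)
assert load_into_cache(page, tag, cache)
assert not load_into_cache(page + b"!", tag, cache)  # tampered page rejected
```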

  8. Authenticating cache.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Tyler Barratt; Urrea, Jorge Mario

    2012-06-01

    The aim of the Authenticating Cache architecture is to ensure that machine instructions in a Read Only Memory (ROM) are legitimate from the time the ROM image is signed (immediately after compilation) to the time they are placed in the cache for the processor to consume. The proposed architecture allows the detection of ROM image modifications during distribution or when it is loaded into memory. It also ensures that modified instructions will not execute in the processor-as the cache will not be loaded with a page that fails an integrity check. The authenticity of the instruction stream can also be verified in this architecture. The combination of integrity and authenticity assurance greatly improves the security profile of a system.

  9. Performance Tests of CMSSW on the CernVM

    Science.gov (United States)

    Petek, Marko; Gowdy, Stephen

    2012-12-01

    The CERN Virtual Machine (CernVM) Software Appliance is a project developed at CERN with the goal of allowing users to execute the experiments' software easily on different operating systems. To achieve this it makes use of virtual machine images consisting of a JEOS (Just Enough Operating System) Linux image bundled with CVMFS, a distributed file system for software. This image can then be run with a suitable virtualizer on most available platforms. It also aggressively caches data on the local user's machine so that it can operate disconnected from the network. CMS wanted to compare the performance of the CMS software running in the virtualized environment with the same software running on a native Linux box. To that end, a series of tests was run in a controlled environment during 2010-2011. This work presents the results of those tests.

  10. Processor Cache

    NARCIS (Netherlands)

    Boncz, P.A.; Liu, L.; Özsu, M. Tamer

    2008-01-01

    To hide the high latencies of DRAM access, modern computer architectures feature a memory hierarchy that, besides DRAM, also includes SRAM cache memories, typically located on the CPU chip. Memory accesses first check these caches, which takes only a few cycles. Only if the needed data is not found...

  11. Porting of $\\mu$CernVM to AArch64

    CERN Document Server

    Scheffler, Felix

    2016-01-01

    $\\mu$CernVM is a virtual appliance that contains a stripped-down Linux OS connecting to a CernVM-Filesystem (CVMFS) repository that resides on a dedicated web server. In contrast to “usual” VMs, anything that is needed from this repository is downloaded only on demand, aggressively cached, and eventually released again. Currently, $\\mu$CernVM is distributed only for x86-64. Recently, ARM (the market leader in mobile computing) has started to enter the server market, which is still dominated by x86-64 infrastructure. In terms of performance per watt, however, AArch64 (the latest ARM 64-bit architecture) is a promising alternative. Facing millions of jobs to compute every day, it is thus desirable to have an HEP virtualisation solution for AArch64. In this project, $\\mu$CernVM was successfully ported to AArch64. Native and virtualised runtime performance was evaluated using ROOT6 and CMS benchmarks. It was found that VM performance is inferior to host performance across all tests. Respective numbers greatly vary between...

  12. Status and Roadmap of CernVM

    Science.gov (United States)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems; the image is less than 20 megabytes in size, and the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  13. Stack Caching Using Split Data Caches

    DEFF Research Database (Denmark)

    Nielsen, Carsten; Schoeberl, Martin

    2015-01-01

    In most embedded and general purpose architectures, stack data and non-stack data are cached together, meaning that writing to or loading from the stack may expel non-stack data from the data cache. Manipulation of the stack has a different memory access pattern than that of non-stack data, showing...

  14. A cache odyssey

    NARCIS (Netherlands)

    Bosch, Peter

    1994-01-01

    This thesis describes the effect of write caching on overall file system performance. It will show through simulations that extensive write caching greatly reduces average file read latency. Extensive write caching reduces the number of disk writes and minimizes disk read/write contention. By taking...

  15. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2016-01-01

    As many Tier 3 and some Tier 2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  16. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2017-01-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  17. A method cache for Patmos

    DEFF Research Database (Denmark)

    Degasperi, Philipp; Hepp, Stefan; Puffitsch, Wolfgang;

    2014-01-01

    For real-time systems we need time-predictable processors. This paper presents a method cache as a time-predictable solution for instruction caching. The method cache caches whole methods (or functions) and simplifies worst-case execution time analysis. We have integrated the method cache...

  18. Fast-earth: A global image caching architecture for fast access to remote-sensing data

    Science.gov (United States)

    Talbot, B. G.; Talbot, L. M.

    We introduce Fast-Earth, a novel server architecture that enables rapid access to remote sensing data. Fast-Earth subdivides a WGS-84 model of the earth into small 400 × 400 meter regions with fixed locations, called plats. The resulting 3,187,932,913 indexed plats are accessed with a rapid look-up algorithm. Whereas many traditional databases store large original images as a series by collection time, requiring long searches and slow access times for user queries, the Fast-Earth architecture enables rapid access. We have prototyped a system in conjunction with a Fast-Responder mobile app to demonstrate and evaluate the concepts. We found that new data could be indexed rapidly in about 10 minutes/terabyte, high-resolution images could be chipped in less than a second, and 250 kB image chips could be delivered over a 3G network in about 3 seconds. The prototype server implemented on a very small computer could handle 100 users, but the concept is scalable. Fast-Earth enables dramatic advances in rapid dissemination of remote sensing data for mobile platforms as well as desktop enterprises.
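
    The fixed-grid lookup can be sketched as pure arithmetic from latitude and longitude to a cell id, which is what makes access search-free. The constants, grid layout, and id-packing scheme below are illustrative assumptions, not Fast-Earth's actual design.

```python
import math

CELL_M = 400.0                      # target plat edge length in meters
EARTH_RADIUS_M = 6378137.0          # WGS-84 semi-major axis

def plat_index(lat_deg: float, lon_deg: float) -> int:
    """Map a (lat, lon) point to a fixed ~400 m x 400 m cell id."""
    meters_per_deg = math.pi * EARTH_RADIUS_M / 180.0
    row = int((lat_deg + 90.0) * meters_per_deg / CELL_M)
    # Shrink the number of columns with latitude so cells stay ~400 m wide.
    circumference = 2.0 * math.pi * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))
    cols = max(1, int(circumference / CELL_M))
    col = int((lon_deg + 180.0) / 360.0 * cols) % cols
    return row * 100_000_000 + col  # pack row and column into a single id
```

    Because the mapping is deterministic arithmetic, indexing new imagery and answering a query both reduce to computing an integer key, with no search over a time-ordered collection.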

  19. MultiCache: Multilayered Cache Implementation for I/O Virtualization

    Directory of Open Access Journals (Sweden)

    Jaechun No

    2016-01-01

    Full Text Available As virtual machine technology becomes an essential component of the cloud environment, VDI is receiving explosive attention from the IT market due to its advantages of easier software management, greater data protection, and lower expenses. However, I/O overhead is the critical obstacle to achieving high system performance in VDI. Reducing I/O overhead in a virtualization environment is not an easy task, because it requires scrutinizing multiple software layers, from guest to hypervisor and from hypervisor to host. In this paper, we propose a multilayered cache implementation, called MultiCache, which combines guest-level I/O optimization with hypervisor-level I/O optimization. The main objective of the guest-level optimization is to mitigate the I/O latency between the back-end shared storage and the guest VM by utilizing history logs of I/O activities in the VM. The hypervisor-level optimization, on the other hand, minimizes the latency caused by passing the I/O path to the host and by contention for the physical I/O device among VMs on the same host server. We measured the performance of MultiCache using the Postmark benchmark to verify its effectiveness.

  20. Composite Pseudo Associative Cache with Victim Cache for Mobile Processors

    Directory of Open Access Journals (Sweden)

    Lakshmi D. Bobbala

    2010-01-01

    Full Text Available Problem statement: Multi-core trends are becoming dominant, creating sophisticated and complicated cache structures. One of the easiest ways to design cache memory for increased performance is to double the cache size, but cache size is directly related to area and power consumption. Especially in mobile processors, simply increasing the cache size may significantly affect chip area and power. We propose a novel method to improve overall performance without increasing the cache size. Approach: We proposed a composite cache mechanism for L1 and L2 caches to maximize cache performance within a given cache size. This technique can be used without increasing cache size or set associativity, by emphasizing primary-way utilization and pseudo-associativity. We also added a victim cache to the composite pseudo-associative cache for further improvement. Results: Based on our experiments with the sampled SPEC CPU2006 workload, the proposed cache mechanism showed a remarkable reduction in cache misses without affecting the size. Conclusion/Recommendation: The performance improvement varies with benchmark, cache size and set associativity, but the proposed scheme is more sensitive to a cache-size increase than to a set-associativity increase.

  1. Composite Pseudo Associative Cache with Victim Cache for Mobile Processors

    Directory of Open Access Journals (Sweden)

    Lakshmi D. Bobbala

    2011-01-01

    Full Text Available Problem statement: Multi-core trends are becoming dominant, creating sophisticated and complicated cache structures. One of the easiest ways to design cache memory for increased performance is to double the cache size, but cache size is directly related to area and power consumption. Especially in mobile processors, simply increasing the cache size may significantly affect chip area and power. We propose a novel method to improve overall performance without increasing the cache size. Approach: We proposed a composite cache mechanism for L1 and L2 caches to maximize cache performance within a given cache size. This technique can be used without increasing cache size or set associativity, by emphasizing primary-way utilization and pseudo-associativity. We also added a victim cache to the composite pseudo-associative cache for further improvement. Results: Based on our experiments with the sampled SPEC CPU2006 workload, the proposed cache mechanism showed a remarkable reduction in cache misses without affecting the size. Conclusion/Recommendation: The performance improvement varies with benchmark, cache size and set associativity, but the proposed scheme is more sensitive to a cache-size increase than to a set-associativity increase.
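
    The victim-cache component mentioned in both versions of this record can be modeled in a few lines: a direct-mapped cache whose evicted lines drop into a small fully associative LRU buffer that is probed on a miss. The Python model below is only a behavioral sketch; the sizes and the address split are arbitrary assumptions.

```python
from collections import OrderedDict

class VictimCached:
    """Direct-mapped cache backed by a small fully associative victim cache."""

    def __init__(self, num_sets: int = 8, victim_slots: int = 4):
        self.sets = [None] * num_sets   # direct-mapped: one tag per set
        self.num_sets = num_sets
        self.victim = OrderedDict()     # LRU buffer of recently evicted lines
        self.victim_slots = victim_slots

    def access(self, addr: int) -> bool:
        """Return True on a hit in either the main or the victim cache."""
        idx, tag = addr % self.num_sets, addr // self.num_sets
        if self.sets[idx] == tag:
            return True                 # main-cache hit
        if (idx, tag) in self.victim:   # victim hit: swap the line back in
            self.victim.pop((idx, tag))
            self._evict_to_victim(idx)
            self.sets[idx] = tag
            return True
        self._evict_to_victim(idx)      # miss: fill, victimize the old line
        self.sets[idx] = tag
        return False

    def _evict_to_victim(self, idx: int) -> None:
        if self.sets[idx] is not None:
            self.victim[(idx, self.sets[idx])] = True
            if len(self.victim) > self.victim_slots:
                self.victim.popitem(last=False)  # LRU eviction

cache = VictimCached()
# Two addresses that conflict in the same set: the victim cache turns the
# second reference to each into a hit instead of a conflict miss.
assert cache.access(0) is False
assert cache.access(8) is False
assert cache.access(0) is True
```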

  2. Caching in Wireless Networks

    CERN Document Server

    Niesen, Urs; Wornell, Gregory

    2009-01-01

    We consider the problem of delivering content cached in a wireless network of $n$ nodes randomly located on a square of area $n$. In the most general form, this can be analyzed by considering the $2^n\\times n$-dimensional caching capacity region of the wireless network. We provide an inner bound on this caching capacity region and, in the high path-loss regime, a matching (in the scaling sense) outer bound. For large path-loss exponent, this provides an information-theoretic scaling characterization of the entire caching capacity region. Moreover, the proposed communication scheme achieving the inner bound shows that the problem of cache selection and channel coding can be solved separately without loss of order-optimality.

  3. Tag-Split Cache for Efficient GPGPU Cache Utilization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lingda; Hayes, Ari; Song, Shuaiwen; Zhang, Eddy

    2016-06-01

    Modern GPUs employ caches to improve memory system efficiency. However, a large amount of cache space is underutilized due to the irregular memory accesses and poor spatial locality common in GPU applications. Our experiments show that using smaller cache lines could improve cache space utilization, but doing so frequently incurs significant performance loss by introducing a large number of extra cache requests. In this work, we propose a novel cache design named tag-split cache (TSC) that enables fine-grained cache storage to address cache space underutilization while keeping the number of memory requests unchanged. TSC divides the tag into two parts to reduce storage overhead, and it supports multiple cache line replacements in one cycle.

  4. Cache Creek mercury investigation

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Cache Creek watershed is located in the California Coastal range approximately 100 miles north of San Francisco in Lake, Colusa and Yolo Counties. Wildlife...

  5. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase load on the international backbone and overload popular servers. Several solutions have been proposed, among which two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories remain controversial: in some studies strong coherency is far too expensive to be used in the Web context, while in others it can be maintained at low cost. The accuracy of these analyses depends heavily on how the document-update process is approximated. In this study, we compare some of the coherence methods proposed for Web caching and, among other points, study their side effects on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented here show differences in the outcome of simulating a Web cache depending on the workload used and on the probability distribution used to approximate updates to cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on cache performance.
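
    Weak coherency is typically implemented with freshness heuristics rather than server invalidations. The sketch below shows one classic form, an adaptive TTL proportional to the document's age at fetch time (often called the "Alex" heuristic); the parameter values are illustrative, and this is not necessarily one of the specific methods compared in the study.

```python
def is_fresh(fetched_at: float, last_modified: float, now: float,
             factor: float = 0.1, max_ttl: float = 86400.0) -> bool:
    """Weak-coherency check: a document that was already old when fetched
    is assumed stable, so it earns a longer time-to-live in the cache."""
    age_at_fetch = fetched_at - last_modified
    ttl = min(factor * age_at_fetch, max_ttl)   # adaptive TTL, capped at a day
    return (now - fetched_at) < ttl

# Document last modified at t=0 and fetched at t=1000 gets a TTL of 100:
assert is_fresh(1000.0, 0.0, 1050.0)        # still within its TTL
assert not is_fresh(1000.0, 0.0, 1200.0)    # TTL expired; must revalidate
```

    The trade-off the abstract describes follows directly: a larger `factor` means fewer revalidation requests (less traffic) but a higher chance of serving a stale document.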

  6. Android Virtual Machine (VM) Setup on Linux

    Science.gov (United States)

    2014-12-01

    Ken F Yu, Computational and Information Sciences Directorate, ARL.

  7. Investigating the role of the ventromedial prefrontal cortex (vmPFC) in the assessment of brands

    Directory of Open Access Journals (Sweden)

    Jose Paulo Santos

    2011-06-01

    Full Text Available The ventromedial prefrontal cortex (vmPFC) is believed to be important in everyday preference judgments, processing emotions during decision-making. However, there is still controversy in the literature regarding its participation. To further elucidate the contribution of the vmPFC to brand preference, we designed a functional magnetic resonance imaging (fMRI) study in which 18 subjects assessed positive, indifferent and fictitious brands. Both the period during and the period after the decision process were analyzed, in an attempt to unravel the temporal role of the vmPFC, using modeled and model-free fMRI analysis. Considering the periods before and after decision-making together, the vmPFC was activated when comparing positive with indifferent or fictitious brands. However, when the decision-making period was separated from the moment after the response, and especially for positive brands, the vmPFC was more active after the choice than during the decision process itself, challenging some of the existing literature. These results support the notion that the vmPFC may be unimportant in the decision stage of brand preference, questioning theories that place the vmPFC at the origin of such choices. Further studies are needed to investigate why the vmPFC seems to be involved in brand preference only after the decision process.

  8. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Science.gov (United States)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the...

  9. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D. [Fermilab; Bockelman, B. [Nebraska U.; Blomer, J. [CERN; Herner, K. [Fermilab; Levshina, T. [Fermilab; Slyz, M. [Fermilab

    2015-12-23

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached...

  10. What is CACHE?

    Science.gov (United States)

    Himmelblau, David M.; Hughes, Richard R.

    1980-01-01

    Describes various aspects of CACHE (Computer Aids for Chemical Engineering Education), a nonprofit organization whose purpose is to promote cooperation among universities, industry and government in the development and distribution of computer-related and/or technology-based educational aids for the chemical engineering profession. (CS)

  11. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorithms...

  12. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke;

    2014-01-01

    The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key takes only expected average t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works... can be easily made cache-oblivious but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two...

  13. Multi-Core Cache Hierarchies

    CERN Document Server

    Balasubramonian, Rajeev

    2011-01-01

    A key determinant of overall system performance and power dissipation is the cache hierarchy since access to off-chip memory consumes many more cycles and energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocat

  14. Data Caching for XML Query

    Institute of Scientific and Technical Information of China (English)

    SU Fei; CI Lin-lin; ZHU Li-ping; ZHAO Xin-xin

    2006-01-01

    In order to apply data caching techniques to extensible markup language (XML) database systems, the XML-cache system, which supports data caching for XQuery, is presented. According to the characteristics of XML, nested queries are normalized to facilitate subsequent operations. Based on the idea of an incomplete tree, and using the document type definition (DTD) schema tree and conditions from the normalized XQuery, the results of previous queries are maintained to answer new queries; at the same time, the remainder queries are sent to the XML database at the back end. Experimental results show that all applications supported by the XML database can use this technique to cache data for future use.

  15. Cooperative Proxy Caching for Wireless Base Stations

    Directory of Open Access Journals (Sweden)

    James Z. Wang

    2007-01-01

    Full Text Available This paper proposes a mobile cache model to facilitate the cooperative proxy caching in wireless base stations. This mobile cache model uses a network cache line to record the caching state information about a web document for effective data search and cache space management. Based on the proposed mobile cache model, a P2P cooperative proxy caching scheme is proposed to use a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to the network and geographic environment changes, to achieve efficient data search, data cache and data replication. Based on demand, the aggregate effect of data caching, searching and replicating actions by individual proxy servers automatically migrates the cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to flow and replicate the heads of network cache lines of web documents associated with a moving mobile host to the new base station during the mobile host handoff. These replicated cache line heads provide direct links to the cached web documents accessed by the moving mobile hosts in the previous base station, thus improving the mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.

  16. Cache-oblivious String Dictionaries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2006-01-01

    We present static cache-oblivious dictionary structures for strings which provide analogues of tries and suffix trees in the cache-oblivious model. Our construction takes as input either a set of strings to store, a single string for which all suffixes are to be stored, a trie, a compressed trie,...

  17. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    Caches are essential to bridge the gap between the high latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution times...

  18. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permutin...

  19. A Time-predictable Object Cache

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2011-01-01

    Static cache analysis for data allocated on the heap is practically impossible for standard data caches. We propose a distinct object cache for heap allocated data. The cache is highly associative to track symbolic object addresses in the static analysis. Cache lines are organized to hold single...... objects and individual fields are loaded on a miss. This cache organization is statically analyzable and improves the performance. In this paper we present the design and implementation of the object cache in a uniprocessor and chip-multiprocessor version of the Java processor JOP....

  20. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation...... of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures....

  1. Escalabilidade em servidores cache WWW

    OpenAIRE

    1999-01-01

    The enormous growth in popularity of the World Wide Web has motivated much research aimed at reducing the latency observed by users. Cache servers have proven to be a very important tool in pursuing this objective. Although the use of cache servers has contributed to reducing traffic on the Internet, the cooperation strategies used in composing groups (clusters) of caches normally bring a performance degradation to the servers, not being, for...

  2. Creep Resistance of VM12 Steel

    OpenAIRE

    Zieliński A.; Golański G.; Dobrzański J.; Sroka M.

    2016-01-01

    This article presents selected material characteristics of VM12 steel used for elements of boilers with super- and ultra-critical steam parameters. In particular, abridged and long-term creep tests with and without elongation measurement during testing and investigations of microstructural changes due to long-term impact of temperature and stress were carried out. The practical aspect of the use of creep test results in forecasting the durability of materials operating under creep conditions ...

  3. Performance Tests of CMSSW on the CernVM

    CERN Document Server

    Petek, Marko

    2012-01-01

    goal of allowing the execution of the experiment's software on different operating systems in an easy way for the users. To achieve this it makes use of Virtual Machine images consisting of a JEOS (Just Enough Operating System) Linux image, bundled with CVMFS, a distributed file system for software. This image can then be run with a proper virtualizer on most of the platforms available. It also aggressively caches data on the local user's machine so that it can operate disconnected from the network. CMS wanted to compare the performance of the CMS Software running in the virtualized environment with the same software running on a native Linux box. To answer this need a series of tests were made on a controlled environment during 2010-2011. This work presents the results of those tests.

  4. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    Embedded systems are computing systems for controlling and interacting with physical environments. Embedded systems with special timing constraints where the system needs to meet deadlines are referred to as real-time systems. In hard real-time systems, missing a deadline causes the system to fail... ...complicated and less precise. Time-predictable computer architectures provide solutions to this problem. As accesses to the data in caches are one source of timing unpredictability, devising methods for improving the time-predictability of caches is important. Stack data, with statically analyzable addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while...

  5. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less...... precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation...... of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures....

  6. A Novel Cache Resolution Technique for Cooperative Caching in Wireless Mobile Networks

    Directory of Open Access Journals (Sweden)

    Preetha Theresa Joy

    2013-05-01

    Full Text Available Cooperative caching is used in mobile ad hoc networks to reduce the latency perceived by the mobile clients while retrieving data and to reduce the traffic load in the network. Caching also increases the availability of data due to server disconnections. The implementation of a cooperative caching technique essentially involves four major design considerations: (i) cache placement and resolution, which decides where to place and how to locate the cached data, (ii) cache admission control, which decides the data to be cached, (iii) cache replacement, which makes the replacement decision when the cache is full, and (iv) consistency maintenance, i.e. maintaining consistency between the data in the server and the cache. In this paper we propose an effective cache resolution technique, which reduces the number of messages flooded into the network to find the requested data. The experimental results are promising based on the metrics studied.
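
    Of the four design considerations, cache replacement is the easiest to make concrete. A minimal sketch of LRU replacement in Python; LRU is a common baseline here, not necessarily the policy this paper evaluates:

```python
from collections import OrderedDict

class LRUCache:
    # Minimal least-recently-used replacement policy. In a cooperative
    # caching scheme, admission control would wrap put() and consistency
    # maintenance would wrap get(); only replacement is sketched here.

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order doubles as LRU order

    def get(self, key):
        if key not in self.entries:
            return None                # miss: caller resolves it elsewhere
        self.entries.move_to_end(key)  # hit: mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
```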

  7. Creep Resistance of VM12 Steel

    Directory of Open Access Journals (Sweden)

    Zieliński A.

    2016-09-01

    Full Text Available This article presents selected material characteristics of VM12 steel used for elements of boilers with super- and ultra-critical steam parameters. In particular, abridged and long-term creep tests with and without elongation measurement during testing and investigations of microstructural changes due to long-term impact of temperature and stress were carried out. The practical aspect of the use of creep test results in forecasting the durability of materials operating under creep conditions was presented. The characteristics of steels with regard to creep tests developed in this paper are used in assessment of changes in functional properties of the material of elements operating under creep conditions.

  8. Object caching in corvids: incidence and significance.

    Science.gov (United States)

    Jacobs, Ivo F; Osvath, Mathias; Osvath, Helena; Mioduszewska, Berenika; von Bayern, Auguste M P; Kacelnik, Alex

    2014-02-01

    Food caching is a paramount model for studying relations between cognition, brain organisation and ecology in corvids. In contrast, behaviour towards inedible objects is poorly examined and understood. We review the literature on object caching in corvids and other birds, and describe an exploratory study on object caching in ravens, New Caledonian crows and jackdaws. The captive adult birds were presented with an identical set of novel objects adjacent to food. All three species cached objects, which shows the behaviour not to be restricted to juveniles, food cachers, tool-users or individuals deprived of cacheable food. The pattern of object interaction and caching did not mirror the incidence of food caching: the intensely food caching ravens indeed showed highest object caching incidence, but the rarely food caching jackdaws cached objects to similar extent as the moderate food caching New Caledonian crows. Ravens and jackdaws preferred objects with greater sphericity, but New Caledonian crows preferred stick-like objects (similar to tools). We suggest that the observed object caching might have been expressions of exploration or play, and deserves being studied in its own right because of its potential significance for tool-related behaviour and learning, rather than as an over-spill from food-caching research. This article is part of a Special Issue entitled: CO3 2013. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol ...... for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms....

  10. Load Balancing Algorithm for Cache Cluster

    Institute of Scientific and Technical Information of China (English)

    刘美华; 古志民; 曹元大

    2003-01-01

    Based on the load definition of a cluster, requests are taken as the granularity for computing load and implementing load balancing in a cache cluster. First, the processing power of a cache node is studied from four aspects: network bandwidth, memory capacity, disk access rate and CPU usage. Then, the weighted load of a cache node is customized. Based on this, a load-balancing algorithm that can be applied to the cache cluster is proposed. Finally, Polygraph is used as a benchmarking tool to test the cache cluster using the load-balancing algorithm and a cache cluster with the cache array routing protocol, respectively. The results show the load-balancing algorithm can improve the performance of the cache cluster.
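
    The weighted load described above can be sketched as a linear combination of the four per-node metrics. The equal weights and the least-loaded dispatch rule below are illustrative assumptions; the paper customizes the weights per node:

```python
def node_load(bandwidth_util, memory_util, disk_util, cpu_util,
              weights=(0.25, 0.25, 0.25, 0.25)):
    # Weighted load of one cache node from four utilization metrics in
    # [0, 1]. Equal weights are placeholders, not the paper's values.
    metrics = (bandwidth_util, memory_util, disk_util, cpu_util)
    return sum(w * m for w, m in zip(weights, metrics))

def pick_node(loads):
    # Dispatch the next request to the least-loaded cache node.
    return min(range(len(loads)), key=lambda i: loads[i])
```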

  11. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, java script source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o

  12. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise...

  13. Mobility- Aware Cache Management in Wireless Environment

    Science.gov (United States)

    Kaur, Gagandeep; Saini, J. S.

    2010-11-01

    In infrastructure wireless environments, a base station provides communication links between mobile clients and remote servers. Placing a proxy cache at the base station is an effective way of managing the wireless Internet bandwidth. However, in the situation of non-uniform heavy traffic, requests of all the mobile clients in the service area of the base station may cause overload in the cache. If the proxy cache has to release some cache space for a new mobile client in the environment, an overload occurs. In this paper, we propose a novel cache management strategy to decrease the penalty of overloaded traffic on the proxy and to reduce the number of remote accesses by increasing the cache hit ratio. We predict the number of overloads ahead of time based on their history and adapt the cache to the heavy traffic so as to provide continuous and fair service to the current mobile clients and incoming ones. We have tested the algorithms over a real implementation of the cache management system in the presence of fault tolerance and security. In our cache replacement algorithm, the mobility of the clients, the predicted number of overloads, the size of the cached packets and their access frequencies are considered together. Performance results show that our cache management strategy outperforms the existing policies with fewer overloads and a higher cache hit ratio.
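
    A replacement policy that weighs mobility, packet size and access frequency together can be sketched as a scoring rule. The combination below, including the discount for departed clients, is an assumption for illustration, not the paper's actual formula:

```python
def eviction_priority(access_freq, size_bytes, client_present):
    # Lower value = evict sooner. Rarely accessed, large entries whose
    # mobile client has already left the cell are dropped first. The
    # exact combination here is an illustrative assumption.
    presence = 1.0 if client_present else 0.25
    return (access_freq * presence) / size_bytes

def choose_victim(entries):
    # entries maps a name to (access_freq, size_bytes, client_present).
    return min(entries, key=lambda k: eviction_priority(*entries[k]))
```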

  14. Caching in the Distributed Environment

    Directory of Open Access Journals (Sweden)

    Abhijit Gadkari

    2013-01-01

    Full Text Available The impact of cache is well understood in the system design domain. While the concept of cache is extensively utilized in the von Neumann architecture, the same is not true for the distributed-computing architecture. For example, consider a three-tiered Web-based business application running on a commercial RDBMS. Every time a new Web page loads, many database calls are made to fill the drop down lists on the page. Performance of the application is greatly affected by the unnecessary database calls and the network traffic between the Web server and the database server.
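
    The drop-down-list scenario above is the textbook case for an application-tier cache: the rows change rarely, so even a short time-to-live removes most database round trips. A hedged sketch, where `run_query` stands in for a real database call (an assumed callback, not a specific API):

```python
import time

class QueryCache:
    # Sketch of an application-tier cache for slowly-changing query
    # results such as drop-down list contents. A short TTL bounds
    # staleness; expired entries are refreshed on the next request.

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}   # sql text -> (expires_at, rows)

    def query(self, sql, run_query):
        now = time.monotonic()
        hit = self.store.get(sql)
        if hit and hit[0] > now:
            return hit[1]                       # cache hit: no database round trip
        rows = run_query(sql)                   # miss or expired: hit the database
        self.store[sql] = (now + self.ttl, rows)
        return rows
```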

  15. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    . Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes.In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows...... cryptography, which allows roaming users to selectively grant read and write access to others by entrusting them with respectively the public key or the private key....

  16. Secure VM for Monitoring Industrial Process Controllers

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, Dipankar [ORNL; Ali, Mohammad Hassan [University of Memphis; Abercrombie, Robert K [ORNL; Schlicher, Bob G [ORNL; Sheldon, Frederick T [ORNL; Carvalho, Marco [Institute of Human and Machine Cognition

    2011-01-01

    In this paper, we examine the biological immune system as an autonomic system for self-protection, which has evolved over millions of years, probably through an extensive process of redesigning, testing, tuning and optimization. The powerful information processing capabilities of the immune system, such as feature extraction, pattern recognition, learning, memory, and its distributive nature provide rich metaphors for its artificial counterpart. Our study focuses on building an autonomic defense system, using some immunological metaphors for information gathering, analyzing, decision making and launching threat and attack responses. In order to detect Stuxnet-like malware, we propose adding a secure VM (or dedicated host) to the SCADA network to monitor behavior and all software updates. This on-going research effort is not to mimic nature but to explore and learn valuable lessons useful for self-adaptive cyber defense systems.

  17. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analyses of caches which are hard...

  18. Dynamic cache resources allocation for energy efficiency

    Institute of Scientific and Technical Information of China (English)

    CHEN Li-ming; ZOU Xue-cheng; LEI Jian-ming; LIU Zheng-lin

    2009-01-01

    This article proposes a low-overhead, low-runtime mechanism, termed dynamic cache resources allocation (DCRA), which allocates to each application the cache resources it requires. The mechanism collects cache hit-miss information at runtime, then analyzes the information and decides how many cache resources should be allocated to the currently executing application. The amount of cache resources varies dynamically to reduce the total number of misses and the energy consumption. A study of several applications from SPEC2000 shows that significant energy saving is achieved with DCRA, with average savings of 39%.
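
    A DCRA-style controller can be sketched as a feedback rule over per-application hit/miss counters. The target ratio, hysteresis band and one-way step size below are illustrative assumptions rather than the paper's exact mechanism:

```python
def allocate_ways(hits, misses, ways, max_ways, min_ways=1, target=0.95):
    # Toy DCRA-style feedback rule: grow an application's cache share
    # when its hit ratio is below target, shrink it when comfortably
    # above. Thresholds and step size are illustrative assumptions.
    total = hits + misses
    if total == 0:
        return ways
    hit_ratio = hits / total
    if hit_ratio < target and ways < max_ways:
        return ways + 1          # too many misses: allocate one more way
    if hit_ratio > target + 0.04 and ways > min_ways:
        return ways - 1          # ample hits: release a way to save energy
    return ways
```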

  19. Lidar-based Hillshade, Cached (SEE NOTE), VT State Plane Meters

    Data.gov (United States)

    Vermont Center for Geographic Information — Lidar-based hillshade image service. Cached (SEE NOTE BELOW), in VT State Plane Meters spatial reference.NOTE: This hillshade service is being initially released...

  20. Distributed Caching in a Multi-Server Environment : A study of Distributed Caching mechanisms and an evaluation of Distributed Caching Platforms available for the .NET Framework

    OpenAIRE

    Herber, Robert

    2010-01-01

    This paper discusses the problems Distributed Caching can be used to solve and evaluates a couple of Distributed Caching Platforms targeting the .NET Framework. Basic concepts and functionality that is general for all distributed caching platforms is covered in chapter 2. We discuss how Distributed Caching can resolve synchronization problems when using multiple local caches, how a caching tier can relieve the database and improve the scalability of the system, and also how memory consumption...

  1. Moving to a total VM environment

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, T.Y.

    1981-08-11

    The Stanford Linear Accelerator Center is a single-purpose laboratory operated by Stanford University for the Department of Energy. Its mission is to do research in High Energy (particle) physics. This research involves the use of large and complex electronic detectors. Each of these detectors is a multi-million dollar device. A part of each detector is a computer for process control and data logging. Most detectors at SLAC now use VAX 11/780s for this purpose. Most detectors record digital data via this process control computer. Consequently, physics today is not bounded by the cost of analog-to-digital conversion as it was in the past, and the physicist is able to run larger experiments than were feasible a decade ago. Today a medium-sized experiment will produce several hundred full reels of 6250 BPI tape whereas a large experiment is a couple of thousand reels. The raw data must first be transformed into physics events using data transformation programs. The physicists then use subsets of the data to understand what went on. The subset may be anywhere from a few megabytes to 5 or 6 gigabytes of data (30 or 40 full reels of tape). This searching would be best solved interactively (if computers and I/O devices were fast enough). Instead what we find are very dynamic batch programs that are generally changed every run. The result is that on any day there are probably around 50 to 100 physicists interacting with a half dozen different experiments who are causing us to mount around 750 to 1000 tapes a day. This has been the style of computing for the last decade. Our going to VM is part of our effort to change this style of computing and to make physics computing more effective.

  2. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...... for comparison-based sorting, as well as with recent cache-aware proposals. The main result is a carefully implemented cache-oblivious sorting algorithm, which our experiments show can be faster than the best Quicksort implementation we are able to find, already for input sizes well within the limits of RAM....... It is also at least as fast as the recent cache-aware implementations included in the test. On disk the difference is even more pronounced regarding Quicksort and the cache-aware algorithms, whereas the algorithm is slower than a careful implementation of multiway Mergesort such as TPIE....

  3. Exploring Instruction Cache Analysis - On Arm

    OpenAIRE

    Svedenborg, Stian Valentin

    2014-01-01

    This thesis explores the challenges of implementing an instruction cache side-channel attack on an ARM platform. The information leakage through the instruction cache is formally discussed using information theoretic metrics. A successful Prime+Probe instruction cache side-channel attack against RSA is presented, recovering 967/1024 secret key bits by observing a single decryption using a synchronous spy process. Furthermore, an unsuccessful attempt is made at decoupling the spy from the vict...

  4. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  5. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  6. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e log B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes......, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  7. Test data generation for LRU cache-memory testing

    OpenAIRE

    Evgeni, Kornikhin

    2009-01-01

    System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents for a given behavior of an assembly program (with cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes the algorithm in detail only for the basic types of cache memory: the fully associative cache and the direct-mapped cache.

  8. Optimal Worst Case Formulas Comparing Cache Memory Associativity

    OpenAIRE

    Lennerstad, Håkan; Lundberg, Lars

    1995-01-01

    Consider an arbitrary program $P$ which is to be executed on a computer with two alternative cache memories. The first cache is set associative or direct mapped. It has $k$ sets and $u$ blocks in each set; this is called a $(k,u)$-cache. The other is a fully associative cache with $q$ blocks - a $(1,q)$-cache. We present formulas optimally comparing the performance of a $(k,u)$-cache to that of a $(1,q)$-cache for worst case programs. Optimal mappings of the program variables to the cache blo...
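
    The comparison can be made concrete with a tiny LRU cache simulator that counts the misses of a $(k,u)$-cache and of a fully associative $(1,q)$-cache on the same block-address trace. The modulo set-indexing is an assumption of this sketch, not taken from the paper:

```python
def misses(trace, sets, ways):
    # Count misses of an LRU cache with `sets` sets of `ways` blocks
    # each on a trace of block addresses; sets=1, ways=q models the
    # fully associative (1,q)-cache. Modulo set indexing is assumed.
    cache = [[] for _ in range(sets)]   # per set: LRU order, MRU at the end
    count = 0
    for block in trace:
        s = cache[block % sets]
        if block in s:
            s.remove(block)             # hit: refresh the block's LRU position
        else:
            count += 1                  # miss
            if len(s) == ways:
                s.pop(0)                # evict the set's least recently used block
        s.append(block)
    return count
```

    A worst-case-flavored trace shows the gap: three blocks that all map to one set thrash a $(2,2)$-cache on every access, while a $(1,4)$-cache of the same total size takes only the three cold misses.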

  9. Enabling μCernVM for the Interactive Use Case

    CERN Document Server

    Nicolaou, Vasilis

    2013-01-01

    The $\mu$CernVM will be the successor of CernVM as a new appliance to help with accessing the LHC for data analysis and development. CernVM has a web appliance agent that facilitates user interaction with the virtual machine and reduces the need for executing shell commands or installing graphical applications for displaying basic information such as memory usage or performing simple tasks such as updating the operating system. Updates are done differently in $\mu$CernVM than in mainstream Linux distributions. Its filesystem is a composition of a read-only layer that exists in the network and a read/write layer that is initialised on first boot and keeps the user's changes afterwards. Thus, means are provided to avoid loss of user data and system instabilities when the operating system is updated by fetching a new read-only layer.

  10. CernVM: Minimal maintenance approach to virtualization

    Science.gov (United States)

    Buncic, Predrag; Aguado-Sanchez, Carlos; Blomer, Jakob; Harutyunyan, Artem

    2011-12-01

    CernVM is a virtual software appliance designed to support the development cycle and provide a runtime environment for the LHC experiments. It consists of three key components that differentiate it from more traditional virtual machines: a minimal Linux Operating System (OS), a specially tuned file system designed to deliver application software on demand, and contextualization tools that provide a means to easily customize and configure CernVM instances for different tasks and user communities. In this contribution we briefly describe the most important use cases for virtualization in High Energy Physics (HEP), CernVM key components and discuss how end-to-end systems corresponding to these use cases can be realized using CernVM.

  11. Workload-aware VM Scheduling on Multicore Systems

    Directory of Open Access Journals (Sweden)

    Insoon Jo

    2011-11-01

    Full Text Available In virtualized environments, performance interference between virtual machines (VMs) is a key challenge. In order to mitigate resource contention, efficient VM scheduling is essential. In this paper, we propose a workload-aware VM scheduler for multi-core systems, which finds a system-wide mapping of VMs to physical cores. Our work aims not only at minimizing the number of used hosts, but also at maximizing the system throughput. To achieve the first goal, our scheduler dynamically adjusts the set of used hosts. To achieve the second goal, it maps each VM onto a physical core where the physical core and its host most sufficiently meet the resource requirements of the VM. Evaluation demonstrates that our scheduling ensures efficient use of data center resources.
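
    The "minimize used hosts" goal can be approximated by a simple bin-packing heuristic. The sketch below uses first-fit decreasing on normalized CPU demand; this is an illustrative stand-in, not the scheduler proposed in the paper:

```python
def schedule(vms, host_capacity, num_hosts):
    """First-fit-decreasing sketch: map normalized VM CPU demands to hosts,
    packing VMs onto as few hosts as possible without exceeding capacity."""
    hosts = [[] for _ in range(num_hosts)]
    loads = [0.0] * num_hosts
    for demand in sorted(vms, reverse=True):        # place largest VMs first
        for h in range(num_hosts):
            if loads[h] + demand <= host_capacity:  # first host that fits
                hosts[h].append(demand)
                loads[h] += demand
                break
        else:
            raise RuntimeError("no host can fit this VM")
    used = sum(1 for load in loads if load > 0)     # hosts actually in use
    return hosts, used

hosts, used = schedule([0.5, 0.3, 0.4, 0.2, 0.6], host_capacity=1.0, num_hosts=5)
print(used)   # 2 -- total demand 2.0 packs into two fully loaded hosts
```

    A real workload-aware scheduler would also weigh memory, I/O and interference, but the consolidation objective has the same shape.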

  12. Savannah River VM--Intellect application support documentation

    Energy Technology Data Exchange (ETDEWEB)

    Carter, L.S.

    1988-09-23

    This document details the underlying support programming and structures that support the INTELLECT and KBMS products at the Savannah River Facility. The target audience for this document includes INTELLECT System Administrators, INTELLECT programmers and developers, and VM Systems Programmers.

  13. Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2008-01-01

    In this paper, we describe an abstract model of cache timing attacks that can be used for designing ciphers. We then analyse HC-256 under this model, demonstrating a cache timing attack under certain strong assumptions. From the observations made in our analysis, we derive a number of design prin...

  14. Retention Benefit Based Intelligent Cache Replacement

    Institute of Scientific and Technical Information of China (English)

    李凌达; 陆俊林; 程旭

    2014-01-01

    The performance loss resulting from different cache misses is variable in modern systems for two reasons: 1) memory access latency is not uniform, and 2) the latency tolerance of processor cores varies across different misses. Compared with parallel misses and store misses, isolated fetch and load misses are more costly. The variation in cache miss penalty suggests that the cache replacement policy should take it into account. To that end, we first propose the notion of retention benefit. Retention benefits can evaluate not only the increase in processor stall cycles on cache misses, but also the reduction in processor stall cycles due to cache hits. We then propose Retention Benefit Based Replacement (RBR), which aims to maximize the aggregate retention benefit of the blocks kept in the cache. RBR keeps track of the total retention benefit for each block in the cache, and it preferentially evicts the block with the minimum total retention benefit on replacement. The evaluation shows that RBR can improve cache performance significantly in both single-core and multi-core environments while requiring a low storage overhead. It also outperforms other state-of-the-art techniques.
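
    A toy sketch of the eviction rule described above: track an accumulated benefit per block and evict the block with the minimum total benefit. The benefit values here are illustrative inputs, standing in for the estimated processor stall cycles a retained block saves:

```python
class RBRCache:
    """Minimal sketch of retention-benefit-based replacement: each block
    accumulates a benefit score, and the block with the minimum total
    benefit is evicted on replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.benefit = {}                           # block -> total benefit

    def access(self, block, benefit_of_hit):
        if block in self.benefit:
            self.benefit[block] += benefit_of_hit   # hit: reward retention
            return True
        if len(self.benefit) >= self.capacity:
            victim = min(self.benefit, key=self.benefit.get)
            del self.benefit[victim]                # evict min-benefit block
        self.benefit[block] = benefit_of_hit
        return False

cache = RBRCache(capacity=2)
cache.access("A", 10); cache.access("B", 1); cache.access("A", 10)
cache.access("C", 5)                     # evicts B (benefit 1), keeps A (20)
print(sorted(cache.benefit))             # ['A', 'C']
```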

  15. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al. at S&P 2015), which is a cache timing attack against...

  16. Refinement verification of the lazy caching algorithm

    NARCIS (Netherlands)

    Hesselink, Wim H.

    2006-01-01

    The lazy caching algorithm of Afek et al. (ACM Trans. Program. Lang. Syst. 15, 182-206, 1993) is a protocol that allows the use of local caches with delayed updates. It results in a memory model that is not atomic (linearizable) but only sequentially consistent as defined by Lamport. In Distributed

  17. A Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    In this paper, we describe a cache-timing attack against the stream cipher HC-256, which is the strong version of eStream winner HC-128. The attack is based on an abstract model of cache timing attacks that can also be used for designing stream ciphers. From the observations made in our analysis,...

  18. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors are employing large last-level caches; for example, Intel's E7-8800 processor uses a 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing, and hence leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes for cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for production systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead micro-architectural components which can be easily integrated into modern processor chips. We adopt a system-wide approach to saving energy to ensure that cache reconfiguration does not increase the energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that ours outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving the energy efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both

  19. The dCache scientific storage cloud

    CERN Document Server

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  20. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than $\lg e \cdot \log_B N$ memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all ... increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in $\log_B N + O(1)$ block transfers, this result establishes a separation between the (2-level) DAM model and the cache-oblivious model. The DAM model naturally extends to $k$ levels. The paper also shows that as $k$ grows, the search costs of the optimal $k$-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache...

  1. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many...

  2. FlexiWay: A Cache Energy Saving Technique Using Fine-grained Cache Reconfiguration

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Zhang, Zhao [Iowa State University; Vetter, Jeffrey S [ORNL

    2013-01-01

    Recent trends of CMOS scaling and the use of large last-level caches (LLCs) have led to a significant increase in the leakage energy consumption of LLCs, and hence managing their energy consumption has become extremely important in modern processor design. Conventional cache energy saving techniques require offline profiling or provide only a coarse granularity of cache allocation. We present FlexiWay, a cache energy saving technique which uses dynamic cache reconfiguration. FlexiWay logically divides the cache sets into multiple (e.g. 16) modules and dynamically turns off a suitable, and possibly different, number of cache ways in each module. FlexiWay has a very small implementation overhead and provides fine-grained cache allocation even with caches of typical associativity, e.g. an 8-way cache. Microarchitectural simulations have been performed using an x86-64 simulator and workloads from the SPEC2006 suite, and FlexiWay has been compared with two conventional energy saving techniques. The results show that FlexiWay provides the largest energy saving and incurs only a small loss in performance. For single, dual and quad core systems, the average energy savings using FlexiWay are 26.2%, 25.7% and 22.4%, respectively.

  3. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
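
    The profit metric and set-granularity replacement can be sketched as follows. The exact formula and bookkeeping in WATCHMAN may differ; this is a simplified stand-in combining reference rate, query execution cost, and retrieved-set size:

```python
def profit(ref_rate, exec_cost, size):
    """Profit-metric sketch: value per unit of cache space.
    ref_rate: average rate of reference to the retrieved set,
    exec_cost: cost of re-executing the query on a miss,
    size: size of the retrieved set."""
    return ref_rate * exec_cost / size

def choose_victims(cached_sets, needed_space):
    """Evict whole retrieved sets in order of increasing profit until
    enough space is freed (replacement works on sets, not pages)."""
    victims, freed = [], 0
    for name, rs in sorted(cached_sets.items(),
                           key=lambda kv: profit(*kv[1])):
        if freed >= needed_space:
            break
        victims.append(name)
        freed += rs[2]                  # rs = (ref_rate, exec_cost, size)
    return victims

sets = {"q1": (0.5, 100, 10),   # profit 5.0
        "q2": (0.1, 20, 40),    # profit 0.05
        "q3": (2.0, 50, 5)}     # profit 20.0
print(choose_victims(sets, needed_space=40))   # ['q2'] -- lowest profit first
```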

  4. File caching in data intensive scientific applications

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru; Seshadri, Sridhar

    2004-07-18

    We present some theoretical and experimental results of an important caching problem that arises frequently in data intensive scientific applications. In such applications, jobs need to process several files simultaneously, i.e., a job can only be serviced if all its needed files are present in the disk cache. The set of files requested by a job is called a file-bundle. This requirement introduces the need for cache replacement algorithms based on file-bundles rather than individual files. We show that traditional caching algorithms such as Least Recently Used (LRU) and GreedyDual-Size (GDS) are not optimal in this case, since they are not sensitive to file-bundles and may hold non-relevant combinations of files in the cache. In this paper we propose and analyze a new cache replacement algorithm specifically adapted to deal with file-bundles. We tested the new algorithm using a disk cache simulation model under a wide range of parameters such as file request distributions, relative cache size, file size distribution, and queue size. In all these tests, the results show significant improvement over traditional caching algorithms such as GDS.
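
    A minimal sketch of bundle-aware caching: a job counts as a hit only if every file in its bundle is cached. The eviction rule here is a plain recency heuristic for illustration, not the paper's algorithm:

```python
def bundle_hit(cache, bundle):
    """A job is serviceable only if ALL files in its bundle are cached."""
    return all(f in cache for f in bundle)

def service(jobs, capacity):
    """Toy bundle-aware cache: on a miss, load the whole bundle, evicting
    files whose bundles were least recently requested."""
    cache, last_used, hits = set(), {}, 0
    for t, bundle in enumerate(jobs):
        if bundle_hit(cache, bundle):
            hits += 1
        else:
            for f in bundle:
                while f not in cache and len(cache) >= capacity:
                    victim = min(cache, key=lambda x: last_used.get(x, -1))
                    cache.discard(victim)           # evict least recent file
                cache.add(f)
        for f in bundle:
            last_used[f] = t
    return hits

jobs = [("a", "b"), ("a", "b"), ("c",), ("a", "b")]
print(service(jobs, capacity=3))   # 2 hits: the repeated ("a","b") bundles
```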

  5. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

    Full Text Available We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table - the result of the query execution. These queries have a special form called a "universal relational query", based on three fundamental Relational Algebra operations: selection, projection and natural join. We note that such a form is the closest one to natural language, and the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed in a client's local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in a new user's query with the results of the queries executed in the cache. Such a comparison can be performed analytically, without the need for additional database queries. This method may also be used to identify data missing from the cache and execute the query on the server only for those data. To do this the analytical approach is also used, which distinguishes our paper from existing technologies. We propose four theorems for testing the required conditions. The conditions of the first and third theorems allow us to determine the existence of the required data in the cache. The second and fourth theorems state conditions for executing queries with the cache only. The problem of cache data actualization is not discussed in this paper; however, it can be solved by cataloging queries on the server and serving them by triggers in background mode. The article is published in the author's wording.

  6. The Configuration Strategies on Caching for Web Servers

    Institute of Scientific and Technical Information of China (English)

    GUO Chengcheng; ZHANG Li; YAN Puliu

    2006-01-01

    The Web cluster has become a popular network server architecture because of its scalability and cost effectiveness. Caches configured in the servers can significantly increase performance. In this paper, we discuss suitable configuration strategies for caching dynamic content, based on our experimental results. Since the system itself provides support for caching static Web pages, such as the computer's memory cache and the disk's own cache, we adopt a special pattern that caches only dynamic Web pages in some experiments in order to enlarge the cache space. The paper introduces three different replacement algorithms in our cache proxy module to test the practical effects of caching dynamic pages under different conditions. The paper chiefly analyzes the influences of generation time and access frequency on caching dynamic Web pages, and also provides detailed experimental results and the main conclusions.

  7. Optimization of CernVM early boot process

    CERN Document Server

    Mazdin, Petra

    2015-01-01

    The CernVM virtual machine is a Linux-based virtual appliance optimized for High Energy Physics experiments. It is used for cloud computing, volunteer computing, and software development by the four large LHC experiments. The goal of this project is profiling and optimizing the boot process of CernVM. A key part was the development of a performance profiler for shell scripts as an extension to the popular BusyBox open source UNIX tool suite. Based on the measurements, costly shell code was replaced by more efficient, custom C programs. The results are compared to the original ones, demonstrating a successful optimization.

  8. A Site-Based Proxy Cache

    Institute of Scientific and Technical Information of China (English)

    ZHU Jing(朱晶); YANG GuangWen(杨广文); HU Min(胡敏); SHEN MeiMing(沈美明)

    2003-01-01

    In traditional proxy caches, any visited page from any Web server is cached independently, ignoring connections between pages. Users still have to frequently visit indexing pages just to reach useful informative ones, which causes significant waste of caching space and unnecessary Web traffic. In order to solve this problem, this paper introduces a site graph model to describe the WWW, and a site-based replacement strategy is built on it. The concept of "access frequency" is developed for evaluating whether a Web page is worth keeping in the caching space. On the basis of the user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance test results show that the proposed proxy cache system achieves a higher hit ratio than traditional ones and can reduce the user's access latency effectively.

  9. Hydrologic Data Sites for Cache County, Utah

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map shows the USGS (United States Geological Survey) NWIS (National Water Information System) Hydrologic Data Sites for Cache County, Utah. The scope and purpose...

  10. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states, and unmet states).

  11. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2010-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states, and unmet states).

  12. Research and Implementation of Software Used for the Remote Control for VM700T Video Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Song Wenjie

    2015-01-01

    Full Text Available In this paper, measurement software that realizes remote control of the VM700T video measuring instrument is introduced. Users can operate the VM700T through a virtual panel on a client computer, select the results displayed by the measuring equipment for transmission, and then view the image on the VM700T virtual panel in real time. The system has practical value and can play an important role in distance learning. The functions realized by the system cover four aspects: real-time message transmission based on socket technology, the serial connection between the server PC and the VM700T measuring equipment, image acquisition based on VFW technology with JPEG compression and decompression, and network transmission of image files. Actual network transmission tests show that the data acquisition method of this thesis is flexible and convenient, and that the system is highly stable. It displays the measurement results in real time and fulfils the requirements of remote control. This paper includes a summary of the principle, a detailed introduction of the system implementation process, and some related technology.

  13. VM Selection and Migration Using MCDM to Improve Service Performance in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdullah Fadil

    2016-08-01

    Full Text Available Cloud computing is a heterogeneous and distributed environment, composed of clusters of networked servers with varying computational resource capacities that support the service models running on top of them. Virtual machines (VMs) serve as the representation of dynamically available computational resources that can be allocated and reallocated on demand. Live migration of VMs between the physical servers of a cloud data center is used to achieve consolidation and maximize VM utilization. In VM consolidation procedures, VM selection and placement often rely on a single, static criterion. This study proposes VM selection and placement using multi-criteria decision making (MCDM) in a dynamic VM consolidation procedure in a cloud data center environment, in order to improve cloud computing services. A practical approach was taken by building a cloud computing environment based on OpenStack Cloud, integrating VM selection and VM placement into the consolidation procedure using OpenStack-Neat. The results show that the VM selection and placement method with live migration is able to compensate for the loss caused by down-times of 11.994 seconds of response time. Response times increased by 6 ms while a VM was being live-migrated from the source host to the destination host. The average response time of the VMs spread across the compute nodes after live migration was 67 ms, indicating load balancing in the cloud computing system.
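
    A weighted-sum score is one of the simplest MCDM techniques. The sketch below (hypothetical criteria and weights, not the paper's exact model) ranks VMs as candidates for migration:

```python
def mcdm_rank(vms, weights):
    """Weighted-sum MCDM sketch for picking a VM to migrate.
    Each VM carries normalized criterion scores in [0, 1]; a higher
    total score means a better migration candidate."""
    def score(criteria):
        return sum(weights[k] * criteria[k] for k in weights)
    return sorted(vms, key=lambda vm: score(vm[1]), reverse=True)

# Illustrative criteria: CPU load, memory pressure, inverse migration cost.
weights = {"cpu_load": 0.5, "memory": 0.3, "migration_cost_inv": 0.2}
vms = [("vm1", {"cpu_load": 0.9, "memory": 0.4, "migration_cost_inv": 0.8}),
       ("vm2", {"cpu_load": 0.2, "memory": 0.9, "migration_cost_inv": 0.5}),
       ("vm3", {"cpu_load": 0.6, "memory": 0.6, "migration_cost_inv": 0.9})]
best = mcdm_rank(vms, weights)[0][0]
print(best)    # vm1: 0.45 + 0.12 + 0.16 = 0.73, the highest score
```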

  14. Efficiency of Cache Mechanism for Network Processors

    Institute of Scientific and Technical Information of China (English)

    XU Bo; CHANG Jian; HUANG Shimeng; XUE Yibo; LI Jun

    2009-01-01

    With the explosion of network bandwidth and the ever-changing requirements of diverse network-based applications, the traditional processing architectures, i.e., the general purpose processor (GPP) and application specific integrated circuits (ASICs), cannot provide sufficient flexibility and high performance at the same time. Thus, the network processor (NP) has emerged as an alternative to meet these dual demands for today's network processing. The NP combines embedded multi-threaded cores with a rich memory hierarchy that can adapt to different networking circumstances when customized by the application developers. In today's NP architectures, multithreading prevails over the cache mechanism, which has achieved great success in GPPs in hiding memory access latencies. This paper focuses on the efficiency of the cache mechanism in an NP. Theoretical timing models of packet processing are established for evaluating cache efficiency, and experiments are performed based on real-life network backbone traces. Testing results show that an improvement of nearly 70% can be gained in throughput with assistance from the cache mechanism. Accordingly, the cache mechanism is still efficient and irreplaceable in network processing, despite the existence of multithreading.

  15. Mobile web caching in a hostile environment

    Science.gov (United States)

    Kalbfleisch, Gail A.; Movva, Sridevi; Griffin, Terry W.; Passos, Nelson L.

    2003-07-01

    In the wired Internet, it is common practice to use Web caching to reduce network utilization and improve access time to a Web page. Mobile users introduce new variables in the communication process due to the fact that a user may dynamically change its contact point within the network, accessing the Web server through a different path, which may not have access to cached pages. In hostile environments such as domestic catastrophic emergencies, field units must have a higher priority to significant information available through Web servers, requiring lower response times and reliable access via Internet enabled cell phones. In foreign lands, specific infrastructure must be implemented and activated, which requires extensive work currently being researched by defense contractors. In domestic situations, the infrastructure exists and must be equipped to provide the necessary priority to civil defense personnel. This study discusses mechanisms used in Web caching and suggests features that should be added to current cache management algorithms to provide the necessary priority, including the use of standard protocol commands to identify cacheable information and establishment of priority and aging policies to control the length of time data should be maintained in the cache.

  16. Reducing Network Traffic of Token Protocol Using Sharing Relation Cache

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The token protocol provides a new coherence framework for shared-memory multiprocessor systems. It avoids the indirections of directory protocols for common cache-to-cache transfer misses, and achieves higher interconnect bandwidth and lower interconnect latency compared with snooping protocols. However, broadcasting increases network traffic, limiting the scalability of the token protocol. This paper describes an efficient technique to reduce token protocol network traffic, called the sharing relation cache. This cache provides destination set information for cache-to-cache miss requests by caching directory information for recently shared data. The paper explains how to implement the technique in a token protocol. Simulations using SPLASH-2 benchmarks show that in a 16-core chip multiprocessor system, the cache reduced network traffic by 15% on average.

  17. FemtoCaching: Wireless Video Content Delivery through Distributed Caching Helpers

    CERN Document Server

    Golrezaei, Negin; Dimakis, Alexandros G; Molisch, Andreas F; Caire, Giuseppe

    2011-01-01

    We suggest a novel approach to handle the ongoing explosive increase in the demand for video content on wireless/mobile devices. We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capacity. These helpers form a wireless distributed caching network that assists the macro base station by handling requests for popular files that have been cached. Due to the short distances between helpers and requesting devices, the transmission of cached files can be done very efficiently. A key question for such a system is the wireless distributed caching problem, i.e., which files should be cached by which helpers. If every mobile device has access to exactly one helper, then clearly each helper should cache the same files, namely the most popular ones. However, for the case where each mobile device can access multiple caches, the assignment of files to helpers becomes nontrivial. The theoretical contribution of our paper lies in (i) formalizing the distributed ca...
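
    The nontrivial multi-helper case can be approximated greedily. The sketch below (hypothetical popularity and coverage data, uniform demand per user) repeatedly caches the file/helper pair with the largest marginal gain in expected hits:

```python
from itertools import product

def greedy_placement(popularity, coverage, slots_per_helper, num_helpers):
    """Greedy sketch of the helper placement problem: at each step cache
    the (file, helper) pair adding the most expected hit probability.
    coverage[user] = set of helpers that user can reach."""
    placed = {h: set() for h in range(num_helpers)}
    for _ in range(slots_per_helper * num_helpers):
        best, best_gain = None, 0.0
        for f, h in product(popularity, range(num_helpers)):
            if f in placed[h] or len(placed[h]) >= slots_per_helper:
                continue
            # gain: users reaching h that cannot already fetch f elsewhere
            gain = sum(popularity[f] for u, hs in coverage.items()
                       if h in hs and not any(f in placed[x] for x in hs))
            if gain > best_gain:
                best, best_gain = (f, h), gain
        if best is None:
            break
        placed[best[1]].add(best[0])
    return placed

popularity = {"A": 0.6, "B": 0.5, "C": 0.2}
coverage = {"u1": {0}, "u2": {0, 1}, "u3": {1}}
# Helper 1 caches B, not a second copy of A, because u2 already reaches A.
print(greedy_placement(popularity, coverage, slots_per_helper=1, num_helpers=2))
```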

  18. Experimental results on V-M type pulse tube refrigerator

    Science.gov (United States)

    Dai, Wei; Matsubara, Yoichi; Kobayashi, Hisayasu

    2002-06-01

    This article mainly introduces experimental results on a new type of pulse tube refrigerator, named the V-M type pulse tube refrigerator. The main difference from the Stirling type or G-M type pulse tube refrigerator is that a thermal compressor, similar to that of a V-M cryocooler, is used instead of a mechanical compressor. By using the temperature difference between room temperature and liquid nitrogen, a pressure wave with a high-to-low pressure ratio of around 1.2 is obtained. This pressure wave is used to generate a cooling effect at the cold end. With a 20 K pre-cooler, the machine reaches a lowest temperature of 5.25 K using helium-4 at 0.77 Hz and 19 bar charge pressure. DC flow plays an important role in our system: it not only influences the final obtainable lowest temperature, but is also used to increase the cold end cool-down speed. The total volume of the V-M type pulse tube refrigerator is around 3.3 l; however, the dead volume inside the rotor housing occupies about 2.8 l and can be much reduced.

  19. A Caching Strategy for Streaming Media

    Institute of Scientific and Technical Information of China (English)

    谭劲; 余胜生; 周敬利

    2004-01-01

    It is expected that by 2003 continuous media will account for more than 50% of the data available on origin servers; this will provoke a significant change in Internet workload. Due to the high bandwidth requirements and the long-lived nature of digital video, streaming server loads and network bandwidths prove to be major limiting factors. Aiming at the characteristics of broadband networks in residential areas, this paper proposes a popularity-based server-proxy caching strategy for streaming media. According to a streaming media object's popularity on the streaming server and the proxy, this strategy caches the content of the streaming media partially or completely. The paper also proposes two formulas that calculate the popularity coefficient of a streaming media object on the server and the proxy, as well as a cache replacement policy. As expected, this strategy decreases the server load, reduces the traffic from the streaming server to the proxy, and improves client start-up latency.
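
    The idea of a popularity coefficient driving partial versus complete caching can be sketched as follows. The formula and thresholds here are illustrative, not the ones proposed in the paper:

```python
def popularity(requests_for_media, total_requests, size_factor=1.0):
    """Illustrative popularity coefficient: the fraction of all requests
    targeting this stream, optionally weighted by a size factor."""
    return requests_for_media / total_requests * size_factor

def caching_decision(pop, full_threshold=0.2, prefix_threshold=0.05):
    """Cache the whole stream if very popular, only a prefix of it if
    moderately popular, and nothing otherwise (hypothetical thresholds)."""
    if pop >= full_threshold:
        return "complete"
    if pop >= prefix_threshold:
        return "partial"
    return "none"

print(caching_decision(popularity(300, 1000)))   # 0.30 -> complete
print(caching_decision(popularity(80, 1000)))    # 0.08 -> partial
print(caching_decision(popularity(10, 1000)))    # 0.01 -> none
```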

  20. Cache as point of coherence in multiprocessor system

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A.; Ceze, Luis H.; Chen, Dong; Gara, Alan; Heidelberger, Phlip; Ohmacht, Martin; Steinmacher-Burow, Burkhard; Zhuang, Xiaotong

    2016-11-29

    In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory.

  1. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache trashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis....

  2. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

  3. Cache-Conscious Index Mechanism for Main-Memory Databases

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Recent studies have shown that cache behavior is important in the design of main-memory index structures. Cache-conscious indices such as the CSB+-tree are shown to outperform conventional main-memory indices such as the AVL-tree and the T-tree. This paper proposes a cache-conscious version of the T-tree, the CST-tree. By separating the keys within a node into two parts, the CST-tree achieves a higher cache hit ratio.

  4. Analysis of DNS cache effects on query distribution.

    Science.gov (United States)

    Wang, Zheng

    2013-01-01

    This paper studies the DNS cache effects that occur on query distribution at the CN top-level domain (TLD) server. We first filter out the malformed DNS queries to purify the log data pollution according to six categories. A model for DNS resolution, more specifically DNS caching, is presented. We demonstrate the presence and magnitude of DNS cache effects and cache sharing effects on the request distribution through an analytic model and simulation. CN TLD log data results are provided and analyzed based on the cache model. The approximate TTL distribution for domain names is inferred quantitatively.
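The cache effect modeled above can be illustrated with a toy resolver simulation (the name population, TTL, and query rate are arbitrary assumptions): repeat queries arriving within a record's TTL are absorbed by the resolver cache, so only a fraction of the request stream ever reaches the TLD server.

```python
import random

class ResolverCache:
    """Minimal model of a caching DNS resolver: a cached name answers
    repeat queries until its TTL expires, so only misses reach the TLD."""
    def __init__(self):
        self.expiry = {}  # name -> time at which the cached entry expires

    def lookup(self, name, now, ttl):
        if self.expiry.get(name, 0) > now:
            return True                  # cache hit, absorbed by the resolver
        self.expiry[name] = now + ttl    # miss: forward to the TLD, then cache
        return False

random.seed(1)
cache = ResolverCache()
hits = misses = 0
for t in range(10000):                   # one query per time unit
    name = "example%d.cn" % random.randint(0, 99)
    if cache.lookup(name, now=t, ttl=300):
        hits += 1
    else:
        misses += 1
```

With 100 names queried uniformly and a 300-unit TTL, most queries hit the resolver cache; the TLD only sees roughly one query per name per TTL period.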

  5. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes...... are limited to be powers of 2. A modified version of the van Emde Boas layout is proposed, whose expected number of block transfers between any two levels of the memory hierarchy is arbitrarily close to [lg e + O(lg lg B / lg B)] log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over......, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  6. Corvid Caching : Insights From a Cognitive Model

    NARCIS (Netherlands)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K.

    2011-01-01

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our

  7. Efficient caching for constrained skyline queries

    DEFF Research Database (Denmark)

    Mortensen, Michael Lind; Chester, Sean; Assent, Ira;

    2015-01-01

    Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints ...

  8. Multi-level Hybrid Cache: Impact and Feasibility

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhe [ORNL; Kim, Youngjae [ORNL; Ma, Xiaosong [ORNL; Shipman, Galen M [ORNL; Zhou, Yuanyuan [University of California, San Diego

    2012-02-01

    Storage class memories, including flash, have been attracting much attention as promising candidates for today's enterprise storage systems. In particular, since the cost and performance characteristics of flash are in between those of DRAM and hard disks, many studies have considered it as a secondary caching layer underneath the main-memory cache. However, there has been a lack of study of the correlation and interdependency between DRAM and flash caching. This paper views this problem as a special form of multi-level caching, and tries to understand the benefits of this multi-level hybrid cache hierarchy. We reveal that significant cost could be saved by using flash to reduce the size of the DRAM cache, while maintaining the same performance. We also discuss design challenges of using flash in the caching hierarchy and present potential solutions.
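A two-tier DRAM-over-flash hierarchy of the kind studied above can be sketched as follows (a minimal illustration with exclusive tiers and LRU in each; the tier sizes and promotion/demotion policy are assumptions, not the paper's design):

```python
from collections import OrderedDict

class LRU:
    """Small LRU cache of block IDs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)   # refresh recency
            return True
        return False
    def put(self, key):
        """Insert key; return the evicted key, if any."""
        self.data[key] = True
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            return self.data.popitem(last=False)[0]
        return None

class HybridCache:
    """Small, fast DRAM tier backed by a larger flash tier (exclusive)."""
    def __init__(self, dram_blocks, flash_blocks):
        self.dram, self.flash = LRU(dram_blocks), LRU(flash_blocks)
    def access(self, block):
        if self.dram.get(block):
            return "dram"
        hit_flash = self.flash.get(block)
        if hit_flash:
            del self.flash.data[block]   # promote to DRAM (tiers are exclusive)
        victim = self.dram.put(block)
        if victim is not None:
            self.flash.put(victim)       # demote the DRAM victim to flash
        return "flash" if hit_flash else "disk"
```

Because flash absorbs DRAM evictions, blocks that fall out of the small DRAM tier can still be served at flash rather than disk latency, which is the cost-saving effect the paper quantifies.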

  9. EFFICIENT VM LOAD BALANCING ALGORITHM FOR A CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Jasmin James

    2012-09-01

    Full Text Available Cloud computing is a fast-growing area in computing research and industry today. With the advancement of the Cloud, there are new possibilities for how applications can be built and how different services can be offered to the end user through virtualization over the Internet. Cloud service providers offer large-scale computing infrastructure priced on usage, and provide infrastructure services in a very flexible manner that users can scale up or down at will. Establishing an effective load-balancing algorithm, and using Cloud computing resources efficiently, is one of the Cloud service providers' ultimate goals. In this paper, firstly, an analysis of different Virtual Machine (VM) load-balancing algorithms is given. Secondly, a new VM load-balancing algorithm, the 'Weighted Active Monitoring Load Balancing Algorithm', is proposed and implemented for an IaaS framework in a simulated cloud computing environment using CloudSim tools. The Datacenter effectively balances requests among the available virtual machines by assigning each a weight, in order to achieve better performance parameters such as response time and data processing time.

  10. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

    Full Text Available A big data platform for equipment condition assessment is built for comprehensive analysis. The platform serves various application demands, which can be divided by response time into offline, interactive, and real-time types. For real-time applications, data processing efficiency is important, and data caching is in general one of the most effective ways to improve query time. However, big data caching differs from traditional data caching. In this paper we propose a distributed cache management framework of big data for equipment condition assessment. It consists of three parts: the cache structure, the cache replacement algorithm, and the cache placement algorithm, with the cache structure being the basis of the latter two algorithms. Based on the framework and algorithms, we exploit the fact that only some valuable data is accessed during a given period of time and place related data on neighboring nodes, which largely reduces network transmission cost. We also validate the performance of our proposed approaches through extensive experiments, which demonstrate that the proposed cache replacement algorithm and cache management framework achieve a higher hit rate and lower query time than the LRU and round-robin algorithms.

  11. Ontology-Based Semantic Cache in AOKB

    Institute of Scientific and Technical Information of China (English)

    郑红; 陆汝钤; 金芝; 胡思康

    2002-01-01

    When querying a large-scale knowledge base, a major technique for improving performance is to preload knowledge to minimize the number of roundtrips to the knowledge base. In this paper, an ontology-based semantic cache is proposed for an agent- and ontology-oriented knowledge base (AOKB). In AOKB, an ontology is the collection of relationships between a group of knowledge units (agents and/or other sub-ontologies). When loading some agent A, its relationships with other knowledge units are examined, and those that have a tight semantic tie with A are preloaded at the same time, including agents and sub-ontologies in the same ontology as A. The preloaded agents and ontologies are saved in a semantic cache located in memory. Test results show that up to a 50% reduction in running time is achieved.

  12. A dual-consistency cache coherence protocol

    OpenAIRE

    Ros, Alberto; Jimborean, Alexandra

    2015-01-01

    Weak memory consistency models can maximize system performance by enabling hardware and compiler optimizations, but increase programming complexity since they do not match programmers’ intuition. The design of an efficient system with an intuitive memory model is an open challenge. This paper proposes SPEL, a dual-consistency cache coherence protocol which simultaneously guarantees the strongest memory consistency model provided by the hardware and yields improvements in both performance and ...

  13. Unilateral Antidotes to DNS Cache Poisoning

    OpenAIRE

    Herzberg, Amir; Shulman, Haya

    2012-01-01

    We investigate defenses against DNS cache poisoning focusing on mechanisms that can be readily deployed unilaterally by the resolving organisation, preferably in a single gateway or a proxy. DNS poisoning is (still) a major threat to Internet security; determined spoofing attackers are often able to circumvent currently deployed antidotes such as port randomisation. The adoption of DNSSEC, which would foil DNS poisoning, remains a long-term challenge. We discuss limitations of the prominent r...

  14. Monitoring nearest neighbor queries with cache strategies

    Institute of Scientific and Technical Information of China (English)

    PAN Peng; LU Yan-sheng

    2007-01-01

    The problem of continuously monitoring multiple K-nearest-neighbor (K-NN) queries over dynamic object and query datasets is valuable for many location-based applications. A practical method is to partition the data space into grid cells, index both the object and query tables by this grid structure, and solve the problem by periodically joining cells of objects with queries whose influence regions intersect those cells. In the worst case, every cell of objects is accessed once. Object and query cache strategies are proposed to further reduce the I/O cost. With the object cache strategy, queries that remain static in the current processing cycle seldom incur I/O cost and can be returned quickly. The main I/O cost comes from moving queries; the query cache strategy restricts their search regions using the current results of queries held in the main-memory buffer. Queries can share not only the accessed object pages but also their influence regions. A theoretical analysis of the expected I/O cost is presented; in the experiments, the I/O cost is about 40% that of the SEA-CNN method.

  15. Toxicity and medical countermeasure studies on the organophosphorus nerve agents VM and VX.

    Science.gov (United States)

    Rice, Helen; Dalton, Christopher H; Price, Matthew E; Graham, Stuart J; Green, A Christopher; Jenner, John; Groombridge, Helen J; Timperley, Christopher M

    2015-04-08

    To support the effort to eliminate the Syrian Arab Republic chemical weapons stockpile safely, there was a requirement to provide scientific advice based on experimentally derived information on both toxicity and medical countermeasures (MedCM) in the event of exposure to VM, VX or VM-VX mixtures. Complementary in vitro and in vivo studies were undertaken to inform that advice. The penetration rate of neat VM was not significantly different from that of neat VX, through either guinea pig or pig skin in vitro. The presence of VX did not affect the penetration rate of VM in mixtures of various proportions. A lethal dose of VM was approximately twice that of VX in guinea pigs poisoned via the percutaneous route. There was no interaction in mixed agent solutions which altered the in vivo toxicity of the agents. Percutaneous poisoning by VM responded to treatment with standard MedCM, although complete protection was not achieved.

  16. Using probabilistic cache scheme to construct the small world network

    Institute of Scientific and Technical Information of China (English)

    ZOU Fu-tai; YI Ping; MA Fan-yuan; LI Jian-hua

    2007-01-01

    Recently, some P2P systems have constructed small-world networks using the small-world model so as to improve routing performance. In this paper, we propose a novel probabilistic cache scheme to construct a small-world network based on the small-world model, and use it to improve CAN, yielding PCCAN (Probabilistic Cache-based CAN). PCCAN caches long contacts; it uses a worm-routing replacing mechanism and a probabilistic replacement strategy on the cache. The probabilistic cache scheme proves to be an efficient approach to modelling the small-world phenomenon. Experiments in both static and dynamic networks show that PCCAN converges to the steady state with the cache scheme, and that routing performance is significantly improved with low additional overhead compared with CAN.

  17. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can...... be analyzed statically. We present algorithms that derive worst-case bounds on the latency-inducing operations of the stack cache. Their results can be used by a static WCET tool. By breaking the analysis down into subproblems that solve intra-procedural data-flow analysis and path searches on the call......-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  18. Research and implementation of cooperative cache for PVFS

    Institute of Scientific and Technical Information of China (English)

    Wu Weiguo; Wan Qun; Zhang Hu; Liu Siqi; Qian Depei

    2008-01-01

    At present, there are many effective ways to achieve high performance in cluster-system storage management, including server-end disks, server-end caching, local caching, and cooperative caching. A cooperative caching mechanism shares caches among different clients so as to avoid expensive disk accesses and to improve the overall throughput of the cluster system. In this paper, a Single Copy Cooperative Cache model is proposed, together with a block lookup algorithm, a block replacement algorithm, and a consistency algorithm based on the model. A prototype of the model is implemented in the PVFS file system, and its performance is tested on an InfiniBand framework. The results show that, in contrast to the original PVFS system, read performance is improved by about a factor of two, while write performance is reduced by nearly ten percent.

  19. An Effective Cache Algorithm for Heterogeneous Storage Systems

    Directory of Open Access Journals (Sweden)

    Yong Li

    2013-01-01

    Full Text Available Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across them. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms.

  20. Experience on QA in the CernVM File System

    CERN Document Server

    CERN. Geneva; MEUSEL, Rene

    2015-01-01

    The CernVM-File System (CVMFS) delivers experiment software installations to thousands of globally distributed nodes in the WLCG and beyond. In recent years it became a mission-critical component for offline data processing of the LHC experiments and many other collaborations. From a software engineering perspective, CVMFS is a medium-sized C++ system-level project. Following the growth of the project, we introduced a number of measures to improve the code quality, testability, and maintainability. In particular, we found very useful code reviews through github pull requests and automated unit- and integration testing. We are also transitioning to a test-driven development for new features and bug fixes. These processes are supported by a number of tools, such as Google Test, Jenkins, Docker, and others. We would like to share our experience on problems we encountered and on which processes and tools worked well for us.

  1. 3-D-eChem VM: Cheminformatics Research Infrastructure in a Downloadable Virtual Machine

    OpenAIRE

    Verhoeven, Stefan; Vass, Marton; de Esch, Iwan; Leurs, Rob; Lusher, Scott; Vriend, Gerrrit; Ritschel, Tina; de Graaf, Chris; McGuire, Ross

    2016-01-01

    3D-e-Chem VM is a freely available Virtual Machine (VM) encompassing tools, databases & workflows, including new resources developed for ligand binding site comparisons and GPCR research. The VM contains a fully functional cheminformatics infrastructure consisting of a chemistry enabled relational database system (PostgreSQL + RDKit) with a data analytics workflow tool (KNIME) and additional cheminformatics capabilities. Tools, workflows and reference data sets are made available. The wid...

  2. Experimental evaluation of multiprocessor cache-based error recovery

    Science.gov (United States)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. The performance effect of integrating the recovery schemes into the cache coherence protocol is evaluated by simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead, but with uncontrollably high variability in the checkpoint interval.

  3. PERFORMANCE OF PRIVATE CACHE REPLACEMENT POLICIES FOR MULTICORE PROCESSORS

    Directory of Open Access Journals (Sweden)

    Matthew Lentz

    2014-07-01

    Full Text Available Multicore processors have become ubiquitous, both in general-purpose and special-purpose applications. With the number of transistors in a chip continuing to increase, the number of cores in a processor is also expected to increase. Cache replacement policy is an important design parameter of a cache hierarchy. As most of the processor designs have become multicore, there is a need to study cache replacement policies for multi-core systems. Previous studies have focused on the shared levels of the multicore cache hierarchy. In this study, we focus on the top level of the hierarchy, which bears the brunt of the memory requests emanating from each processor core. We measure the miss rates of various cache replacement policies, as the number of cores is steadily increased from 1 to 16. The study was done by modifying the publicly available SESC simulator, which models in detail a multicore processor with a multilevel cache hierarchy. Our experimental results show that for the private L1 caches, the LRU (Least Recently Used replacement policy outperforms all of the other replacement policies. This is in contrast to what was observed in previous studies for the shared L2 cache. The results presented in this paper are useful for hardware designers to optimize their cache designs or the program codes.
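The kind of miss-rate comparison performed in the study can be reproduced in miniature with a trace-driven simulation (a toy sketch, not the SESC methodology; the synthetic trace below is an assumption chosen to exhibit temporal locality):

```python
from collections import OrderedDict, deque

def lru_misses(trace, capacity):
    """Count misses under Least Recently Used replacement."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)        # refresh recency on a hit
        else:
            misses += 1
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used
    return misses

def fifo_misses(trace, capacity):
    """Count misses under First-In First-Out replacement (ignores recency)."""
    cache, queue, misses = set(), deque(), 0
    for addr in trace:
        if addr not in cache:
            misses += 1
            cache.add(addr)
            queue.append(addr)
            if len(cache) > capacity:
                cache.remove(queue.popleft())
    return misses

# One hot line interleaved with a cold streaming scan: LRU keeps the hot
# line resident, while FIFO repeatedly evicts it.
trace = []
for cold in range(100):
    trace += ["hot", cold]
```

On this trace with a 2-entry cache, LRU misses only on the hot line's first access plus the 100 cold lines, whereas FIFO also misses the hot line repeatedly, which mirrors the paper's finding that recency-aware replacement pays off for the private L1.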

  4. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Full Text Available Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it seems to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition and a well-established model of human memory, in two experiments similar to those done with real birds. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.
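The "well-established model of human memory" invoked above is of the kind where an item's retrievability depends on the frequency and recency of its use, as in ACT-R-style base-level activation. A minimal sketch of that component (the decay parameter and access times are illustrative assumptions, not the paper's fitted values):

```python
import math

def activation(access_times, now, decay=0.5):
    """Base-level activation in the ACT-R style: each past access at time t
    contributes (now - t) ** -decay, so strength grows with frequency of use
    and fades with recency."""
    return math.log(sum((now - t) ** -decay for t in access_times if t < now))

# A cache site the bird rehearsed three times is easier to recall later
# than one it encoded only once:
strong = activation([1.0, 5.0, 9.0], now=10.0)
weak = activation([1.0], now=10.0)
```

In the model, caches with low activation are the ones the virtual bird fails to recover, and those recovery failures feed back into its stress level and hence its urge to re-cache.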

  5. Cache directory lookup reader set encoding for partial cache line speculation support

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.

  6. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance-counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory-reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory-usage performance data from applications and to create and apply novel methods for automatic data-structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  7. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache. Practical examples will help you to get set up quickly and easily.This book is aimed at system administrators and web developers who need to scale websites without tossing money on a large and costly infrastructure. It's assumed that you have some knowledge of the HTTP protocol, how browsers and server communicate with each other, and basic Linux systems.

  8. Compiler-directed cache management in multiprocessors

    Science.gov (United States)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  9. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    for efficiently deriving worst-case bounds through static analysis. In this paper we present the design and implementation of software managed caching of stack allocated data in a scratchpad memory. We demonstrate a compiler-aided implementation of a stack cache using the LLVM compiler framework and report on its...

  10. Cache Scheme Based on Pre-Fetch Operation in ICN.

    Directory of Open Access Journals (Sweden)

    Jie Duan

    Full Text Available Much recent research focuses on ICN (Information-Centric Networking), in which named content, rather than the end-host, becomes the first-class citizen. In ICN, named content can be further divided into many small chunks, and chunk-based communication has merits over content-based communication. The universal in-network cache is one of the fundamental infrastructures of ICN. In this work, a chunk-level cache mechanism based on a pre-fetch operation is proposed. The main idea is that routers with cache stores should pre-fetch and cache the next chunks that may be accessed in the near future, according to received requests and the cache policy, in order to reduce users' perceived latency. Two pre-fetch-driven modes are presented to answer when and how to pre-fetch. LRU (Least Recently Used) is employed for cache replacement. Simulation results show that the average user-perceived latency and hop count can be decreased by employing this pre-fetch-based cache mechanism. Furthermore, we also demonstrate that the results are influenced by many factors, such as the cache capacity, Zipf parameters, and pre-fetch window size.
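The chunk-level pre-fetch idea can be sketched as follows (a hypothetical illustration; the window size and the always-prefetch-on-request policy are assumptions, not the paper's exact modes): when a router sees a request for chunk i of a content object, it also pulls the next few chunks into its LRU-managed store so that sequential follow-up requests hit locally.

```python
from collections import OrderedDict

class PrefetchCache:
    """Chunk-level LRU cache that pre-fetches the next chunks of a content
    object whenever a request for one of its chunks arrives."""
    def __init__(self, capacity, window):
        self.capacity, self.window = capacity, window
        self.cache = OrderedDict()          # (content, chunk_index) -> present

    def _insert(self, key):
        self.cache[key] = True
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # LRU replacement

    def request(self, content, chunk):
        """Serve a chunk request; returns True on a cache hit."""
        hit = (content, chunk) in self.cache
        self._insert((content, chunk))
        for nxt in range(chunk + 1, chunk + 1 + self.window):
            self._insert((content, nxt))    # pre-fetch likely next chunks
        return hit
```

Sequential playback is the favorable case: after the miss on chunk 0, the pre-fetched chunks 1 and 2 are already resident, which is exactly the latency reduction the simulations measure.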

  11. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B. T.; Kays, R.; Jansen, P. A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The 'memory enhancement hypothesis' states that hoarders reinforce spatial memory of their caches by repeatedly revisiti

  13. Smart Caching for Efficient Information Sharing in Distributed Information Systems

    Science.gov (United States)

    2008-09-01

    Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy (1997), "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot..." ... Danzig, Chuck Neerdaels, Michael Schwartz and Kurt Worrell (1996), "A Hierarchical Internet Object Cache," in USENIX Proceedings, 1996.

  14. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O& #x27; Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  15. A Refreshable, On-line Cache for HST Data Retrieval

    Science.gov (United States)

    Fraquelli, Dorothy A.; Ellis, Tracy A.; Ridgaway, Michael; DPAS Team

    2016-01-01

    We discuss upgrades to the HST Data Processing System, with an emphasis on the changes Hubble Space Telescope (HST) Archive users will experience. In particular, data are now held on-line (in a cache), removing the need to reprocess the data every time they are requested from the Archive. OTFR (on the fly reprocessing) has been replaced by a reprocessing system, which runs in the background. Data in the cache are automatically placed in the reprocessing queue when updated calibration reference files are received or when an improved calibration algorithm is installed. Data in the on-line cache are expected to be the most up to date version. These changes were phased in throughout 2015 for all active instruments. The on-line cache was populated instrument by instrument over the course of 2015. As data were placed in the cache, the flag that triggers OTFR was reset so that OTFR no longer runs on these data. "Hybrid" requests to the Archive are handled transparently, with data not yet in the cache provided via OTFR and the remaining data provided from the cache. Users do not need to make separate requests. Users of the MAST Portal will be able to download data from the cache immediately. For data not in the cache, the Portal will send the user to the standard "Retrieval Options Page," allowing the user to direct the Archive to process and deliver the data. The classic MAST Search and Retrieval interface has the same look and feel as previously. Minor changes, unrelated to the cache, have been made to the format of the Retrieval Options Page.

  16. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: (1) transfer of a raw sample from the tool to the SHEC subsystem and (2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  17. A Lock-Based Cache Coherence Protocol for Scope Consistency

    Institute of Scientific and Technical Information of China (English)

    Hu, Weiwu; Shi, Weisong; et al.

    1998-01-01

    Directory protocols are widely adopted to maintain cache coherence in distributed shared memory multiprocessors. Although scalable to a certain extent, directory protocols are complex enough to prevent them from being used in very large scale multiprocessors with tens of thousands of nodes. This paper proposes a lock-based cache coherence protocol for scope consistency. It does not rely on directory information to maintain cache coherence. Instead, cache coherence is maintained by requiring the releasing processor of a lock to store all write-notices generated in the associated critical section to the lock; the acquiring processor then invalidates or updates its locally cached data copies according to the write-notices of the lock. To evaluate the performance of the lock-based cache coherence protocol, a software SVM system named JIAJIA is built on a network of workstations. Besides the lock-based cache coherence protocol, JIAJIA is also characterized by its shared memory organization scheme, which combines the physical memories of multiple workstations to form a large shared space. Performance measurements with the SPLASH-2 program suite and NAS benchmarks indicate that, compared to recent SVM systems such as CVM, higher speedup is achieved by JIAJIA. Besides, JIAJIA can solve large-scale problems that cannot be solved by other SVM systems due to memory size limitations.
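The release/acquire scheme described in the abstract can be illustrated with a toy model: the releaser flushes its dirty lines to home memory and attaches write-notices to the lock, and the next acquirer invalidates exactly the listed copies, so no directory is needed. This is a sketch under simplifying assumptions (a dict stands in for home memory, and the first acquirer consumes the notices), not JIAJIA's actual implementation.

```python
class Lock:
    """Carries the write-notices left by the last releasing processor."""
    def __init__(self):
        self.write_notices = set()

class Processor:
    """Toy lock-based coherence model: no directory state anywhere;
    coherence travels with the lock as a set of dirtied addresses."""
    def __init__(self, memory):
        self.memory = memory   # dict standing in for the shared home pages
        self.cache = {}        # locally cached copies
        self.dirty = set()     # addresses written in the critical section

    def acquire(self, lock):
        for addr in lock.write_notices:
            self.cache.pop(addr, None)   # invalidate per write-notice
        lock.write_notices.clear()       # simplification: first acquirer consumes them

    def read(self, addr):
        if addr not in self.cache:
            self.cache[addr] = self.memory[addr]   # miss: fetch from home
        return self.cache[addr]

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)

    def release(self, lock):
        for addr in self.dirty:
            self.memory[addr] = self.cache[addr]   # flush to home
            lock.write_notices.add(addr)           # record write-notice
        self.dirty.clear()
```

A stale copy on another processor survives until that processor acquires the lock, which is exactly the guarantee scope consistency asks for.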

  18. dCache, a Distributed Storage Data Caching System

    Institute of Scientific and Technical Information of China (English)

    Michael Ernst; Charles Waldman; et al.

    2001-01-01

    This article is about a piece of middleware that allows converting a dumb tape-based Tertiary Storage System into a multi-petabyte random access device with thousands of channels. Using typical caching mechanisms, the software optimizes access to the underlying Storage System and makes better use of possibly expensive drives and robots, or allows cheap and slow devices to be integrated without introducing unacceptable performance degradation. In addition, using the standard NFS2 protocol, the dCache provides a unique view into the storage repository, hiding the physical location of the file data, cached or tape only. Bulk data transfer is supported through the kerberized FTP protocol and a C-API providing POSIX file access semantics. Dataset staging and disk space management are performed invisibly to the data clients. The project is a DESY/Fermilab joint effort to overcome limitations in the usage of tertiary storage resources common to many HEP labs. The distributed cache nodes may range from high performance SGI machines to commodity CERN Linux-IDE-like file server models. Different cache nodes are assumed to have different affinities to particular storage groups or file sets. Affinities may be defined manually or are calculated by the dCache based on topology considerations. Cache nodes may have different disk space management policies to match the large variety of applications, from raw data to user analysis data pools.

  19. Caching Stars in the Sky: A Semantic Caching Approach to Accelerate Skyline Queries

    CERN Document Server

    Bhattacharya, Arnab; Dutta, Sourav

    2011-01-01

    Multi-criteria decision making has been made possible with the advent of skyline queries. However, processing such queries for high dimensional datasets remains a time consuming task. Real-time applications are thus infeasible, especially for non-indexed skyline techniques where the datasets arrive online. In this paper, we propose a caching mechanism that uses the semantics of previous skyline queries to improve the processing time of a new query. In addition to exact queries, utilizing such special semantics allows accelerating related queries. We achieve this by generating partial result sets guaranteed to be in the skyline sets. We also propose an index structure for efficient organization of the cached queries. Experiments on synthetic and real datasets show the effectiveness and scalability of our proposed methods.

  20. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users' logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.

  1. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users’ logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.
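The structure both records describe, a static cache preloaded with the most popular queries from the logs plus a dynamic cache as an auxiliary, can be sketched as below. This is a minimal illustration, assuming an LRU policy for the dynamic level (the paper's exact policies and distribution strategy are not reproduced here); the class and parameter names are mine.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Sketch of a two-level query cache: a static level that is never
    evicted (preloaded top queries) and a dynamic LRU level for the rest."""

    def __init__(self, popular, dynamic_capacity=128):
        self.static = dict(popular)          # highest-rank queries, fixed
        self.dynamic = OrderedDict()         # auxiliary cache in LRU order
        self.capacity = dynamic_capacity

    def get(self, query):
        if query in self.static:
            return self.static[query]
        if query in self.dynamic:
            self.dynamic.move_to_end(query)  # refresh recency
            return self.dynamic[query]
        return None                          # miss: caller queries the index

    def put(self, query, results):
        if query in self.static:
            return                           # already covered by level one
        self.dynamic[query] = results
        self.dynamic.move_to_end(query)
        if len(self.dynamic) > self.capacity:
            self.dynamic.popitem(last=False) # evict least recently used
```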

  2. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca

  3. Performance of defect-tolerant set-associative cache memories

    Science.gov (United States)

    Frenzel, J. F.

    1991-01-01

    The increased use of on-chip cache memories has led researchers to investigate their performance in the presence of manufacturing defects. Several techniques for yield improvement are discussed and results are presented which indicate that set-associativity may be used to provide defect tolerance as well as improve the cache performance. Tradeoffs between several cache organizations and replacement strategies are investigated and it is shown that token-based replacement may be a suitable alternative to the widely-used LRU strategy.
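The defect-tolerance idea above, a defective way in a set is simply taken out of service and the set degrades to lower associativity, can be modeled in a few lines. This is an illustrative simulation with LRU replacement (not the token-based scheme the record evaluates); the field widths and the assumption of at least one working way per set are mine.

```python
class SetAssociativeCache:
    """Toy set-associative cache: ways listed in `defects` are never
    allocated, so a set with a manufacturing defect keeps working at
    reduced associativity."""

    def __init__(self, num_sets=4, ways=4, defects=()):
        self.num_sets = num_sets
        self.defects = set(defects)                    # (set_index, way) pairs
        self.sets = [[None] * ways for _ in range(num_sets)]   # stored tags
        self.lru = [list(range(ways)) for _ in range(num_sets)]  # LRU -> MRU

    def access(self, addr):
        s, tag = addr % self.num_sets, addr // self.num_sets
        line = self.sets[s]
        if tag in line:
            w = line.index(tag)
            self.lru[s].remove(w); self.lru[s].append(w)   # mark MRU
            return True                                     # hit
        # miss: victim is the least recently used non-defective way
        # (assumes at least one working way remains in the set)
        for w in self.lru[s]:
            if (s, w) not in self.defects:
                break
        line[w] = tag
        self.lru[s].remove(w); self.lru[s].append(w)
        return False
```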

  4. Model Checking Data Consistency for Cache Coherence Protocols

    Institute of Scientific and Technical Information of China (English)

    Hong Pan; Hui-Min Lin; Yi Lv

    2006-01-01

    A method for automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check if the protocol under investigation satisfies the required properties. Using this method a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size, by appealing to the data independence technique.

  5. Randomized Caches Considered Harmful in Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Jan Reineke

    2014-06-01

    Full Text Available We investigate the suitability of caches with randomized placement and replacement in the context of hard real-time systems. Such caches have been claimed to drastically reduce the amount of information required by static worst-case execution time (WCET analysis, and to be an enabler for measurement-based probabilistic timing analysis. We refute these claims and conclude that with prevailing static and measurement-based analysis techniques caches with deterministic placement and least-recently-used replacement are preferable over randomized ones.

  6. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the Grid...

  7. A Novel Cache Invalidation Scheme for Mobile Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we propose a strategy for maintaining cache consistency in wireless mobile environments. It adds a validation server (VS) into the GPRS network, utilizes the location information of the mobile terminal held in the SGSN located at the GPRS backbone, sends invalidation information only to mobile terminals that are online and hold the cached data, and reduces the amount of information sent in asynchronous transmission. This strategy enables a mobile terminal to access cached data with very little computation, little delay and arbitrary disconnection intervals, and outperforms the synchronous IR and asynchronous state (AS) schemes in overall performance.

  8. Plant genetics: RNA cache or genome trash?

    Science.gov (United States)

    Ray, Animesh

    2005-09-01

    According to classical mendelian genetics, individuals homozygous for an allele always breed true. Lolle et al. report a pattern of non-mendelian inheritance in the hothead (hth) mutant of Arabidopsis thaliana, in which a plant homozygous at a particular locus upon self-crossing produces progeny that are 10% heterozygous; they claim that this is the result of the emerging allele having been reintroduced into the chromosome from a cache of RNA inherited from a previous generation. Here I suggest that these results are equally compatible with a gene conversion that occurred through the use as a template of DNA fragments that were inherited from a previous generation and propagated in archival form in the meristem cells that generate the plant germ lines. This alternative model is compatible with several important observations by Lolle et al.

  9. MELOC - Memory and Location Optimized Caching for Mobile Ad Hoc Networks

    Science.gov (United States)

    2011-01-01

    no. 11, pp. 1515-1532, Piscataway, NJ, USA [3] Ying-Hong Wang, Jenhui Chen, Chih-Feng Chao and Tai-Hong Yueh, A Dynamic Caching Mechanism for...caching decision based on neighboring nodes is not effective. Wang et al [3] focus on dynamic caching integrated with a dynamic backup routing protocol...networks. 2.4. CACHING USING DYNAMIC BACKUP ROUTING PROTOCOL

  10. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  11. Toxicity and medical countermeasure studies on the organophosphorus nerve agents VM and VX

    OpenAIRE

    Rice, Helen; Dalton, Christopher H.; Price, Matthew E.; Stuart J Graham; Green, A. Christopher; Jenner, John; Groombridge, Helen J.; Timperley, Christopher M.

    2015-01-01

    To support the effort to eliminate the Syrian Arab Republic chemical weapons stockpile safely, there was a requirement to provide scientific advice based on experimentally derived information on both toxicity and medical countermeasures (MedCM) in the event of exposure to VM, VX or VM–VX mixtures. Complementary in vitro and in vivo studies were undertaken to inform that advice. The penetration rate of neat VM was not significantly different from that of neat VX, through either guinea pig or p...

  12. Reducing Soft-error Vulnerability of Caches using Data Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2016-01-01

    With ongoing chip miniaturization and voltage scaling, particle strike-induced soft errors present an increasingly severe threat to the reliability of on-chip caches. In this paper, we present a technique to reduce the vulnerability of caches to soft errors. Our technique uses data compression to reduce the number of vulnerable data bits in the cache and performs selective duplication of more critical data bits to provide extra protection to them. Microarchitectural simulations have shown that our technique is effective in reducing the architectural vulnerability factor (AVF) of the cache and outperforms another technique. For single and dual-core system configurations, the average reduction in AVF is 5.59X and 8.44X, respectively. Also, the implementation and performance overheads of our technique are minimal and it is useful for a broad range of workloads.

  13. DepenDNS: Dependable Mechanism against DNS Cache Poisoning

    Science.gov (United States)

    Sun, Hung-Min; Chang, Wen-Hsuan; Chang, Shih-Ying; Lin, Yue-Hsun

    DNS cache poisoning attacks have been known for a long time. In 2008, Kaminsky made the attacks more powerful based on a nonce query method. By leveraging Kaminsky's attack, phishing becomes large-scale since victims can hardly detect the attacks. Hence, DNS cache poisoning is a serious threat in the current DNS infrastructure. In this paper, we propose a countermeasure, DepenDNS, to prevent cache poisoning attacks. DepenDNS queries multiple resolvers concurrently to verify a trustworthy answer while users perform payment transactions, e.g., auction, banking. Without modifying any resolver or authority server, DepenDNS is conveniently deployed on the client side. At the end of the paper, we conduct several experiments on DepenDNS to show its efficiency. We believe DepenDNS is a comprehensive solution against cache poisoning attacks.
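The core DepenDNS idea, query several resolvers concurrently and only trust an answer a clear majority agrees on, can be sketched as follows. This is a hedged illustration, not the paper's algorithm: `resolvers` here is a list of plain callables standing in for real resolver queries, and the 60% quorum is an assumed parameter.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def depend_dns_lookup(hostname, resolvers, quorum=0.6):
    """Ask every resolver in parallel; accept an IP only if the share
    of resolvers agreeing on it reaches `quorum`, so a single poisoned
    cache cannot redirect the client."""
    with ThreadPoolExecutor(max_workers=len(resolvers)) as pool:
        answers = list(pool.map(lambda resolve: resolve(hostname), resolvers))
    tally = Counter(a for a in answers if a is not None)
    if tally:
        ip, votes = tally.most_common(1)[0]
        if votes / len(resolvers) >= quorum:
            return ip                    # clear majority agrees
    raise RuntimeError(f"no trustworthy answer for {hostname!r}")
```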

  14. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks...
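Prefix sums, one of the building blocks named above, has a classic private-cache-friendly structure: each core scans its own contiguous block (good locality), block totals are combined once, and each block is then offset. The sketch below simulates that three-phase scheme sequentially for clarity; it is an illustration of the general technique, not the paper's algorithm.

```python
from itertools import accumulate

def parallel_prefix_sums(data, num_cores):
    """Blocked prefix sums: per-block local scans, an exclusive scan of
    the block totals, then a per-block offset pass. Each phase touches
    only contiguous data, which suits private caches."""
    n = len(data)
    size = -(-n // num_cores)                       # ceil(n / num_cores)
    blocks = [data[i:i + size] for i in range(0, n, size)]
    # phase 1: each "core" computes a local inclusive prefix sum
    local = [list(accumulate(b)) for b in blocks]
    # phase 2: exclusive prefix sums of the block totals
    offsets = [0] + list(accumulate(b[-1] for b in local))[:-1]
    # phase 3: each "core" adds its block's offset
    return [x + off for b, off in zip(local, offsets) for x in b]
```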

  15. NIC atomic operation unit with caching and bandwidth mitigation

    Science.gov (United States)

    Hemmert, Karl Scott; Underwood, Keith D.; Levenhagen, Michael J.

    2016-03-01

    A network interface controller atomic operation unit and a network interface control method comprising, in an atomic operation unit of a network interface controller, using a write-through cache and employing a rate-limiting functional unit.

  16. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data results in a requirement for high scalability. Our current priority is to move towards a distributed and properly load-balanced set of services based on containers. The aim of this project is to implement a generic caching mechanism applicable to our services and chosen architecture. The project will at first require research into the different aspects of distributed caching (persistence, no gc-caching, cache consistency, etc.) and the available technologies, followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation, in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  17. Cache River National Wildlife Refuge Water Resource Inventory and Assessment

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This Water Resource Inventory and Assessment (WRIA) for Cache River National Wildlife Refuge summarizes available and relevant information for refuge water...

  18. Performance Improvement of Cache Management In Cluster Based MANET

    Directory of Open Access Journals (Sweden)

    Abdulaziz Zam

    2013-08-01

    Full Text Available Caching is one of the most effective techniques used to improve data access performance in wireless networks. Accessing data from a remote server imposes high latency and power consumption through forwarding nodes that guide the requests to the server and send data back to the clients. In addition, accessing data may be unreliable or even impossible due to erroneous wireless links and frequent disconnections. Due to the nature of MANETs and their highly frequent topology changes, and also the small cache size and constrained power supply of mobile nodes, management of the cache is a challenge. To maintain the MANET's stability and scalability, clustering is considered an effective approach. In this paper an efficient cache management method is proposed for the Cluster-Based Mobile Ad-hoc NETwork (C-B-MANET). The performance of the method is evaluated in terms of packet delivery ratio, latency and overhead metrics.

  19. Constant time worker thread allocation via configuration caching

    Science.gov (United States)

    Eichenberger, Alexandre E; O'Brien, John K. P.

    2014-11-04

    Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread are accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads.
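The mechanism the abstract describes, cache the allocation computed for a master thread so repeat requests complete without redoing the work, can be sketched as a simple memo table. This is a hypothetical software analogue of the patented runtime mechanism; the class, the `(master, region)` key, and the worker pool are illustrative assumptions.

```python
class ThreadTeamCache:
    """Sketch of constant-time worker allocation via configuration
    caching: the first request for a (master, region) pair computes an
    allocation; subsequent identical requests are an O(1) dict lookup."""

    def __init__(self, pool):
        self.pool = pool               # available worker thread ids
        self.allocations = {}          # (master, region) -> worker ids

    def allocate(self, master, region, count):
        key = (master, region)
        if key not in self.allocations:            # slow path: compute once
            workers = [self.pool.pop() for _ in range(count)]
            self.allocations[key] = workers
        return self.allocations[key]               # fast path: cached team
```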

  20. The purpose of this study is to determine the clinical usefulness of Valsalva maneuver (VM) to evaluate piriform-fossae lesions on helical CT

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Shoji; Yasuda, Shigeo; Kimura, Shinjiro; Ito, Hisao [Chiba Univ. (Japan). Hospital; Fujimoto, Hajime; Nasu, Katsuhiro; Motoori, Ken

    1997-07-01

    Forty-four patients with suspected hypopharyngeal carcinoma underwent both conventional CT under quiet breath holding and helical CT under the Valsalva maneuver (VMCT). All patients successfully performed the Valsalva maneuver during image acquisition. Normal piriform fossae were dilated well under VM. Five fossae involved by hypopharyngeal carcinoma were poorly dilated on VMCT. In conclusion, VMCT is a supportive method for evaluating the piriform fossae. If piriform fossa lesions are suspected on conventional CT, VMCT should be performed. (author)

  1. Intelligent Caching Wireless Data Access in the Wireless Spectrum

    Directory of Open Access Journals (Sweden)

    Syazwa Mad Jais

    2013-05-01

    Full Text Available Wireless technologies are evolving rapidly, and many users are expected to shift to more advanced devices, driving higher demand for wireless spectrum. However, the frequency allocation capacity of the wireless spectrum is typically limited for wireless data transmission. Therefore, as the load from wireless users on wireless communications increases, cache mechanisms such as Web caches or Web servers become crucial. The scalability demands on internet infrastructure keep increasing as the internet continues to grow in popularity and size. The existence and development of Web caching technologies will therefore contribute to bandwidth savings, network latency reduction and improved content availability, and subsequently to server load balancing. This paper studies and investigates cache performance in the wireless spectrum with the purpose of dealing with data growth, since the spectrum crisis has become a serious matter lately. The performance improvement will be observed using a caching scheme which allows for time shifting and load shifting in accessing wireless data, with better cache deployment in the network system.

  2. Cache-Conscious Data Cube Computation on a Modern Processor

    Institute of Scientific and Technical Information of China (English)

    Hua Luan; Xiao-Yong Du; Shan Wang

    2009-01-01

    Data cube computation is an important problem in the field of data warehousing and OLAP (online analytical processing). Although it has been studied extensively in the past, most of its algorithms are designed without considering CPU and cache behavior. In this paper, we first propose a cache-conscious cubing approach called CC-Cubing to efficiently compute data cubes on a modern processor. This method can enhance CPU and cache performances. It adopts an integrated depth-first and breadth-first partitioning order and partitions multiple dimensions simultaneously. The partitioning scheme improves the data spatial locality and increases the utilization of cache lines. Software prefetching techniques are then applied in the sorting phase to hide the expensive cache misses associated with data scans. In addition, a cache-aware method is used in CC-Cubing to switch the sort algorithm dynamically. Our performance study shows that CC-Cubing outperforms BUC, Star-Cubing and MM-Cubing in most cases. Then, in order to fully utilize an SMT (simultaneous multithreading) processor, we present a thread-based CC-Cubing-SMT method. This parallel method provides an improvement up to 27% for the single-threaded CC-Cubing algorithm.

  3. An investigation of DUA caching strategies for public key certificates

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, T.C.

    1993-11-01

    Internet Privacy Enhanced Mail (PEM) provides security services to users of Internet electronic mail. PEM is designed with the intention that it will eventually obtain public key certificates from the X.500 directory service. However, such a capability is not present in most PEM implementations today. While the prevalent PEM implementation uses a public key certificate-based strategy, certificates are mostly distributed via e-mail exchanges, which raises several security and performance issues. In this thesis research, we changed the reference PEM implementation to make use of the X.500 directory service instead of local databases for public key certificate management. The thesis discusses some problems with using the X.500 directory service, explores the relevant issues, and develops an approach to address them. The approach makes use of a memory cache to store public key certificates. We implemented a centralized cache server and addressed the denial-of-service security problem that is present in the server. In designing the cache, we investigated several cache management strategies. One result of our study is that the use of a cache significantly improves performance. Our research also indicates that security incurs extra performance cost. Different cache replacement algorithms do not seem to yield significant performance differences, while delaying dirty-writes to the backing store does improve performance over immediate writes.
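The replacement-policy finding above, delaying dirty-writes to the backing store improves performance over immediate writes, rests on the standard write-back vs. write-through distinction, which can be illustrated as below. This is a generic LRU sketch under my own naming, not the thesis's certificate-cache implementation.

```python
from collections import OrderedDict

class WriteBackCache:
    """LRU cache contrasting the two write policies: write-through
    updates the backing store immediately; write-back marks the entry
    dirty and defers the store update to eviction or an explicit flush."""

    def __init__(self, backing, capacity=4, write_through=False):
        self.backing = backing            # dict standing in for the store
        self.capacity = capacity
        self.write_through = write_through
        self.cache = OrderedDict()        # key -> (value, dirty)

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)
            return self.cache[key][0]
        value = self.backing[key]         # miss: fetch from backing store
        self._insert(key, value, dirty=False)
        return value

    def put(self, key, value):
        if self.write_through:
            self.backing[key] = value     # immediate write
            self._insert(key, value, dirty=False)
        else:
            self._insert(key, value, dirty=True)   # delayed dirty-write

    def _insert(self, key, value, dirty):
        self.cache[key] = (value, dirty)
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            old_key, (old_val, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.backing[old_key] = old_val    # write back on eviction

    def flush(self):
        for key, (value, dirty) in self.cache.items():
            if dirty:
                self.backing[key] = value
        self.cache = OrderedDict((k, (v, False)) for k, (v, _) in self.cache.items())
```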

  4. Food caching in orb-web spiders (Araneae: Araneoidea)

    Science.gov (United States)

    Champion de Crespigny, Fleur E.; Herberstein, Marie E.; Elgar, Mark A.

    2001-01-01

    Caching or storing surplus prey may reduce the risk of starvation during periods of food deprivation. While this behaviour occurs in a variety of birds and mammals, it is infrequent among invertebrates. However, golden orb-web spiders, Nephila edulis, incorporate a prey cache in their relatively permanent web, which they feed on during periods of food shortage. Heavier spiders significantly reduced weight loss if they were able to access a cache, but lost weight if the cache was removed. The presence or absence of stored prey had no effect on the weight loss of lighter spiders. Furthermore, N. edulis always attacked new prey, irrespective of the number of unprocessed prey in the web. In contrast, females of Argiope keyserlingi, who build a new web every day and do not cache prey, attacked fewer new prey items if some had already been caught. Thus, a necessary pre-adaptation to the evolution of prey caching in orb-web spiders may be a durable or permanent web, such as that constructed by Nephila.

  5. Web Caching:A Way to Improve Web QoS

    Institute of Scientific and Technical Information of China (English)

    Ming-Kuan Liu; Fei-Yue Wang; Daniel Dajun Zeng

    2004-01-01

    As the Internet and World Wide Web grow at a fast pace, it is essential that the Web's performance should keep up with increased demand and expectations. Web Caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing the Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web Caching literature. It discusses the state-of-the-art web caching schemes and techniques, with emphasis on the recent developments in Web Caching technology such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.

  6. Megafloods and Clovis cache at Wenatchee, Washington

    Science.gov (United States)

    Waitt, Richard B.

    2016-05-01

Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia centuries more, till it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  8. Ventral medial prefrontal cortex (vmPFC) as a target of the dorsolateral prefrontal modulation by transcranial direct current stimulation (tDCS) in drug addiction.

    Science.gov (United States)

    Nakamura-Palacios, Ester Miyuki; Lopes, Isabela Bittencourt Coutinho; Souza, Rodolpho Albuquerque; Klauss, Jaisa; Batista, Edson Kruger; Conti, Catarine Lima; Moscon, Janine Andrade; de Souza, Rodrigo Stênio Moll

    2016-10-01

Here, we report some electrophysiologic and imaging effects of transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex (dlPFC) in drug addiction, notably in alcohol and crack-cocaine dependence. Low-resolution electromagnetic tomography (LORETA) analysis of event-related potentials (ERPs) under drug-related cues, more specifically of the P3 segment (300-500 ms), in both alcoholics and crack-cocaine users showed that the ventral medial prefrontal cortex (vmPFC) was the brain area with the largest increase in activation under drug-related cues in those subjects who remained abstinent during and after treatment with bilateral tDCS (2 mA, 35 cm(2), cathodal left and anodal right) over the dlPFC, applied repetitively (five daily sessions). In an additional study in crack-cocaine users, which showed craving decreases after repetitive bilateral tDCS, we examined data from diffusion tensor imaging (DTI) and found increased DTI parameters in the left connection between vmPFC and nucleus accumbens (NAcc), such as the number of voxels, fractional anisotropy (FA) and apparent diffusion coefficient (ADC), in tDCS-treated crack-cocaine users compared to the sham-tDCS group. This increase in DTI parameters was significantly correlated with the decrease in craving after repetitive tDCS. The vmPFC is implicated in the control of drug seeking, possibly by extinguishing this behavior. In our studies, bilateral dlPFC tDCS reduced relapses and craving for drug use, and increased vmPFC activation under drug cues, which may be of great importance for the control of drug use in addiction.

  9. A Semantic Cache Framework for Secure XML Queries

    Institute of Scientific and Technical Information of China (English)

    Jian-Hua Feng; Guo-Liang Li; Na Ta

    2008-01-01

Secure XML query answering to protect data privacy and semantic caching to speed up XML query answering are two hot spots in current research on XML database systems. While each issue has been explored in depth on its own, they have not been studied together; that is, the problem of semantic caching for secure XML query answering has not been addressed yet. In this paper, we present a joint treatment of these two aspects and propose an efficient semantic cache framework for secure XML query answering, which can improve the performance of XML database systems under secure circumstances. Our framework combines access control, user privilege management over XML data, and state-of-the-art semantic XML query cache techniques, to ensure that data are presented only to authorized users in an efficient way. To the best of our knowledge, the approach we propose here is among the first efforts to combine caching and security for XML databases to improve system performance. The efficiency of our framework is verified by comprehensive experiments.

  10. A Scalable proxy cache for Grid Data Access

    Science.gov (United States)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
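
The redirect-based horizontal scaling described above can be sketched as a deterministic mapping from requested file path to cache node, so that repeated requests for the same file land on the node that already holds it. The node names and the SHA-256-based scheme below are illustrative assumptions, not details of the Nikhef prototype:

```python
import hashlib

# Hypothetical cache-node pool; the host names are illustrative.
CACHE_NODES = ["cache-01.example.org", "cache-02.example.org", "cache-03.example.org"]

def pick_node(path, nodes=tuple(CACHE_NODES)):
    """Deterministically map a file path to one cache node, so repeated
    requests for the same file are served by the same node's cache."""
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

def redirect_url(path):
    """Target of the HTTP redirect a front-end server could issue."""
    return "https://" + pick_node(path) + path
```

Hash-based placement keeps the front end stateless: it holds no table of which node caches which file, yet requests for a given file always reach the same cache node.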

  11. Horizontally scaling dCache SRM with the Terracotta platform

    Science.gov (United States)

    Perelmutov, T.; Crawford, M.; Moibenko, A.; Oleynik, G.

    2011-12-01

The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform [1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  12. Properties and Microstructure of Laser Welded VM12-SHC Steel Pipes Joints

    Directory of Open Access Journals (Sweden)

    Skrzypczyk A.

    2016-06-01

Full Text Available This paper presents the results of microstructure examinations and tests of welded joints of new-generation VM12-SHC martensitic steel, made with a high-power CO2 laser (LBW method) and a bifocal welding head. VM12-SHC is a material intended for power-generation installations, designed to replace steels currently in use. Its high content of chromium and other alloying elements improves its resistance and strength characteristics. The use of VM12-SHC steel in the production of superheaters, heating chambers and walls of steam boilers has motivated various weldability studies. The article presents the results of destructive and non-destructive tests: static bending and Vickers hardness tests (destructive), and VT, RT, UT, and micro- and macroscopic examinations (non-destructive).

  13. Nut Caching by Blue Jays (Cyanocitta cristata L.): Implications for Tree Demography

    National Research Council Canada - National Science Library

    W. Carter Johnson; Curtis S. Adkisson; Thomas R. Crow; Mark D. Dixon

    1997-01-01

    .... Three aspects were examined: jay habitat preferences for caching, jay caching patterns before and after fire, and the influence of predation on nuts by small mammals on tree recruitment in jay territories...

  14. Temperature and leakage aware techniques to improve cache reliability

    Science.gov (United States)

    Akaaboune, Adil

Decreasing power consumption in small devices such as handhelds and cell phones, as well as in high-performance processors, is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, hence the need for power-efficient cache memories. Cache is the simplest cost-effective method of attaining a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. The microprocessor uses the cache to bridge the performance gap between the processor and main memory (RAM); memory bandwidth is therefore frequently a bottleneck that can significantly limit peak throughput. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems, but less research has been done on the relationship between heat management, leakage power, and cost per die. Lately, the focus of power dissipation in new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that drains the battery and causes early shutdown through wasted energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, reduce dissipation, and shrink the size of increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work first proves that eliminating hotspots in the cache memory reduces leakage power and therefore improves reliability.
The second technique studied is data quality management that improves the quality of the data

  15. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk introducing complexity into the otherwise simple WCET analysis. In this work, we investigate three...

  16. Scatter hoarding and cache pilferage by superior competitors: an experiment with wild boar (Sus scrofa)

    NARCIS (Netherlands)

    Suselbeek, L.; Adamczyk, V.M.A.P.; Bongers, F.; Nolet, B.A.; Prins, H.H.T.; van Wieren, S.E.; Jansen, P.A.

    2014-01-01

    Food-hoarding patterns range between larder hoarding (a few large caches) and scatter hoarding (many small caches), and are, in essence, the outcome of a hoard size–number trade-off in pilferage risk. Animals that scatter hoard are believed to do so, despite higher costs, to reduce loss of cached fo

  17. 76 FR 5781 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-02-02

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory.... Written comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North...

  18. 75 FR 65295 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2010-10-22

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meetings. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, Utah...

  19. 76 FR 53879 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-08-30

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, Utah...

  20. 76 FR 20310 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee.

    Science.gov (United States)

    2011-04-12

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee. AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, Utah...

  1. 75 FR 34973 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2010-06-21

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, UT...

  2. 75 FR 71669 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2010-11-24

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of Meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, Utah...

  3. 76 FR 28211 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-05-16

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North, Provo, Utah...

  4. 76 FR 14372 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-03-16

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory.... Written comments should be sent to Loyal Clark, Uinta-Wasatch-Cache National Forest, 88 West 100 North...

  5. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...

  6. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm. Additionally, our new algorithm generalizes the best known theoretical complexity trade-offs for the problem....

  7. File caching in video-on-demand servers

    Science.gov (United States)

    Wang, Fu-Ching; Chang, Shin-Hung; Hung, Chi-Wei; Chang, Jia-Yang; Oyang, Yen-Jen; Lee, Meng-Huang

    1997-12-01

This paper studies the file caching issue in video-on-demand (VOD) servers. Because the characteristics of video files are very different from those of conventional files, different types of caching algorithms must be developed. For VOD servers, the goal is to optimize resource allocation and the tradeoff between memory and disk bandwidth. This paper first proves that optimizing this allocation and tradeoff is an NP-complete problem. Then, a heuristic algorithm, called the generalized relay mechanism, is introduced, and a simulation-based optimization procedure is conducted to evaluate the effects of applying the generalized relay mechanism.
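
Since the exact allocation problem is NP-complete, a heuristic is the natural approach. The sketch below is not the paper's generalized relay mechanism; it is a generic greedy density heuristic, shown only to make the memory vs. disk-bandwidth tradeoff concrete (the cost model and names are assumptions):

```python
def choose_cached_videos(videos, mem_budget):
    """Greedy heuristic for the memory/disk-bandwidth tradeoff: cache the
    videos that save the most disk bandwidth per byte of memory consumed,
    until the memory budget is exhausted.

    videos: dict mapping name -> (memory_cost_bytes, disk_bandwidth_saved_bps)
    """
    # Rank by bandwidth saved per byte of memory (density), best first.
    ranked = sorted(videos.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    chosen, used = [], 0
    for name, (mem, _bw) in ranked:
        if used + mem <= mem_budget:
            chosen.append(name)
            used += mem
    return chosen
```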

  8. BP Network Based Users' Interest Model in Mining WWW Cache

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

By analyzing the WWW cache model, we propose a user-interest description method based on fuzzy theory, and user-interest inference relations based on a BP (back-propagation) neural network. With this method, users' interest in the WWW cache can be described, and a neural network of user interest can be constructed through forward propagation of interest and backward propagation of errors. This neural network can infer users' interest. This model is not a simple extension of the simple interest model, but an all-round improvement of the model and its related algorithms.
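
A minimal back-propagation network of the kind the abstract describes can be sketched as follows; the architecture, learning rate, and the reading of inputs as fuzzy interest degrees are illustrative assumptions, not the authors' model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBPNet:
    """Minimal one-hidden-layer network trained by back-propagation.
    Inputs might be fuzzy membership degrees of page categories for a
    user; the output is an interest score in [0, 1]. Illustrative only."""

    def __init__(self, n_in, n_hidden, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hidden)]

    def forward(self, x):
        # Forward propagation of interest: input -> hidden -> output.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)))
        return self.y

    def train_step(self, x, target, lr=0.5):
        # Backward propagation of errors: squared-error gradient descent.
        y = self.forward(x)
        dy = (y - target) * y * (1.0 - y)           # output delta
        for j, hj in enumerate(self.h):
            dh = dy * self.w2[j] * hj * (1.0 - hj)  # hidden delta
            self.w2[j] -= lr * dy * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dh * xi
        return (y - target) ** 2
```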

  9. Correlation of VEGF and COX-2 Expression with VM in Malignant Melanomas

    Institute of Scientific and Technical Information of China (English)

Baocun Sun; Shiwu Zhang; Xiulan Zhao; Yanxue Liu; Chunsheng Ni; Danfang Zhang; Hong Qi; Zhiyong Liu; Xishan Hao

    2004-01-01

OBJECTIVE To investigate the relationship between vascular endothelial growth factor (VEGF) and cyclooxygenase-2 (COX-2) in melanomas, and the difference in VEGF and COX-2 expression between melanomas with and without vasculogenic mimicry (VM). METHODS Sixty cases of malignant melanoma embedded in paraffin were studied. The tumors were divided into a high-grade malignant group and a low-grade malignant group based on tumor type, atypia and patient survival time. Tissue microarrays were produced from these paraffin-embedded tumor tissues and stained for VEGF, COX-2 and PAS. The difference in VEGF and COX-2 expression in the malignant melanomas was compared using a grid count. In addition, the tumors were divided into mimicry and non-mimicry groups based on their PAS staining, and the differences between the PAS-positive and PAS-negative areas of the two groups were compared. RESULTS VEGF and COX-2 expression was lower in melanomas in which VM was absent, and VEGF and COX-2 expression in high-grade malignant melanomas was higher than in low-grade malignant melanomas. Expression of VEGF was correlated with COX-2 expression. CONCLUSION VM exists in some high-grade malignant melanomas. The differences and relations between VEGF and COX-2 suggest that some high-grade malignant melanomas possess a unique molecular mechanism of tumor metastasis and blood supply.

  10. HotpathVM: An Effective JIT for Resource-constrained Devices

    DEFF Research Database (Denmark)

    Gal, Andreas; Franz, Michael; Probst, Christian

    2006-01-01

    We present a just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet surprisingly effective. Our system dynamically identifies traces of frequently executed bytecode instructions (which may span several basic blocks across several methods) and compiles...... benchmarks show a speedup that in some cases rivals heavy-weight just-in-time compilers....

  11. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-14

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteomewide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  12. Overhead-Aware-Best-Fit (OABF) Resource Allocation Algorithm for Minimizing VM Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

Wu, Hao [IIT]; Garzoglio, Gabriele [Fermilab]; Ren, Shangping [IIT, Chicago]; Timm, Steven [Fermilab]; Noh, Seo Young [KISTI, Daejeon]

    2014-11-11

FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and wasted resources. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improve VM launching time when a large number of VMs are launched simultaneously.
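
The best-fit decision itself can be sketched in a few lines; the capacity model and the `predict_overhead` callback below are simplifying assumptions standing in for FermiCloud's tuned launching-overhead reference model:

```python
def overhead_aware_best_fit(vm_demand, hosts, predict_overhead):
    """Among hosts with enough free capacity for the VM, pick the one whose
    predicted launch overhead is smallest.

    vm_demand: capacity units the VM needs (simplified one-dimensional model)
    hosts: dict mapping host name -> free capacity units
    predict_overhead: callable host -> predicted launch overhead (stands in
                      for the tuned reference model)
    """
    feasible = [h for h, free in hosts.items() if free >= vm_demand]
    if not feasible:
        return None  # no placement possible; defer the launch
    return min(feasible, key=predict_overhead)
```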

  13. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    Science.gov (United States)

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine (http://3d-e-chem.github.io/3D-e-Chem-VM/) that integrates cheminformatics and bioinformatics tools for the analysis of protein–ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein–ligand interaction data from proteomewide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb). PMID:28125221

  14. The impact of using combinatorial optimisation for static caching of posting lists

    DEFF Research Database (Denmark)

    Petersen, Casper; Simonsen, Jakob Grue; Lioma, Christina

    2015-01-01

Caching posting lists can reduce the amount of disk I/O required to evaluate a query. Current methods use optimisation procedures for maximising the cache hit ratio. A recent method selects posting lists for static caching in a greedy manner and obtains higher hit rates than standard cache eviction policies such as LRU and LFU. However, a greedy method does not formally guarantee an optimal solution. We investigate whether the use of methods guaranteed, in theory, to find an approximately optimal solution would yield higher hit rates. Thus, we cast the selection of posting lists for caching......
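
The greedy selection the abstract refers to can be sketched as a knapsack-style density heuristic; the statistics format below is an assumption, and the combinatorial-optimisation methods the paper investigates would replace this loop with an approximation algorithm:

```python
def select_posting_lists(stats, cache_budget):
    """Greedy static cache selection: rank each term's posting list by
    expected cache hits per byte (query frequency / list size) and fill
    the cache in that order.

    stats: dict mapping term -> (query_frequency, size_bytes)
    """
    ranked = sorted(stats.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    cached, used = set(), 0
    for term, (_freq, size) in ranked:
        if used + size <= cache_budget:
            cached.add(term)
            used += size
    return cached
```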

  15. WCET-based comparison of an instruction scratchpad and a method cache

    DEFF Research Database (Denmark)

    Whitham, Jack; Schoeberl, Martin

    2014-01-01

    This paper compares two proposed alternatives to conventional instruction caches: a scratchpad memory (SPM) and a method cache. The comparison considers the true worst-case execution time (WCET) and the estimated WCET bound of programs using either an SPM or a method cache, using large numbers of...... for a method cache. If WCET bounds are derived by analysis, the WCET bounds for an instruction SPM are often lower than the bounds for a method cache. This means that an SPM may be preferable in practical systems....

  16. dCache, agile adoption of storage technology

    CERN Document Server

    CERN. Geneva

    2012-01-01

For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and access it quickly on-site. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime, storage technology has changed: technology advances have driven down the cost-per-GiB of hard disks, resulting in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  17. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2...

  18. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic pri...

  19. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    The design of tailored hardware has proven a successful strategy to reduce the timing analysis overhead for (hard) real-time systems. The stack cache is an example of such a design that has been proven to provide good average-case performance, while being easy to analyze. So far, however, the ana...

  20. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview over our analysis of the stream ciphers that were selected for phase 3...

  1. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo...

  2. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  3. Statistical Inference-Based Cache Management for Mobile Learning

    Science.gov (United States)

    Li, Qing; Zhao, Jianmin; Zhu, Xinzhong

    2009-01-01

    Supporting efficient data access in the mobile learning environment is becoming a hot research problem in recent years, and the problem becomes tougher when the clients are using light-weight mobile devices such as cell phones whose limited storage space prevents the clients from holding a large cache. A practical solution is to store the cache…

  4. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents it is customary to disable client caching causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents...

  5. Modified LRU Algorithm To Implement Proxy Server With Caching Policies

    Directory of Open Access Journals (Sweden)

    Jitendra Singh Kushwah

    2011-11-01

    Full Text Available In order to produce and develop a software system, it is necessary to choose a suitable algorithm that satisfies the required quality attributes and maintains a trade-off between sometimes conflicting ones. A proxy server is placed between the real server and its clients, and uses caching policies and algorithms to store web documents. The existing algorithms have several drawbacks: some are applicable only to video files and not to other resource types; they say nothing about organizing the data on the proxy server's disk storage; they are difficult to implement; and they require knowledge of the workloads on the proxy server. A major problem with the previously described algorithms is "cold cache pollution". Since all the existing caching algorithms suffer from these disadvantages, this paper proposes a technique to remove the problem of cold cache pollution, and proves mathematically that it is better than the existing LRU-Distance algorithm.
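As background for the LRU variants discussed above, a minimal LRU proxy cache can be sketched in a few lines. This is illustrative only; real proxy caches, and the paper's LRU-Distance variant, also weigh object size, fetch cost and freshness.

```python
from collections import OrderedDict

class LRUProxyCache:
    """Minimal LRU cache sketch for a proxy server (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> document, oldest first

    def get(self, url):
        if url not in self.store:
            return None  # miss: a real proxy would fetch from the origin server
        self.store.move_to_end(url)  # mark as most recently used
        return self.store[url]

    def put(self, url, document):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = document
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUProxyCache(capacity=2)
cache.put("/a", "A")
cache.put("/b", "B")
cache.get("/a")       # /a becomes most recently used
cache.put("/c", "C")  # evicts /b, the least recently used entry
```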

  6. Cache-based memory copy hardware accelerator for multicore systems

    NARCIS (Netherlands)

    Duarte, F.; Wong, S.

    2010-01-01

    In this paper, we present a new architecture of the cache-based memory copy hardware accelerator in a multicore system supporting message passing. The accelerator is able to accelerate memory data movements, in particular memory copies. We perform an analytical analysis based on open-queuing theory

  7. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  8. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

    Full Text Available Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, caches complicate Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  9. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, caches complicate Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
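The full-versus-partial locking trade-off described above can be illustrated with a toy direct-mapped cache simulation. The trace, cache geometry and lock choice below are invented for illustration; this is not the paper's BBIP mechanism.

```python
# Toy direct-mapped instruction cache with per-line locking. Locked lines
# always hit for their pinned block; unlocked lines still adapt to the
# access stream. Pinning a hot block can reduce conflict misses.

NUM_LINES = 4
BLOCK = 16  # bytes per cache line

def simulate(trace, locked):
    """locked maps line index -> pinned block tag; returns the miss count."""
    lines = dict(locked)  # line index -> tag currently held
    misses = 0
    for addr in trace:
        tag = addr // BLOCK
        idx = tag % NUM_LINES
        if lines.get(idx) == tag:
            continue  # hit
        misses += 1
        if idx not in locked:  # locked lines are never replaced
            lines[idx] = tag
    return misses

# A hot loop where blocks 0 and 4 conflict on line 0, plus other code.
trace = [0, 64, 0, 64, 0, 64, 16, 32, 48]
print(simulate(trace, locked={}))      # no locking: the loop thrashes line 0
print(simulate(trace, locked={0: 0}))  # line 0 pinned to block 0: fewer misses
```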

  10. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine holding the attributes of 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there were not enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
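The directory-granularity name cache whose hit rates the paper measures can be sketched as a toy simulation. The trace, capacity and LRU replacement below are our illustrative assumptions, not the paper's trace data or policy.

```python
# Caching whole directories (rather than individual entries) turns every
# lookup in an already-cached directory into a hit, which is one reason
# the paper's directory-granularity caches outperform entry-based ones.
from collections import OrderedDict

def dir_cache_hit_rate(lookups, capacity):
    cache = OrderedDict()  # directory path -> cached marker, LRU order
    hits = 0
    for path in lookups:
        dirname, _, _name = path.rpartition("/")
        if dirname in cache:
            hits += 1
            cache.move_to_end(dirname)
        else:
            cache[dirname] = True  # fetch the whole directory from the server
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits / len(lookups)

trace = ["/usr/bin/ls", "/usr/bin/cat", "/usr/bin/ls",
         "/home/ann/a.txt", "/usr/bin/grep", "/home/ann/b.txt"]
print(dir_cache_hit_rate(trace, capacity=10))
```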

  11. Glacial Isostatic Adjustment with ICE-6G_C (VM5a) and Laterally Heterogeneous Mantle Viscosity

    Science.gov (United States)

    Li, Tanghua; Wu, Patrick; Steffen, Holger

    2017-04-01

    perturbations inferred from the seismic tomography model (Bunge & Grand 2000) logarithmically. The preliminary results of these and other background viscosity profiles will be presented. References: Bunge, H.-P. & Grand, S. P. (2000). Mesozoic plate-motion history below the northeast Pacific Ocean from seismic images of the subducted Farallon slab. Nature, 405(6784):337-340. Peltier, W., Argus, D., and Drummond, R. (2015). Space geodesy constrains ice age terminal deglaciation: The global ICE-6G_C (VM5a) model. Journal of Geophysical Research: Solid Earth, 120(1): 450-487. Wu, P. (2004). Using commercial finite element packages for the study of earth deformations, sea levels and the state of stress. Geophysical Journal International, 158(2): 401-408. Wu, P., Wang, H.S. & Steffen, H. (2012). The role of thermal effect on mantle seismic anomalies under Laurentia and Fennoscandia from observations of Glacial Isostatic Adjustment. Geophysical Journal International, 192(1):7-17.

  12. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks.

    Science.gov (United States)

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-06-25

    Recent trends show that Internet traffic is increasingly dominated by content, accompanied by exponential traffic growth. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that much of the research so far has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design for hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism that considers network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.

  13. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks

    Directory of Open Access Journals (Sweden)

    Min Chen

    2016-06-01

    Full Text Available Recent trends show that Internet traffic is increasingly dominated by content, accompanied by exponential traffic growth. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that much of the research so far has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design for hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism that considers network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.
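A minimal sketch of the four placement tiers listed above, assuming a request is simply served by the nearest tier holding the content. The tier ordering, names and cache contents are invented for illustration; the paper's MS caching scheme additionally optimizes placement under mobility.

```python
# Tiered lookup: local device cache, then nearby devices (D2D), then the
# small-cell base station, then the macrocell, and finally the core network.
TIERS = ["local", "d2d", "sbs", "mbs"]  # ordered from nearest to farthest

def serve(content, caches):
    """Return the first tier whose cache holds `content`, else 'origin'."""
    for tier in TIERS:
        if content in caches.get(tier, set()):
            return tier
    return "origin"  # fall back to fetching over the core network

caches = {
    "local": {"video1"},
    "d2d":   {"video2"},
    "sbs":   {"video2", "video3"},
    "mbs":   {"video4"},
}
print(serve("video3", caches))  # served by the small-cell base station
print(serve("video9", caches))  # not cached at any tier
```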

  14. STUDIES OF SPONDYLOARTHRITIS IN RUSSIA: FROM V.M. BEKHTEREV TO OUR DAYS

    Directory of Open Access Journals (Sweden)

    A. A. Godzenko

    2016-01-01

    Full Text Available The paper briefly describes the history of spondyloarthritis studies from the works of the outstanding Russian neurologist V.M. Bekhterev up to the present time. Special emphasis is laid on the results of the representatives of the scientific school of Professor E.R. Agababova, the organizer of the first spondyloarthritis laboratory in Russia. The major areas of investigation currently under way in Russia are outlined.

  15. Seed perishability determines the caching behaviour of a food-hoarding bird.

    Science.gov (United States)

    Neuschulz, Eike Lena; Mueller, Thomas; Bollmann, Kurt; Gugerli, Felix; Böhning-Gaese, Katrin

    2015-01-01

    Many animals hoard seeds for later consumption and establish seed caches that are often located at sites with specific environmental characteristics. One explanation for the selection of non-random caching locations is the avoidance of pilferage by other animals. Another possible hypothesis is that animals choose locations that hamper the perishability of stored food, allowing the consumption of unspoiled food items over long time periods. We examined seed perishability and pilferage avoidance as potential drivers for caching behaviour of spotted nutcrackers (Nucifraga caryocatactes) in the Swiss Alps, where the birds are specialized on caching seeds of Swiss stone pine (Pinus cembra). We used seedling establishment as an inverse measure of seed perishability, as established seedlings can no longer be consumed by nutcrackers. We recorded the environmental conditions (i.e. canopy openness and soil moisture) of seed caching, seedling establishment and pilferage sites. Our results show that sites of seed caching and seedling establishment had opposed microenvironmental conditions. Canopy openness and soil moisture were negatively related to seed caching but positively related to seedling establishment, i.e. nutcrackers cached seeds preferentially at sites where seed perishability was low. We found no effects of environmental factors on cache pilferage, i.e. neither canopy openness nor soil moisture had significant effects on pilferage rates. We thus could not relate caching behaviour to pilferage avoidance. Our study highlights the importance of seed perishability as a mechanism for seed-caching behaviour, which should be considered in future studies. Our findings could have important implications for the regeneration of plants whose seeds are dispersed by seed-caching animals, as the potential of seedlings to establish may strongly decrease if animals cache seeds at sites that favour seed perishability rather than seedling establishment.

  16. Food availability and animal space use both determine cache density of Eurasian red squirrels.

    Directory of Open Access Journals (Sweden)

    Ke Rong

    Full Text Available Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increase the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and cache losses. We conducted systematic cache sampling investigations to estimate the effects of food availability on cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache pattern. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not a response to decreased food availability. The cache density declined with the hoarding distance. Cache density was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. The pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that 1) animal space use is an important factor that affects food hoarding distance and associated cache densities, 2) animals employ different hoarding strategies based on food availability, and 3) seed dispersal outside the original stand is stimulated in poor seed years.

  17. Experimental study of one-stage VM cryocooler operating below 8 K

    Science.gov (United States)

    Pan, Changzhao; Zhang, Tong; Zhou, Yuan; Wang, Junjie

    2015-12-01

    The Vuilleumier (VM) refrigerator, known as a heat-driven refrigerator, is a kind of closed-cycle, Stirling-type regenerative refrigerator. The VM refrigerator with power supplied by liquid nitrogen was proposed by Hogen and developed by Zhou, and shows great potential for development below 10 K. This paper describes the experimental development of a VM cryocooler operating below 8 K, achieved by using liquid nitrogen as a heat sink for the middle cavity. The regenerator was optimized by using the metallic magnetic regenerator material Er3Ni to replace part of the lead spheres, and a no-load temperature of 7.8 K was obtained. Then all the lead spheres were replaced by Er0.6Pr0.4 and a no-load temperature of 7.35 K was obtained, the lowest temperature reported so far for this kind of refrigerator. The cooling power at 10 K is about 500 mW with a pressure ratio near 1.6 and a charge pressure of 1.8 MPa. Notably, the magnetic material Er0.6Pr0.4 was found to be a potential substitute for conventional lead.

  18. A novel coupled VM-PT cryocooler operating at liquid helium temperature

    Science.gov (United States)

    Pan, Changzhao; Zhang, Tong; Zhou, Yuan; Wang, Junjie

    2016-07-01

    This paper presents experimental results on a novel two-stage, gas-coupled VM-PT cryocooler, i.e. a one-stage VM cooler coupled with a pulse tube cooler. In order to reach temperatures below the critical point of helium-4, a one-stage coaxial pulse tube cryocooler was gas-coupled to the cold end of the VM cryocooler. A low-temperature inertance tube and a room-temperature gas reservoir were used as phase shifters. The influence of a room-temperature double-inlet was investigated first, and the results showed that it added excessive heat loss. The inertance tube, the regenerator and the length of the pulse tube were then studied experimentally. In particular, the DC flow, whose function is similar to that of the double-orifice, was studied experimentally and shown to contribute about 0.2 K to the no-load temperature. A minimum no-load temperature of 4.4 K was obtained with a pressure ratio near 1.5, a working frequency of 2.2 Hz, and an average pressure of 1.73 MPa.

  19. Energy Efficient Security Preserving VM Live Migration In Data Centers For Cloud Computing

    Directory of Open Access Journals (Sweden)

    Korir Sammy

    2012-03-01

    Full Text Available Virtualization is an innovation that has been widely utilized in modern data centers for cloud computing to realize energy-efficient operation of servers. Virtual machine (VM) migration brings multiple benefits such as resource distribution and energy-aware consolidation. Server consolidation achieves energy efficiency by enabling multiple instances of operating systems to run simultaneously on a single machine. With virtualization, it is possible to consolidate servers through VM live migration. However, migration of virtual machines brings extra energy consumption and serious security concerns that derail full adoption of this technology. In this paper, we propose secure energy-aware provisioning of cloud computing resources on consolidated and virtualized platforms. Energy efficiency is achieved through a just-right dynamic Round-Robin provisioning mechanism and the ability to power down sub-systems of a host system that are not required by the VMs mapped to it. We further propose solutions to security challenges faced during VM live migration. We validate our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The experimental results show that our approach reduces energy consumption in data centers without compromising security.

  20. DOC-a file system cache to support mobile computers

    Science.gov (United States)

    Huizinga, D. M.; Heflinger, K.

    1995-09-01

    This paper identifies design requirements of system-level support for mobile computing in small form-factor battery-powered portable computers and describes their implementation in DOC (Disconnected Operation Cache). DOC is a three-level client caching system designed and implemented to allow mobile clients to transition between connected, partially disconnected and fully disconnected modes of operation with minimal user involvement. Implemented for notebook computers, DOC addresses not only typical issues of mobile elements such as resource scarcity and fluctuations in service quality but also deals with the pitfalls of MS-DOS, the operating system which prevails in the commercial notebook market. Our experiments performed in the software engineering environment of AST Research indicate not only considerable performance gains for connected and partially disconnected modes of DOC, but also the successful operation of the disconnected mode.

  1. A novel cause of chronic viral meningoencephalitis: Cache Valley virus.

    Science.gov (United States)

    Wilson, Michael R; Suan, Dan; Duggins, Andrew; Schubert, Ryan D; Khan, Lillian M; Sample, Hannah A; Zorn, Kelsey C; Rodrigues Hoffman, Aline; Blick, Anna; Shingde, Meena; DeRisi, Joseph L

    2017-07-01

    Immunodeficient patients are particularly vulnerable to neuroinvasive infections that can be challenging to diagnose. Metagenomic next generation sequencing can identify unusual or novel microbes and is therefore well suited for investigating the etiology of chronic meningoencephalitis in immunodeficient patients. We present the case of a 34-year-old man with X-linked agammaglobulinemia from Australia suffering from 3 years of meningoencephalitis that defied an etiologic diagnosis despite extensive conventional testing, including a brain biopsy. Metagenomic next generation sequencing of his cerebrospinal fluid and brain biopsy tissue was performed to identify a causative pathogen. Sequences aligning to multiple Cache Valley virus genes were identified via metagenomic next generation sequencing. Reverse transcription polymerase chain reaction and immunohistochemistry subsequently confirmed the presence of Cache Valley virus in the brain biopsy tissue. Cache Valley virus, a mosquito-borne orthobunyavirus, has only been identified in 3 immunocompetent North American patients with acute neuroinvasive disease. The reported severity ranges from a self-limiting meningitis to a rapidly fatal meningoencephalitis with multiorgan failure. The virus has never been known to cause a chronic systemic or neurologic infection in humans. Cache Valley virus has also never previously been detected on the Australian continent. Our research subject traveled to North and South Carolina and Michigan in the weeks prior to the onset of his illness. This report demonstrates that metagenomic next generation sequencing allows for unbiased pathogen identification, the early detection of emerging viruses as they spread to new locales, and the discovery of novel disease phenotypes. Ann Neurol 2017;82:105-114. © 2017 The Authors Annals of Neurology published by Wiley Periodicals, Inc. on behalf of American Neurological Association.

  2. Current desires of conspecific observers affect cache-protection strategies in California scrub-jays and Eurasian jays.

    Science.gov (United States)

    Ostojić, Ljerka; Legg, Edward W; Brecht, Katharina F; Lange, Florian; Deininger, Chantal; Mendl, Michael; Clayton, Nicola S

    2017-01-23

    Many corvid species accurately remember the locations where they have seen others cache food, allowing them to pilfer these caches efficiently once the cachers have left the scene [1]. To protect their caches, corvids employ a suite of different cache-protection strategies that limit the observers' visual or acoustic access to the cache site [2,3]. In cases where an observer's sensory access cannot be reduced it has been suggested that cachers might be able to minimise the risk of pilfering if they avoid caching food the observer is most motivated to pilfer [4]. In the wild, corvids have been reported to pilfer others' caches as soon as possible after the caching event [5], such that the cacher might benefit from adjusting its caching behaviour according to the observer's current desire. In the current study, observers pilfered according to their current desire: they preferentially pilfered food that they were not sated on. Cachers adjusted their caching behaviour accordingly: they protected their caches by selectively caching food that observers were not motivated to pilfer. The same cache-protection behaviour was found when cachers could not see on which food the observers were sated. Thus, the cachers' ability to respond to the observer's desire might have been driven by the observer's behaviour at the time of caching.

  3. Cache Performance Optimization for SoC Video Applications

    Directory of Open Access Journals (Sweden)

    Lei Li

    2014-07-01

    Full Text Available Chip Multiprocessors (CMPs) have been adopted by industry to deal with the speed limits of single processors, but memory access has become the performance bottleneck, especially in multimedia applications. In this paper, a set of management policies is proposed to improve cache performance for an SoC platform for video applications. By analyzing the behavior of the Video Engine, memory-friendly writeback and efficient prefetch policies are adopted. The experimental platform is simulated in SystemC with an ARM Cortex-A9 processor model. The experimental study shows that the proposed mechanism improves performance in contrast to a general cache without a Last Level Cache (LLC): up to an 18.87% higher hit rate, 10.62% lower MM latency and 46.43% lower CPU read latency for VENC/16way/64bytes; and up to a 52.1% higher hit rate, 11.43% lower MM latency and 47.48% lower CPU read latency for VDEC/16way/64bytes, with only 8.62% and 4.23% more bandwidth respectively.

  4. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, the throughput of User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop; it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node B (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of the relays. Each relay allocates RBs to relay UEs based on the size of the relay UE's Transport Block. We also design an ACK feedback mechanism for relay UEs to update the data in the relay cache. Simulation results show that the proposed TBS effectively improves resource utilization and achieves a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.
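The cache-driven balancing idea can be sketched as follows: the eNB steers resource blocks toward the backhaul link only while relay caches have room, so the two hops stay balanced. The proportional split rule and numbers are our own illustration, not the paper's TBS algorithm.

```python
# Toy RB split: the fuller the relay cache, the fewer RBs the backhaul
# needs, so more RBs go to the access link draining the cache.
def split_rbs(total_rbs, cache_level, cache_size):
    """Give the backhaul a share proportional to the free cache space."""
    free = max(cache_size - cache_level, 0)
    backhaul = round(total_rbs * free / cache_size)
    return backhaul, total_rbs - backhaul  # (backhaul RBs, access RBs)

print(split_rbs(100, cache_level=20, cache_size=100))  # nearly empty cache
print(split_rbs(100, cache_level=90, cache_size=100))  # nearly full cache
```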

  5. DEAM:Decoupled, Expressive, Area-Efficient Metadata Cache

    Institute of Scientific and Technical Information of China (English)

    ‘刘鹏; 方磊; 黄巍

    2014-01-01

    Chip multiprocessor presents brand new opportunities for holistic on-chip data and coherence management solutions. An intelligent protocol should be adaptive to the fine-grain accessing behavior. And in terms of storage of metadata, the size of conventional directory grows as the square of the number of processors, making it very expensive in large-scale systems. In this paper, we propose a metadata cache framework to achieve three goals: 1) reducing the latency of data access and coherence activities, 2) saving the storage of metadata, and 3) providing support for other optimization techniques. The metadata is implemented with compact structures and tracks the dynamically changing access pattern. The pattern information is used to guide the delegation and replication of decoupled data and metadata to allow fast access. We also use our metadata cache as a building block to enhance stream prefetching. Using detailed execution-driven simulation, we demonstrate that our protocol achieves an average speedup of 1.12X compared with a shared cache protocol with 1/5 of the storage of metadata.

  6. Caching Eliminates the Wireless Bottleneck in Video Aware Wireless Networks

    Directory of Open Access Journals (Sweden)

    Andreas F. Molisch

    2014-01-01

    Full Text Available Wireless video is the main driver of rapid growth in cellular data traffic. Traditional methods of increasing network capacity are very costly and do not exploit the unique features of video, especially asynchronous content reuse. In this paper we give an overview of our work that proposed and detailed a new transmission paradigm exploiting content reuse and the widespread availability of low-cost storage. Our network structure uses caching in helper stations (femtocaching) and/or devices, combined with highly spectrally efficient short-range communications, to deliver video files. For femtocaching, we develop optimal storage schemes and dynamic streaming policies that optimize video quality. For caching on devices, combined with device-to-device (D2D) communications, we show that communications within clusters of mobile stations should be used; the cluster size can be adjusted to optimize the trade-off between frequency reuse and the probability that a device finds a desired file cached by another device in the same cluster. In many situations network throughput increases linearly with the number of users, and the trade-off between throughput and outage is better than in traditional base-station-centric systems. Simulation results with realistic numbers of users and channel conditions show that network throughput can be increased by two orders of magnitude compared to conventional schemes.
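The cluster-size trade-off mentioned above can be made concrete with a back-of-envelope model: with k devices per cluster, each independently holding a requested file with probability q, the chance that some neighbour has it grows with k, while spatial frequency reuse shrinks as clusters get larger. The independence assumption and the value of q are ours, not the paper's.

```python
# Probability that at least one of the k-1 neighbours in a cluster
# caches the requested file, assuming independent caching with hit
# probability q per device (illustrative model only).
def neighbour_hit_prob(k, q):
    return 1 - (1 - q) ** (k - 1)

for k in (2, 5, 10, 20):
    print(k, round(neighbour_hit_prob(k, q=0.1), 3))
```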

  7. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web application characteristics. Thus, new prefetching policies must be loaded dynamically as needs change. Most Web caches are large C programs, and thus adding one or more prefetching policies to an existing Web cache is a daunting task. The main problem is that prefetching concerns crosscut the cache structure… µ-Dyner addresses these issues. In particular, µ-Dyner provides a low overhead for aspect invocation that meets the performance needs of Web caches.

  8. Unavoidability Routine Enrichment for Real-Time Embedded Systems by Using Cache-Locking Technique

    Directory of Open Access Journals (Sweden)

    M. Shankar; M. Sridar; M. Rajani

    2012-02-01

    Full Text Available In multitask, preemptive real-time systems, the use of cache memories makes it difficult to estimate the response time of tasks, due to the dynamic, adaptive and unpredictable behavior of cache memories. But many embedded and critical applications need the performance increase provided by cache memories. Recent studies indicate that for application-specific embedded systems, static cache locking helps determine the worst-case execution time (WCET) and the cache-related preemption delay. The determination of upper bounds on execution times, commonly called Worst-Case Execution Times (WCETs), is a necessary step in the development and validation process for hard real-time systems. This problem is hard if the underlying processor architecture has components such as caches, pipelines, branch prediction, and other speculative components. This article describes different approaches to this problem and surveys several commercially available tools and research prototypes.

  9. Utilizing Lustre file system with dCache for CMS analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Y; Kim, B; Fu, Y; Bourilkov, D; Avery, P [Department of Physics, University of Florida, Gainesville, FL 32611 (United States); Rodriguez, J L [Department of Physics, Florida International University, Miami, FL 33199 (United States)

    2010-04-01

    This paper presents storage implementations that utilize the Lustre file system for CMS analysis with direct POSIX file access while keeping dCache as the frontend for data distribution and management. We describe two implementations that integrate dCache with Lustre and how to enable user data access without going through the dCache file read protocol. Our initial CMS analysis job measurement and transfer performance results are shown and the advantages of different implementations are briefly discussed.

  10. dCache, Sync-and-Share for Big Data

    Science.gov (United States)

    Millar, AP; Fuhrmann, P.; Mkrtchyan, T.; Behrmann, G.; Bernardt, C.; Buchholz, Q.; Guelzow, V.; Litvintsev, D.; Schwank, K.; Rossi, A.; van der Reest, P.

    2015-12-01

    The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood, while the latter are mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, those two worlds have little overlap in user authentication and access protocols. While traditional storage technologies, popular in HEP, are based on X.509, cloud services and sync-and-share software technologies are generally based on username/password authentication or on mechanisms like SAML or OpenID Connect. Similarly, the data access models offered by the two are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another open source project: ownCloud. This offers the required modern access capabilities but does not support the managed-data functionality needed for large-capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easily as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access its storage via different authentication mechanisms, e.g., X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache. We will describe the design of…

  11. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of a cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory-based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA, referred to as single length cycle 2-attractor cellular automata (TACA), has been employed to detect inconsistencies in the cache line states of the processors’ private caches. The TACA module captures the coherence status of the CMPs’ cache system and memorizes any inconsistent recording of the cache line states during the processors’ references to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then to settle to an attractor state, indicating a quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs’ processor pool ensures better efficiency in determining the inconsistencies by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of the proposed coherence verification module is much less than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs’ cache system.

  12. Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Meyer, U.

    2004-01-01

    We present improved cache-oblivious data structures and algorithms for breadth-first search and the single-source shortest path problem on undirected graphs with non-negative edge weights. Our results remove the performance gap between the currently best cache-aware algorithms for these problems and their cache-oblivious counterparts. Our shortest-path algorithm relies on a new data structure, called the bucket heap, which is the first cache-oblivious priority queue to efficiently support a weak DecreaseKey operation.

  13. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client mobility make cache management a challenge. In this paper, we propose a utility-based cache replacement policy, Least Utility Value (LUV), to improve data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and the data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as its replacement policy. In ZC, the one-hop neighbors of a client form a cooperation zone, since the cost of communicating with them is low in terms of both energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV-based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
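    The utility computation described in the abstract can be made concrete (a minimal sketch; the weighting below is illustrative, not the paper's exact formula): utility rises with access probability, hop distance to the nearest copy (costlier to re-fetch) and expected validity, and falls with data size; the item with the least utility is evicted first.

```python
def luv(item):
    """Illustrative utility: grows with access probability, hop distance
    to the nearest copy and expected validity; shrinks with size."""
    return item["p_access"] * item["hops"] * item["validity"] / item["size"]

def evict(cache):
    victim = min(cache, key=luv)    # least utility value goes first
    cache.remove(victim)
    return victim

cache = [
    {"id": "a", "p_access": 0.5, "hops": 3, "validity": 0.9, "size": 10},
    {"id": "b", "p_access": 0.1, "hops": 1, "validity": 0.5, "size": 20},
    {"id": "c", "p_access": 0.4, "hops": 2, "validity": 0.8, "size": 5},
]
print(evict(cache)["id"])   # → b (rarely accessed, cheap to re-fetch, large)
```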

  14. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    Science.gov (United States)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ~8.6 mm/yr) and beneath the Ross Ice Shelf (by ~5 mm/yr). Furthermore, the published spherical harmonic coefficients, which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA), contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  15. A Cache Discovery Method Based on Selective Caching in CCN

    Institute of Scientific and Technical Information of China (English)

    冯宗明; 李俊; 吴海博; 智江

    2015-01-01

    As a promising direction of the future Internet, Content-Centric Networking (CCN) has attracted worldwide attention. In-network caching is an important feature of CCN and has a significant impact on content transmission performance, and the efficiency of cached-content discovery is closely related to caching performance. In traditional CCN cache discovery methods, requests are forwarded in the data plane and caches are hit along the path in an opportunistic manner. This approach has a certain randomness and blindness, which may prevent cached content from being used efficiently. This paper presents a method that resolves cache availability in the control plane: it combines the topology, cache capacity and the distribution of user requests to compute which contents are "worth" caching and stores them, and at the same time announces them so that they participate in routing calculation, allowing subsequent requests to find and exploit cached content quickly and accurately. The experimental results show that this method can improve the cache hit rate by about 20% and reduce the server load by about 15%.
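    The control-plane selection of "worth caching" contents could look something like the following greedy sketch (purely illustrative field names and scoring; the paper's actual computation combines topology, cache capacity and request distribution):

```python
def select_worthy(contents, capacity):
    """Greedy sketch: rank contents by expected request rate times the hops
    saved when served from this node's cache, then fill the capacity. The
    chosen names would then be announced into routing so that subsequent
    requests can find them."""
    ranked = sorted(contents, key=lambda c: c["req_rate"] * c["hops_saved"],
                    reverse=True)
    chosen, used = [], 0
    for c in ranked:
        if used + c["size"] <= capacity:
            chosen.append(c["name"])
            used += c["size"]
    return chosen

contents = [
    {"name": "news",  "req_rate": 30, "hops_saved": 4, "size": 2},
    {"name": "video", "req_rate": 50, "hops_saved": 3, "size": 8},
    {"name": "map",   "req_rate": 5,  "hops_saved": 6, "size": 1},
]
print(select_worthy(contents, capacity=9))   # → ['video', 'map']
```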

  16. DSP acceleration using cache logic FPGAs

    Science.gov (United States)

    Rosenberg, Joel

    1995-09-01

    Stand-alone digital signal processors (DSPs) support many on-chip functions and are highly optimized for the demands of high-speed computing. The problem associated with this functional optimization is that the increase in performance comes at the expense of flexibility. To make the DSP general-purpose enough for a wide variety of applications, a custom ASIC must be used to achieve the desired performance. DSPs and ASICs are not able to easily adapt on the fly to different algorithms; even DSPs that can do this don't match the high level of optimization provided by an ASIC. Recent developments in FPGA design tools enable system designers to develop in-system reconfigurable adaptive DSP hardware. Designed to exploit register-rich, dynamically reconfigurable field-programmable gate arrays, high-speed custom DSP functions can be created and implemented, resulting in significantly improved performance for compute-intensive applications, including graphics and image processing, telecommunications, networking and instrumentation.

  17. Dynamic virtual AliEn Grid sites on Nimbus with CernVM

    Science.gov (United States)

    Harutyunyan, A.; Buncic, P.; Freeman, T.; Keahey, K.

    2010-04-01

    We describe the work on enabling one-click deployment of Grid sites of the AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of the computing resources of the cloud with the resource pool of the AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker, developed at Argonne National Laboratory and the University of Chicago, and CernVM, a baseline virtual software appliance for LHC experiments developed at CERN. Two approaches to dynamic virtual AliEn Grid site deployment are presented.

  18. Dynamic virtual AliEn Grid sites on Nimbus with CernVM

    Energy Technology Data Exchange (ETDEWEB)

    Harutyunyan, A [Armenian e-Science Foundation, Yerevan (Armenia); Buncic, P [CERN, Geneva (Switzerland); Freeman, T; Keahey, K, E-mail: hartem@mail.yerphi.a, E-mail: Predrag.Buncic@cern.c, E-mail: tfreeman@mcs.anl.go, E-mail: keahey@mcs.anl.go [University of Chicago, Chicago IL (United States)

    2010-04-01

    We describe the work on enabling one-click deployment of Grid sites of the AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of the computing resources of the cloud with the resource pool of the AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker, developed at Argonne National Laboratory and the University of Chicago, and CernVM, a baseline virtual software appliance for LHC experiments developed at CERN. Two approaches to dynamic virtual AliEn Grid site deployment are presented.

  19. Physical metallurgy: Scientific school of the Academician V.M. Schastlivtsev

    Science.gov (United States)

    Tabatchikova, T. I.

    2016-04-01

    This paper honors Academician Vadim Mikhailovich Schastlivtsev, a prominent scientist in the field of metal physics and materials science. The article comprises an analysis of the topical issues of the physical metallurgy of the early 21st century and of the contribution of V.M. Schastlivtsev and his school to the science of phase and structural transformations in steels. In 2015, Vadim Mikhailovich celebrated his 80th birthday, and this paper is timed to coincide with that date. A list of his main publications is given.

  20. Replacement Policy for Caching World-Wide Web Documents Based on Site-Graph Model

    Institute of Scientific and Technical Information of China (English)

    庄伟强; 胡敏; 王鼎兴; 郑纬民; 沈美明

    2001-01-01

    The hit rate, a major metric for evaluating proxy caches, is mostly limited by the replacement strategy of the proxy cache. However, in traditional proxy caches the hit rate does not usually predict how well a proxy cache will perform, because the proxy cache counts any hit in its caching space, which holds many pages without useful information, so its replacement strategy fails to determine which pages to keep and which to release. Proxy cache efficiency can be measured more accurately using the valid hit rate introduced in this paper. An efficient replacement strategy based on the Site-Graph model for WWW (World-Wide Web) documents is also discussed in this paper. The model analyzes user access behavior as a basis for the replacement strategy. Simulation results demonstrate that the replacement strategy improves proxy cache efficiency.
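    The distinction between the raw hit rate and the proposed valid hit rate can be sketched in a few lines (illustrative; "useful" pages are simply given as a set here):

```python
def hit_rates(requests, cached, useful):
    """Raw hit rate vs. 'valid hit rate': the latter counts only hits on
    pages that actually carry useful information."""
    hits = [r for r in requests if r in cached]
    valid = [r for r in hits if r in useful]
    return len(hits) / len(requests), len(valid) / len(requests)

# p2 is cached but carries no useful information, so it inflates only
# the raw hit rate
print(hit_rates(["p1", "p2", "p3", "p1"], {"p1", "p2"}, {"p1"}))  # → (0.75, 0.5)
```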

  1. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.

    Science.gov (United States)

    Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu

    2015-01-01

    Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and the MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it noticeably enhances the stability of the data distribution.

  2. CernVM WebAPI - Controlling Virtual Machines from the Web

    Science.gov (United States)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance on the user's computer, while at the same time offloading the user from all the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we overview this new technology, discuss its security features and examine some test cases where it is already in use.

  3. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.

    Directory of Open Access Journals (Sweden)

    Weidong Gu

    Full Text Available Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and the MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it noticeably enhances the stability of the data distribution.

  4. Workload-Aware and CPU Frequency Scaling for Optimal Energy Consumption in VM Allocation

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2014-01-01

    Full Text Available In the problem of VM consolidation for cloud energy saving, different workloads ask for different resources, so a VM placement solution that considers workload characteristics will be more reasonable. In the real world, a workload runs at varying CPU utilization during its lifetime, according to its task characteristics. That means energy consumption is related to both CPU utilization and CPU frequency, so using a model of CPU frequency alone to evaluate energy consumption is insufficient. This paper theoretically verifies that there is a CPU frequency best suited to a certain CPU utilization in order to obtain the minimum energy consumption. Based on this deduction, we put forward a heuristic CPU frequency scaling algorithm, VP-FS (virtual machine placement with frequency scaling). To carry out the experiments, we realized three typical greedy algorithms for VM placement and simulated three groups of VM tasks. Our efforts show that different workloads affect VM allocation results: each group of workloads has its most suitable algorithm when considering the minimum number of physical machines used. And thanks to CPU frequency scaling, VP-FS has the best results on total energy consumption compared with the other three algorithms under any of the three groups of workloads.
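    The claim that each utilization level has an energy-optimal frequency can be reproduced with a toy power model (illustrative constants, not the paper's): take static power plus dynamic power growing cubically with frequency, and minimize energy per unit of completed work.

```python
# Illustrative power model (made-up constants): static power plus dynamic
# power growing cubically with frequency and linearly with utilization.
P_STATIC, C_DYN = 10.0, 2.0

def energy_per_unit_work(f, u):
    power = P_STATIC + C_DYN * u * f ** 3   # watts
    work_rate = u * f                       # work completed per second
    return power / work_rate

def best_frequency(u):
    grid = [0.1 * k for k in range(1, 51)]  # candidate frequencies 0.1..5.0
    return min(grid, key=lambda f: energy_per_unit_work(f, u))

# Higher utilization shifts the energy-optimal frequency downward
print(round(best_frequency(0.5), 1), round(best_frequency(1.0), 1))  # → 1.7 1.4
```

    Under this model the optimum satisfies f³ = P_STATIC / (2 · C_DYN · u), so the best frequency indeed depends on the utilization, which is the intuition behind VP-FS.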

  5. AWRP: Adaptive Weight Ranking Policy for Improving Cache Performance

    CERN Document Server

    Swain, Debabala; Swain, Debabrata

    2011-01-01

    Due to the huge difference in performance between computer memory and the processor, virtual memory management plays a vital role in system performance. A cache memory is a fast memory used to compensate for the speed difference between memory and processor. This paper presents an adaptive replacement policy that improves on traditional policies: it has low overhead, better performance and is easy to implement. Simulations show that our algorithm performs better than Least Recently Used (LRU), First-In-First-Out (FIFO) and Clock with Adaptive Replacement (CAR).
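    As a baseline for comparing replacement policies like those mentioned above, the classic LRU and FIFO policies can be simulated in a few lines (a self-contained sketch, not the AWRP algorithm itself):

```python
from collections import OrderedDict, deque

def simulate(trace, capacity, policy):
    """Count hits for a page reference trace under LRU or FIFO replacement."""
    hits = 0
    if policy == "lru":
        cache = OrderedDict()
        for page in trace:
            if page in cache:
                hits += 1
                cache.move_to_end(page)          # mark as most recently used
            else:
                if len(cache) >= capacity:
                    cache.popitem(last=False)    # evict least recently used
                cache[page] = True
    else:  # fifo
        cache, order = set(), deque()
        for page in trace:
            if page in cache:
                hits += 1
            else:
                if len(cache) >= capacity:
                    cache.discard(order.popleft())  # evict oldest arrival
                cache.add(page)
                order.append(page)
    return hits

trace = [1, 2, 3, 1, 4, 1, 5, 1, 2, 1]
print(simulate(trace, 3, "lru"), simulate(trace, 3, "fifo"))  # → 4 3
```

    On this trace, LRU keeps the frequently re-referenced page 1 resident and scores one more hit than FIFO; adaptive policies such as AWRP aim to beat both across a wider range of traces.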

  6. Cache-Integrated Network Interfaces: Flexible On-Chip Communication and Synchronization for Large-Scale CMPs

    OpenAIRE

    Kavadias, Stamatis; KATEVENIS, Manolis; Zampetakis, Michail; Nikolopoulos, Dimitrios S.

    2012-01-01

    Per-core scratchpad memories (or local stores) allow direct inter-core communication, with latency and energy advantages over coherent cache-based communication, especially as CMP architectures become more distributed. We have designed cache-integrated network interfaces, appropriate for scalable multicores, that combine the best of two worlds – the flexibility of caches and the efficiency of scratchpad memories: on-chip SRAM is configurably shared among caching, scratchpad, and virtualized n...

  7. A Computationally Efficient P-LRU based Optimal Cache Heap Object Replacement Policy

    Directory of Open Access Journals (Sweden)

    Burhan Ul Islam Khan

    2017-01-01

    Full Text Available The recent advancement in the field of distributed computing shows a need for developing highly associative and less expensive cache memories for state-of-the-art processors, e.g., Intel Core i6, i7, etc. Hence, various conventional studies introduced cache replacement policies, which are one of the prominent key factors determining the effectiveness of a cache memory. Most conventional cache replacement algorithms are found to be inefficient in terms of memory management and complexity. Therefore, a significant and thorough analysis is required to suggest a new optimal solution for the state-of-the-art cache replacement issues. The proposed study aims to conceptualize a theoretical model for optimal cache heap object replacement. The proposed model incorporates a tree-based and MRU (Most Recently Used) pseudo-LRU (Least Recently Used) mechanism and configures it with the JVM’s garbage collector to replace old referenced objects from the heap cache lines. The performance analysis of the proposed system illustrates that it outperforms the conventional state-of-the-art replacement policies with much lower cost and complexity. It also shows that the percentage of hits on the cache heap is relatively higher than with the conventional technologies.
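    Tree-based pseudo-LRU, which the model above builds on, can be shown concretely for a single 4-way set (a generic sketch of the standard technique, not the paper's model): three bits form a binary tree, each pointing toward the pseudo-least-recently-used half, and every access flips the bits on its path to point away from itself.

```python
class TreePLRU4:
    """Tree-based pseudo-LRU for one 4-way set (generic sketch).
    bits[0] is the root; bits[1]/bits[2] cover ways 0-1 and 2-3.
    A 0 bit means the pseudo-LRU victim lies in the left subtree."""
    def __init__(self):
        self.bits = [0, 0, 0]
        self.ways = [None] * 4               # cached tags

    def _touch(self, way):
        # Point every bit on the access path away from the touched way
        if way < 2:
            self.bits[0], self.bits[1] = 1, 1 - way
        else:
            self.bits[0], self.bits[2] = 0, 3 - way

    def _victim(self):
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def access(self, tag):
        """Return True on hit; on a miss, fill a free way or evict the victim."""
        if tag in self.ways:
            self._touch(self.ways.index(tag))
            return True
        way = self.ways.index(None) if None in self.ways else self._victim()
        self.ways[way] = tag
        self._touch(way)
        return False

s = TreePLRU4()
for tag in "ABCD":
    s.access(tag)
s.access("A")        # re-touch A so it is not the victim
s.access("E")        # miss: the tree bits select C for eviction
print(s.ways)        # → ['A', 'B', 'E', 'D']
```

    Note that true LRU would have evicted B here; pseudo-LRU approximates recency with only three bits per set, which is what makes it cheap enough for hardware and, as in the paper, for software heap caches.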

  8. An Efficient Schema for Cloud Systems Based on SSD Cache Technology

    Directory of Open Access Journals (Sweden)

    Jinjiang Liu

    2013-01-01

    Full Text Available Traditional caching strategies are mainly based on a memory cache, with read-write speed as the ultimate goal. With the emergence of SSDs, however, the design ideas of traditional caches are no longer directly applicable: the read-write characteristics and the limited number of erase cycles of an SSD should be taken into account as far as possible when designing a caching strategy. In this paper, a flexible and adaptive cache strategy based on SSDs, called FAC, is proposed, which gives full consideration to the characteristics of the SSD itself, combines traditional caching strategy design ideas, and maximizes the role the SSD can play. The core mechanism is based on dynamic adjustment capabilities driven by access patterns and an efficient selection algorithm for hot data. We have developed a dynamic hot-data section adjustment algorithm, DASH for short, to adjust the read-write area capacity to suit the current usage scenario dynamically. The experimental results show that both the read and the write performance of the SSD-based caching strategy improve considerably, especially the read performance. Compared with a traditional caching strategy, the technique can be used in engineering to reduce the number of writes to the SSD and prolong its service life without lowering read-write performance.

  9. 77 FR 53169 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Science.gov (United States)

    2012-08-31

    ... Forest Service Uinta-Wasatch-Cache National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Uinta-Wasatch-Cache National Forest Resource Advisory... provide advice and recommendations to the Forest Service concerning projects and funding consistent with...

  10. Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Changwan; Bao, Wenlei; Cohen, Albert; Krishnamoorthy, Sriram; Pouchet, Louis-noel; Rastello, Fabrice; Ramanujam, J.; Sadayappan, Ponnuswamy

    2016-06-02

    Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than the cache capacity. Array padding (increasing the size of array dimensions) is a well-known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
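    The conflict-miss mechanism that padding removes can be demonstrated with toy cache parameters (illustrative numbers, not PAdvisor's algorithm): walking down one column of a row-major array whose row stride is a large power of two lands every access in a handful of sets, while a small pad spreads the accesses out.

```python
LINE_BYTES, SETS = 64, 256      # toy cache: 64 B lines, 256 sets (64 KiB, 4-way)
ELEM = 8                        # bytes per float64 element

def sets_touched(row_elems, n_rows=16):
    """Distinct cache sets hit while walking one column of a row-major array."""
    return len({(r * row_elems * ELEM // LINE_BYTES) % SETS
                for r in range(n_rows)})

print(sets_touched(512))   # power-of-two row: 16 accesses pile into 4 sets
print(sets_touched(520))   # row padded by 8 elements: 16 distinct sets
```

    With a 512-element row, 16 column accesses hit only 4 sets of a 4-way cache, so conflict misses start as soon as the column grows, even though the data is tiny; padding the row to 520 elements makes the stride coprime-ish with the set count and spreads all 16 accesses across distinct sets.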

  11. An SPN-Based Integrated Model for Web Prefetching and Caching

    Institute of Scientific and Technical Information of China (English)

    Lei Shi; Ying-Jie Han; Xiao-Guang Ding; Lin Wei; Zhi-Min Gu

    2006-01-01

    The World Wide Web has become the primary means for information dissemination. Due to the limited resources of network bandwidth, users often suffer long waits. Web prefetching and web caching are the primary approaches to reducing the user-perceived access latency and improving the quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented and a performance evaluation of IWPCM is made. The performance metrics of access latency, throughput, HR (hit ratio) and BHR (byte hit ratio) are analyzed and discussed. Simulations show that, compared with a caching-only model (CM), IWPCM can further improve the throughput, HR and BHR efficiently and reduce the access latency. The performance evaluation based on the SPN model can provide a basis for the implementation of web prefetching and caching, and the combination of the two holds the promise of improving the QoS of web systems.
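    The HR and BHR metrics used above differ only in how hits are weighted; a minimal sketch (with made-up object sizes) makes the distinction explicit:

```python
def hr_bhr(requests, sizes, cached):
    """HR counts hits per request; BHR weights each hit by object size,
    so one large-object hit can outweigh many small ones."""
    hits = [r for r in requests if r in cached]
    hr = len(hits) / len(requests)
    bhr = sum(sizes[r] for r in hits) / sum(sizes[r] for r in requests)
    return hr, bhr

sizes = {"a": 1, "b": 100, "c": 10}            # object sizes (illustrative)
print(hr_bhr(["a", "a", "b", "c"], sizes, cached={"a", "c"}))
```

    Here three of four requests hit (HR = 0.75), but the one miss is the large object b, so the BHR is only about 0.11; this is why prefetching and caching studies report both ratios.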

  12. A Cache Considering Role-Based Access Control and Trust in Privilege Management Infrastructure

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shaomin; WANG Baoyi; ZHOU Lihua

    2006-01-01

    PMI (privilege management infrastructure) is used to perform access control to resources in an e-commerce or e-government system. With the ever-increasing need for secure transactions, the need for systems that offer a wide variety of QoS (quality-of-service) features is also growing. In order to improve the QoS of a PMI system, a cache based on RBAC (Role-Based Access Control) and trust is proposed. Our system is realized based on Web services. How to design the cache based on RBAC and trust in the access control model is described in detail, the algorithms to query role permissions in the cache and to add records to the cache are dealt with, and the policy to update the cache is also introduced.
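    A role-permission cache of the kind described could be sketched as follows (a hypothetical class and policy, purely illustrative: entries expire after a TTL scaled by the role's trust level):

```python
class RolePermissionCache:
    """Hypothetical role->permission cache: a query hit avoids a round-trip
    to the PMI attribute authority, and an entry expires after a TTL scaled
    by the role's trust level (names and policy are illustrative)."""
    def __init__(self, base_ttl=60.0):
        self.base_ttl = base_ttl
        self.entries = {}                     # role -> (permissions, expiry)

    def add(self, role, permissions, trust, now):
        self.entries[role] = (set(permissions), now + self.base_ttl * trust)

    def query(self, role, now):
        entry = self.entries.get(role)
        if entry is None or now > entry[1]:
            self.entries.pop(role, None)      # drop any stale entry
            return None                       # miss: caller consults the PMI
        return entry[0]

cache = RolePermissionCache()
cache.add("manager", ["read", "approve"], trust=1.0, now=0.0)
print(cache.query("manager", now=30.0))       # within TTL: served from cache
print(cache.query("manager", now=120.0))      # expired: None, re-query the PMI
```

    Scaling the TTL by trust reflects one plausible update policy: permissions granted to highly trusted roles can be cached longer before re-validation.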

  13. Dispersal by rodent caching increases seed survival in multiple ways in canopy-fire ecosystems.

    Science.gov (United States)

    Peterson, N B; Parker, V T

    2016-07-01

    Seed-caching rodents have long been seen as important actors in dispersal ecology. Here, we focus on their interactions with plants in a fire-disturbance community, specifically Arctostaphylos species (Ericaceae) in California chaparral. Although mutualistic relationships between caching rodents and plants are well studied, little is known about how this type of relationship functions in a disturbance-driven system, and more specifically in systems shaped by fire disturbance. By burying seeds in the soil, rodents inadvertently improve the probability of seeds surviving the high temperatures produced by fire. We test two aspects of vertical dispersal, the depth of seeds and the presence of multiple seeds in caches, as two important dimensions of rodent caching behavior. We used a laboratory experimental approach to test seed survival under different heating conditions and seed bank structures. Creating a synthetic soil seed bank and synthetic fire/heating in the laboratory allowed us to control surface heating, the depth of seeds in the soil, and seed cache size. We compared the viability of Arctostaphylos viscida seeds from different treatment groups determined by these factors and found that, as expected, seeds slightly deeper in the soil had substantially increased chances of survival during a heating event. A key result was that some seeds within a cache in shallow soil could survive fire even at a depth with a killing heat pulse, in contrast to isolated seeds; temperature measurements indicated lower temperatures immediately below caches compared to the same depth in adjacent soil. These results suggest seed caching by rodents increases seed survival during fire events in two ways: caches disrupt heat flow, or caches are buried below the heat-pulse kill zone. The context of natural disturbance drives the significance of this mutualism and further expands the theory of mutualisms into the domain of disturbance-driven systems.

  14. On Use of the Variable Zagreb vM2 Index in QSPR: Boiling Points of Benzenoid Hydrocarbons

    Directory of Open Access Journals (Sweden)

    Albin Jurić

    2004-12-01

Full Text Available The variable Zagreb vM2 index is introduced and applied to the structure-boiling point modeling of benzenoid hydrocarbons. The linear model obtained (the standard error of estimate for the fit model Sfit = 6.8 °C) is much better than the corresponding model based on the original Zagreb M2 index (Sfit = 16.4 °C). Surprisingly, the model based on the variable vertex-connectivity index (Sfit = 6.8 °C) is comparable to the model based on the vM2 index. A comparative study with models based on the vertex-connectivity index, edge-connectivity index and several distance indices favours models based on the variable Zagreb vM2 index and the variable vertex-connectivity index. However, multivariate regression with two, three and four descriptors gives improved models, the best being the four-descriptor model (the vM2 index is not among its descriptors) with Sfit = 5.0 °C, though the four-descriptor model containing the vM2 index is only slightly inferior (Sfit = 5.3 °C).
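
The second Zagreb index underlying this record sums a function of the endpoint degrees over all edges of the molecular (hydrogen-suppressed) graph. A minimal sketch, assuming the exponent form of the variable index (one published definition; the function name and benzene example are illustrative, not from the paper):

```python
from collections import Counter

def zagreb_m2(edges, lam=1.0):
    """Variable second Zagreb index: sum over edges of (d_u * d_v)**lam.
    lam = 1 recovers the ordinary Zagreb M2 index.
    (The exponent form is an assumed definition for illustration.)"""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum((deg[u] * deg[v]) ** lam for u, v in edges)

# Benzene ring as a 6-cycle: every carbon vertex has degree 2
benzene = [(i, (i + 1) % 6) for i in range(6)]
print(zagreb_m2(benzene))           # ordinary M2: 6 edges * (2*2) = 24.0
print(zagreb_m2(benzene, lam=0.5))  # variable form with lambda = 0.5
```

In a QSPR study such as this one, the index value for each hydrocarbon would then serve as a descriptor in a linear regression against boiling points, with lam tuned to minimize the standard error of estimate.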

  15. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available: a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (the area in which a location-dependent data item is available) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to a real traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
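
The prefetching idea described above can be sketched as a simple membership test: cache a broadcast item only if its geographic scope intersects the host's planned route. A hedged sketch with illustrative names and geometry, not the paper's actual data structures:

```python
# A broadcast item carries a geographic "scope"; the host carries a
# "mobility specification" (its planned route). The item is worth
# prefetching only if the route will enter the scope.

def intersects(scope, route):
    """scope: (x, y, radius); route: list of (x, y) waypoints."""
    x, y, r = scope
    return any((px - x) ** 2 + (py - y) ** 2 <= r ** 2 for px, py in route)

def should_prefetch(item_scope, mobility_spec):
    return intersects(item_scope, mobility_spec)

route = [(0, 0), (5, 0), (10, 0)]          # host drives east
print(should_prefetch((5, 1, 2), route))   # True: scope covers a waypoint
print(should_prefetch((5, 9, 2), route))   # False: scope far from route
```

A real implementation would also use the mobility specification for replacement, evicting items whose scopes the host has already left.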

  16. Cache-Oblivious Planar Orthogonal Range Searching and Counting

    DEFF Research Database (Denmark)

    Arge, Lars; Brodal, Gerth Stølting; Fagerberg, Rolf;

    2005-01-01

We present the first cache-oblivious data structure for planar orthogonal range counting, and improve on previous results for cache-oblivious planar orthogonal range searching. Our range counting structure uses O(N log2 N) space and answers queries using O(logB N) memory transfers, where B is the block size of any memory level in a multilevel memory hierarchy. Using bit manipulation techniques, the space can be further reduced to O(N). The structure can also be modified to support more general semigroup range sum queries in O(logB N) memory transfers, using O(N log2 N) space for three-sided queries and O(N log2^2 N / log2 log2 N) space for four-sided queries. Based on the O(N log N) space range counting structure, we develop a data structure that uses O(N log2 N) space and answers three-sided range queries in O(logB N + T/B) memory transfers, where T is the number of reported points.

  17. Improved Space Bounds for Cache-Oblivious Range Reporting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Zeh, Norbert

    2011-01-01

We provide improved bounds on the size of cache-oblivious range reporting data structures that achieve the optimal query bound of O(logB N + K/B) block transfers. Our first main result is an O(N √(log N log log N))-space data structure that achieves this query bound for 3-d dominance reporting and 2-d three-sided range reporting. No cache-oblivious o(N log N / log log N)-space data structure for these problems was known before, even when allowing a query bound of O(log2^O(1) N + K/B) block transfers. Our result also implies improved space bounds for general 2-d and 3-d orthogonal range reporting. Our second main result shows that any cache-oblivious 2-d three-sided range reporting data structure with the optimal query bound has to use Ω(N log^ε N) space, thereby improving on a recent lower bound for the same problem. Using known transformations, the lower bound extends to 3-d dominance reporting and 3…

  18. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available: a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (the area in which a location-dependent data item is available) and “mobility specification” (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to a real traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  19. Evidence against observational spatial memory for cache locations of conspecifics in marsh tits Poecile palustris.

    Science.gov (United States)

    Urhan, A Utku; Emilsson, Ellen; Brodin, Anders

    2017-01-01

Many species in the family Paridae, such as marsh tits Poecile palustris, are large-scale scatter hoarders of food that make cryptic caches and disperse these in large year-round territories. The perhaps most well-known species in the family, the great tit Parus major, does not store food itself but is skilled in stealing caches from the other species. We have previously demonstrated that great tits are able to memorise positions of caches they have observed marsh tits make and later return and steal the food. As great tits are explorative in nature and unusually good learners, it is possible that such "memorisation of caches from a distance" is a unique ability of theirs. The other possibility is that this ability is general in the parid family. Here, we tested marsh tits in the same experimental set-up in which we previously tested great tits. We allowed caged marsh tits to observe a caching conspecific in a specially designed indoor arena. After a retention interval of 1 or 24 h, we allowed the observer to enter the arena and search for the caches. The marsh tits showed no evidence of such observational memorisation ability, and we believe that such ability is more useful for a non-hoarding species. Why should a marsh tit that memorises hundreds of its own caches in the field bother with the difficult task of memorising other individuals' caches? We argue that the close-up memorisation procedure that marsh tits use at their own caches may be a different type of observational learning than memorisation of caches made by others. For example, the latter must be done from a distance and hence may require the ability to adopt an allocentric perspective, i.e. the ability to visualise the cache from the hoarder's perspective. Members of the Paridae family are known to possess foraging techniques that are cognitively advanced. Previously, we have demonstrated that a non-hoarding parid species, the great tit P. major, is able to memorise positions of caches that

  20. CACHE Design in Network Storage Array

    Institute of Scientific and Technical Information of China (English)

    田新宇; 马永强; 王伟

    2011-01-01

CACHE is a high-speed buffer memory that connects the CPU and main memory and is used to improve a system's read and write performance. The CACHE referred to in this article borrows the term rather than being a true hardware CACHE: main memory is used to emulate a CACHE to implement high-speed data buffering.
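
The idea of emulating a CACHE with main memory can be sketched as a small LRU buffer placed in front of a slow backing store. Everything below (the class name, the write-through policy, the dict standing in for the disk array) is an illustrative assumption, not the paper's actual design:

```python
from collections import OrderedDict

class RamCache:
    """Main memory emulating a storage-array cache with LRU eviction."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store        # dict standing in for slow disks
        self.buf = OrderedDict()          # RAM buffer in LRU order

    def read(self, block):
        if block in self.buf:
            self.buf.move_to_end(block)   # hit: refresh recency
            return self.buf[block]
        data = self.store[block]          # miss: fetch from the array
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.store[block] = data          # write-through for simplicity
        self._insert(block, data)

    def _insert(self, block, data):
        self.buf[block] = data
        self.buf.move_to_end(block)
        if len(self.buf) > self.capacity:
            self.buf.popitem(last=False)  # evict least recently used

store = {"a": 1, "b": 2, "c": 3}
cache = RamCache(capacity=2, backing_store=store)
cache.read("a"); cache.write("b", 20); cache.read("c")
print("a" in cache.buf)  # False: evicted once capacity was exceeded
```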

  1. An Examination of the VM-1448, a Famous Foreign Power Amplifier

    Institute of Scientific and Technical Information of China (English)

    徐柏华

    2007-01-01

The VM-1448 multi-function power amplifier, produced by the Voice of Music Corporation of the United States, provides high-fidelity amplification for signals from Phono, Tuner, Tape and AUX music sources. The amplifier uses 6AQ5 tubes in a two-channel power stage with an output power of 10 W × 2. A recent examination by a Japanese audio expert showed that the amplifier offers excellent performance, complete features, high fidelity and a pleasing sound, making it a high-quality amplifier intended for indoor music playback.

  2. Client-Driven Joint Cache Management and Rate Adaptation for Dynamic Adaptive Streaming over HTTP

    Directory of Open Access Journals (Sweden)

    Chenghao Liu

    2013-01-01

    Full Text Available Due to the fact that proxy-driven proxy cache management and the client-driven streaming solution of Dynamic Adaptive Streaming over HTTP (DASH are two independent processes, some difficulties and challenges arise in media data management at the proxy cache and rate adaptation at the DASH client. This paper presents a novel client-driven joint proxy cache management and DASH rate adaptation method, named CLICRA, which moves prefetching intelligence from the proxy cache to the client. Based on the philosophy of CLICRA, this paper proposes a rate adaptation algorithm, which selects bitrates for the next media segments to be requested by using the predicted buffered media time in the client. CLICRA is realized by conveying information on the segments that are likely to be fetched subsequently to the proxy cache so that it can use the information for prefetching. Simulation results show that the proposed method outperforms the conventional segment-fetch-time-based rate adaptation and the proxy-driven proxy cache management significantly not only in streaming quality at the client but also in bandwidth and storage usage in proxy caches.
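
The buffer-based selection idea in the record can be sketched as choosing the highest bitrate whose predicted post-download buffer stays above a safety threshold. The parameters and formula below are illustrative assumptions, not CLICRA's actual algorithm:

```python
# Pick the highest bitrate for the next segment such that the predicted
# buffered media time after the download stays above min_buffer_s.

def select_bitrate(bitrates, bandwidth, buffered_s, seg_dur_s, min_buffer_s=4.0):
    """bitrates and bandwidth in bits per second; times in seconds."""
    for rate in sorted(bitrates, reverse=True):
        download_s = rate * seg_dur_s / bandwidth
        # Predicted buffer after fetching one segment at this rate:
        predicted = buffered_s - download_s + seg_dur_s
        if predicted >= min_buffer_s:
            return rate
    return min(bitrates)  # fall back to the lowest bitrate

rates = [500_000, 1_000_000, 2_000_000]
print(select_bitrate(rates, bandwidth=1_500_000, buffered_s=6.0, seg_dur_s=2.0))
```

In CLICRA's setting, the client would additionally convey the segments it is likely to fetch next to the proxy cache, so the proxy can prefetch them before the requests arrive.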

  3. A SEU-protected cache memory-based on variable associativity of sets

    Energy Technology Data Exchange (ETDEWEB)

    Zarandi, Hamid Reza [Department of Computer Engineering, Sharif University of Technology, P.O. Box 11365-9517, Tehran (Iran, Islamic Republic of)]. E-mail: zarandi@ce.sharif.edu; Miremadi, Seyed Ghassem [Department of Computer Engineering, Sharif University of Technology, P.O. Box 11365-9517, Tehran (Iran, Islamic Republic of)]. E-mail: miremadi@sharif.edu

    2007-11-15

SRAM cache memories suffer from single event upset (SEU) faults induced by energetic particles such as neutrons and alpha particles. To protect these caches, designers often use error detection and correction codes, which typically provide single-bit error detection and even correction. However, these codes have low error detection capability or incur significant performance penalties. In this paper, a protected cache scheme based on variable associativity of sets is presented. In this scheme, the cache space is divided into sets of different sizes with variable tag field lengths. The remaining bits of the tags are used to protect the tag with a new protection code. This protects the cache without compromising performance or area with respect to a comparable fully associative cache. The scheme provides high SEU detection coverage as well as high performance. Moreover, reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results obtained from fault injection experiments and several trace files from SPEC2000 reveal that the proposed scheme exhibits performance close to that of a fully associative cache while detecting a high percentage of SEU faults.

  4. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical location, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of GPS in smartphones, people can go directly into nature, search for information on their smartphones, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about physical geography highlights, are a very good opportunity: interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, we focused on Bavaria and a certain type of earth cache. At the end, the authors show the limits and potentials of the use of earth caches and give some remarks for the future.

  5. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait to group more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference from the ideal page-mapped FTL is less than 15% in most cases, with a mean of 4% for the best CACH-FTL configurations, while using at least 78% less RAM for mapping table storage.
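
The threshold-based placement rule described above can be sketched in a few lines: groups of pages flushed together go to the block-mapped region when large, and are buffered in the page-mapped region when small. The region names follow the abstract; the code structure itself is an illustrative assumption:

```python
# CACH-FTL-style placement sketch: BMR = block-mapped region,
# PMR = page-mapped region, threshold = configurable group-size limit.

def place_group(pages, threshold, bmr, pmr):
    """pages: list of page ids flushed together by the cache above the FTL."""
    if len(pages) >= threshold:
        bmr.append(list(pages))   # cheap block mapping for large groups
    else:
        pmr.extend(pages)         # page mapping until more pages accumulate

bmr, pmr = [], []
place_group([1, 2, 3, 4], threshold=3, bmr=bmr, pmr=pmr)
place_group([7], threshold=3, bmr=bmr, pmr=pmr)
print(bmr, pmr)  # [[1, 2, 3, 4]] [7]
```

Tuning the threshold trades mapping-table size (block mapping is compact) against write cost (page mapping avoids read-modify-write of whole blocks), which is the configurability the paper emphasizes.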

  6. A morphometric assessment of the intended function of cached Clovis points.

    Science.gov (United States)

    Buchanan, Briggs; Kilby, J David; Huckell, Bruce B; O'Brien, Michael J; Collard, Mark

    2012-01-01

    A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.

  7. A morphometric assessment of the intended function of cached Clovis points.

    Directory of Open Access Journals (Sweden)

    Briggs Buchanan

Full Text Available A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: (1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and (2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.

  8. Flexible Data Dissemination Strategy For Effective Cache Consistency In Mobile Wireless Communication Networks

    Directory of Open Access Journals (Sweden)

    Kahkashan Tabassum

    2012-06-01

Full Text Available In mobile wireless communication networks, caching data items at the mobile clients is important to reduce data access delay. However, efficient cache invalidation strategies are needed to ensure consistency between the data in the cache of mobile clients and at the database server. Servers use invalidation reports (IRs) to inform the mobile clients about data item updates. This paper proposes and implements a multicast-based strategy to maintain cache consistency in a mobile environment using AVI as the cache invalidation scheme. The proposed algorithm is outlined as follows: to resolve a query, the mobile client searches its cache to check whether its data is valid. If yes, the query is answered; otherwise the client queries the DTA (Dynamic Transmitting Agent) for the latest updates and the query is answered. If the DTA does not have the latest updates, it gets them from the server. The main idea here is that the DTA multicasts updates to the clients, so the clients need not uplink to the server individually, thus preserving network bandwidth. The simulation scenario is developed in Java. The results demonstrate that the traffic generated in the proposed multicast model is reduced while cache consistency is retained, compared to existing methods that use a broadcast strategy.
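
The query-resolution flow outlined above can be sketched as a three-level lookup: the local cache (checked against a validity interval), then the multicasting agent, then the server. The names mirror the abstract, but the structure, and the treatment of AVI as a simple time-to-live, are illustrative assumptions:

```python
import time

def resolve(query, cache, dta, server, avi=30.0):
    """Resolve a query via local cache -> DTA -> server, caching on the way.
    cache: {query: (value, cached_at)}; avi: validity interval in seconds."""
    entry = cache.get(query)
    if entry is not None:
        value, cached_at = entry
        if time.time() - cached_at < avi:   # still within its AVI: valid
            return value
    value = dta.get(query)                  # DTA multicasts recent updates
    if value is None:
        value = server[query]               # last resort: uplink to server
        dta[query] = value
    cache[query] = (value, time.time())
    return value

cache, dta, server = {}, {}, {"q1": "result"}
print(resolve("q1", cache, dta, server))  # fetched via the server
print(resolve("q1", cache, dta, server))  # answered from the local cache
```

Because the DTA retains updates it has fetched, subsequent clients asking for the same item are served without individual uplinks, which is the bandwidth saving the paper reports.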

  9. Caching in the presence of competitors: Are Cape ground squirrels (Xerus inauris) sensitive to audience attentiveness?

    Science.gov (United States)

    Samson, Jamie; Manser, Marta B

    2016-01-01

When social animals cache food close to their burrow, the potential for an audience member to observe the event is significantly increased. As a consequence, in order to reduce theft, it may be advantageous for animals to be sensitive to certain audience cues, such as whether the audience is attentive to the cache event. In this study, observations were made on three groups of Cape ground squirrels (Xerus inauris) in their natural habitat as they cached provisioned food items. When individuals cached within 10 m of conspecifics, we recorded the attentiveness (i.e. whether any audience members were orientated towards the cacher, had a direct line of sight and were not engaged in other activities) and identity of audience members. Overall, there was a preference to cache when audience members were inattentive rather than attentive. Additionally, we found rank effects related to cache avoidance, whereby high-ranked individuals showed less avoidance of caching when audience members were attentive compared to medium- and low-ranked individuals. We suggest this audience sensitivity may have evolved in response to differences in competitive ability amongst the ranks in how successful individuals are at winning foraging competitions. This study demonstrates that Cape ground squirrels have the ability not only to monitor the presence or absence of conspecifics but also to discriminate individuals on the basis of their attentive state.

  10. An Effect of Route Caching Scheme in DSR for Vehicular Adhoc Networks

    Directory of Open Access Journals (Sweden)

    Poonam kori

    2012-01-01

Full Text Available Routing is one of the most significant challenges in Vehicular ad hoc networks and is critical for basic network operations. Nodes (vehicles) in a Vehicular ad hoc network are allowed to move in an uncontrolled manner. Such node mobility results in a highly dynamic network with rapid topological changes. Caching routing information can significantly improve the efficiency of the routing mechanism in a wireless ad hoc network by reducing access latency and bandwidth usage. Our work presents an analysis of the effects of route caching in on-demand routing protocols in Vehicular ad hoc networks. Our analysis is based on the Dynamic Source Routing protocol (DSR), which operates entirely on demand. Using detailed simulations of Vehicular ad hoc networks, we studied a caching algorithm that treats cache size as a design choice, and simulated each cache primarily over different movement scenarios drawn from various mobility models. We also evaluated a set of mobility metrics that allow accurate characterization of the relative difficulty that a given movement scenario presents to a Vehicular ad hoc network routing protocol, and we analyze each mobility metric's ability to predict the actual difficulty, in terms of routing overhead and packet delivery ratio experienced by the routing protocol, across the highway and city traffic scenarios in our study. Finally, we show that caching the routing data is beneficial.

  11. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As the segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace the unused segments under the interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield higher hit ratio than previous work under various environmental parameters.
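
The two-tier structure described above can be sketched as two maps with a movable boundary: tier 1 holds to-be-played segments, tier 2 holds possibly-played ones, and the boundary shifts toward tier 1 as segment accesses become frequent. The admission rule and all names below are illustrative assumptions, not the paper's exact mechanism:

```python
class TwoTierCache:
    """Segment cache split into a to-be-played tier and a possibly-played
    tier, with admission control on incoming segments."""
    def __init__(self, capacity, tier1_frac=0.5):
        self.capacity = capacity
        self.tier1_frac = tier1_frac
        self.tier1 = {}   # to-be-played segments
        self.tier2 = {}   # possibly-played segments

    def admit(self, seg_id, data, to_be_played, popularity, min_pop=2):
        if popularity < min_pop:
            return False                   # admission control: skip cold segments
        tier = self.tier1 if to_be_played else self.tier2
        limit = self.capacity * (self.tier1_frac if to_be_played
                                 else 1 - self.tier1_frac)
        if tier and len(tier) >= limit:
            tier.pop(next(iter(tier)))     # evict oldest entry in that tier
        tier[seg_id] = data
        return True

    def grow_tier1(self, step=0.1):
        # Frequent segment accesses: enlarge tier 1 at tier 2's expense.
        self.tier1_frac = min(0.9, self.tier1_frac + step)

tc = TwoTierCache(capacity=4)
print(tc.admit("s1", b"seg", to_be_played=True, popularity=5))  # True
print(tc.admit("s2", b"seg", to_be_played=True, popularity=1))  # False
```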

  12. An Efficient Searching and an Optimized Cache Coherence handling Scheme on DSR Routing Protocol for MANETS

    Directory of Open Access Journals (Sweden)

    Rajneesh Kumar Gujral

    2011-01-01

Full Text Available Mobile ad hoc networks (MANETs) are self-created and self-organized by a collection of mobile nodes, interconnected by multi-hop wireless paths in a strictly peer-to-peer fashion. DSR (Dynamic Source Routing) is an on-demand routing protocol for wireless ad hoc networks that floods route requests when a route is needed. Route caches in intermediate mobile nodes running DSR are used to reduce the flooding of route requests. But with increasing network size and node mobility, the cached routes in the local cache of every mobile node quickly become stale or inefficient. In this paper, for efficient searching, we propose a generic searching algorithm on an associative cache memory organization to speed up the search for single or multiple paths to a destination, if they exist, in an intermediate mobile node's cache, with a complexity of O(n), where n is the number of bits required to represent the searched field. The other major problem of DSR is that the route maintenance mechanism does not locally repair a broken link, and stale cache information can also result in inconsistencies during the route discovery/reconstruction phase. To address this, we propose an optimized cache coherence handling scheme for the on-demand routing protocol (DSR).

  13. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs. These four parameters are: age of data based on periodic request, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for most valuable and difficult to retrieve data in the WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity-degree, and severe probabilities of node failures. These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
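
The value-based replacement idea in this record can be sketched as a scoring function over the four listed parameters, with the lowest-valued item evicted first. The weights and the scoring formula are illustrative assumptions, not the paper's actual VoI policy:

```python
# Each cached sensor reading is scored from: its age, the popularity of
# on-demand requests for it, the interference cost of serving it, and
# how long the sensor had to stay active to capture it.

def voi(age_s, popularity, interference_cost, active_time_s,
        w=(1.0, 2.0, 1.0, 1.5)):
    """Higher value = keep longer. Fresh, popular, cheap-to-serve items
    that were expensive to sense score highest. Weights are assumptions."""
    freshness = 1.0 / (1.0 + age_s)
    return (w[0] * freshness + w[1] * popularity
            - w[2] * interference_cost + w[3] * active_time_s)

def evict(cache):
    """cache: {item_id: (age_s, popularity, interference, active_time)};
    removes and returns the lowest-valued item."""
    victim = min(cache, key=lambda k: voi(*cache[k]))
    del cache[victim]
    return victim

cache = {"hr":  (10.0, 0.9, 0.1, 2.0),   # popular, costly-to-resense reading
         "tmp": (90.0, 0.1, 0.3, 0.5)}   # stale, unpopular reading
print(evict(cache))  # "tmp"
```

Retaining high-value items longest is what lets content be fetched from the cache even when the originating node is unreachable due to fragmentation or failure.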

  14. Multi-Level Web Cache Model Used in Data Grid Application

    Institute of Scientific and Technical Information of China (English)

    CHEN Lei; LI Sanli

    2006-01-01

This paper proposed a novel multilevel data cache model based on Web caches (MDWC), built on network cost in a data grid. By constructing a communication tree of grid sites based on network cost and using a single leader for each data segment within each region, the MDWC makes the most use of the Web caches of other sites whose bandwidth is broad enough to cover the job-executing site. The experimental results indicate that the MDWC reduces data response time and data update cost by avoiding network congestion when designed using parameters drawn from the application environment.

  15. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays...... in the L2 cache. Experiments show that under high concurrency, our optimizations improve the throughput of TUX by up to 40% and the number of requests serviced at the time of failure by 21%....

  16. Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Jordan, Alexander; Brandner, Florian

    2014-01-01

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimation of the program's worst-case execution time is needed, time-predictable computer architectures promise to resolve...... this problem. A stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. Likewise, its implementation requires little hardware overhead. This work introduces an optimization of the standard stack cache to avoid redundant spilling...

  17. Smart Proactive Caching Scheme for Fast Authenticated Handoff in Wireless LAN

    Institute of Scientific and Technical Information of China (English)

    Sin-Kyu Kim; Jae-Woo Choi; Dae-Hun Nyang; Gene-Beck Hahn; Joo-Seok Song

    2007-01-01

Handoff in IEEE 802.11 requires repeated authentication and key exchange procedures, which make the provision of seamless services in wireless LANs more difficult. To reduce this overhead, proactive caching schemes have been proposed. However, they require too many control packets to deliver the security context information to neighbor access points. Our contribution is two-fold: one is a significant decrease in the number of control packets needed for proactive caching, and the other is a superior cache replacement algorithm.

  18. Improve Performance of Data Warehouse by Query Cache

    Science.gov (United States)

    Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod

    2010-11-01

The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can run queries, analysis and planning regardless of data changes in the operational database. As the number of queries is large, there is in certain cases a reasonable probability that the same query is submitted by one or more users at different times. Each time a query is executed, all the data of the warehouse is analyzed to generate the result of that query. In this paper we study how using a query cache improves the performance of a data warehouse, and we examine the common problems faced by data warehouse administrators: minimizing response time and improving overall query efficiency, particularly when the data warehouse is updated at regular intervals.
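
The mechanism described above can be sketched as a query-result cache that memoizes results per query string and drops everything when the warehouse is refreshed, so stale answers are never served. All names below are illustrative assumptions, not the paper's implementation:

```python
class QueryCache:
    """Memoize warehouse query results; invalidate on warehouse refresh."""
    def __init__(self, execute):
        self.execute = execute   # function that actually runs a query
        self.results = {}
        self.hits = 0

    def query(self, sql):
        if sql in self.results:
            self.hits += 1       # repeated query: serve the cached result
            return self.results[sql]
        result = self.execute(sql)
        self.results[sql] = result
        return result

    def on_warehouse_refresh(self):
        self.results.clear()     # ETL load finished: drop every cached result

qc = QueryCache(execute=lambda sql: f"rows for {sql!r}")
qc.query("SELECT SUM(sales) FROM facts")
qc.query("SELECT SUM(sales) FROM facts")
print(qc.hits)  # 1
```

Whole-cache invalidation fits the batch-update pattern the record describes; a finer-grained design would invalidate only results that touch refreshed tables.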

  19. MOBILE-BASED VIDEO CACHING ARCHITECTURE BASED ON BILLBOARD MANAGER

    Directory of Open Access Journals (Sweden)

    Rajesh Bose

    2015-07-01

    Full Text Available Video streaming services are very popular today. Increasingly, users can access multimedia applications and video playback wirelessly on their mobile devices. However, a significant challenge remains in ensuring smooth and uninterrupted transmission of video files of almost any size over a 3G network, and as quickly as possible in order to optimize bandwidth consumption. In this paper, we propose to position our Billboard Manager to provide an optimal transmission rate enabling smooth video playback for a mobile device user connected to a 3G network. Our work focuses on serving user requests by mobile operators from the cached resources managed by the Billboard Manager, and transmitting video files from this pool. The aim is to reduce the load placed on the bandwidth resources of a mobile operator by routing as many user requests as possible away from the internet, where a video would otherwise have to be searched for and, if located, transferred back to the user.

  20. Memory-intensive benchmarks: IRAM vs. cache-based machines

    Energy Technology Data Exchange (ETDEWEB)

    Gaeke, Brian G.; Husbands, Parry; Kim, Hyun Jin; Li, Xiaoye S.; Moon, Hyun Jin; Oliker, Leonid; Yelick, Katherine A.; Biswas, Rupak

    2001-09-29

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic structures, and the ratio of computation to memory operations.

  1. An Intrusion Detection System for Kaminsky DNS Cache poisoning

    Directory of Open Access Journals (Sweden)

    Dhrubajyoti Pathak, Kaushik Baruah

    2013-09-01

    Full Text Available The Domain Name System (DNS) is the largest and most active distributed, hierarchical and scalable database system, and it plays an indispensable role behind the functioning of the Internet as we know it today. A DNS translates human-readable and meaningful domain names such as www.iitg.ernet.in into an Internet Protocol (IP) address such as 202.141.80.6, and is used for locating resources on the World Wide Web. Without DNS, Internet services as we know them would come to a halt. In our thesis, we propose an Intrusion Detection System (IDS) for Kaminsky-style cache poisoning attacks. Our system relies on existing properties of the DNS protocol.
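    One DNS property such a detector can exploit is that a resolver knows the transaction IDs of its own outstanding queries, while a Kaminsky attacker must flood forged responses with guessed IDs. A toy detector built on that observation might look like the following; the threshold, window size, and class names are assumptions of this sketch, not the thesis's actual design.

```python
from collections import deque

class KaminskyDetector:
    """Flags a burst of DNS responses whose transaction IDs match no
    outstanding query -- the signature of Kaminsky-style ID guessing."""
    def __init__(self, threshold=20, window=100):
        self.outstanding = set()   # (qname, txid) of queries in flight
        self.mismatches = deque()  # sliding window of mismatch flags
        self.threshold = threshold
        self.window = window

    def on_query(self, qname, txid):
        self.outstanding.add((qname, txid))

    def on_response(self, qname, txid):
        ok = (qname, txid) in self.outstanding
        if ok:
            self.outstanding.discard((qname, txid))
        self.mismatches.append(0 if ok else 1)
        if len(self.mismatches) > self.window:
            self.mismatches.popleft()
        # True means the mismatch rate in the window crossed the threshold
        return sum(self.mismatches) >= self.threshold
```

    A legitimate response clears its query and contributes nothing to the mismatch count, so only sustained guessing trips the alarm.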

  2. The Cooperative Cache Techniques Used in the Parallel File System PARFSNOW++

    Institute of Scientific and Technical Information of China (English)

    赵欣; 陈道蓄; 谢立

    2000-01-01

    Parallel file systems based on networks of workstations (NOW) are attracting more and more attention. This paper briefly describes PARFSNOW++, a parallel file system based on NOW, with special focus on its cooperative cache technique. Addressing the problems of existing cooperative cache mechanisms, we introduce a new mixed cooperative cache mechanism used in PARFSNOW++, and present the cache coherence mechanism and the data replacement mechanism used within it.

  3. The potential of value management (VM) to improve the consideration of energy efficiency within pre-construction

    Science.gov (United States)

    Tahir, Mohamad Zamhari; Nawi, Mohd Nasrun Mohd; Rajemi, Mohamad Farizal

    2016-08-01

    Energy demand and consumption in buildings will rise rapidly in the near future because of several socio-economic factors, and this situation occurs not only in developed countries but also in developing countries such as Malaysia. There is demand for buildings with energy efficiency features, yet most current buildings are still constructed with conventional designs, contributing to inefficient energy consumption during the operation stage of the building. This paper presents the concept and application of the Value Management (VM) approach and its potential to improve the consideration of energy efficiency within the pre-construction process. Based on the relevant literature, VM provides an efficient and effective delivery system to fulfil the objectives and the client's requirements. VM is discussed and scrutinized with reference to previous studies to see how these concepts contribute to better optimizing the energy consumption of a building by seeking the best-value energy efficiency through the design and construction process. This paper does not draw final conclusions; rather, it is preliminary research towards proposing the most suitable energy efficiency measures to reliably accomplish a function that meets the client's needs, desires and expectations. In future work, a simple quantitative industry survey and VM workshops will be conducted to validate and further improve the research.

  4. VM-ADCP measured upper ocean currents in the southeastern Arabian Sea and Equatorial Indian Ocean during December, 2000

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, V.S.N.; Suryanarayana, A.; Somayajulu, Y.K.; Raikar, V.; Tilvi, V.

    The Vessel-Mounted Acoustic Doppler Current Profiler (VM-ADCP) measured currents in the upper 200 m along the cruise track covering the southeastern Arabian Sea and the Eastern Equatorial Indian Ocean during northern winter monsoon (10-31 December...

  5. Mobile Acoustical Bat Monitoring Annual Summary Report CY 2012- Cache River National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — These reports summarize bat calls collected along transects at Cache River National Wildlife Refuge for the CY 2012. Calls were classified using Bat Call ID software...

  6. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system; this can be applied by all cloud providers, vendors, data distributors, and others. Further, the data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  7. Prospective thinking in a mustelid? Eira barbara (Carnivora) cache unripe fruits to consume them once ripened

    Science.gov (United States)

    Soley, Fernando G.; Alvarado-Díaz, Isaías

    2011-08-01

    The ability of nonhuman animals to project individual actions into the future is a hotly debated topic. We describe the caching behaviour of tayras ( Eira barbara) based on direct observations in the field, pictures from camera traps and radio telemetry, providing evidence that these mustelids pick and cache unripe fruit for future consumption. This is the first reported case of harvesting of unripe fruits by a nonhuman animal. Ripe fruits are readily taken by a variety of animals, and tayras might benefit by securing a food source before strong competition takes place. Unripe climacteric fruits need to be harvested when mature to ensure that they continue their ripening process, and tayras accurately choose mature stages of these fruits for caching. Tayras cache both native (sapote) and non-native (plantain) fruits that differ in morphology and developmental timeframes, showing sophisticated cognitive ability that might involve highly developed learning abilities and/or prospective thinking.

  8. Performance Evaluation of the Random Replacement Policy for Networks of Caches

    CERN Document Server

    Gallo, Massimo; Muscariello, Luca; Simonian, Alain; Tanguy, Christian

    2012-01-01

    The overall performance of content distribution networks as well as recently proposed information-centric networks relies on both memory and bandwidth capacities. In this framework, the hit ratio is the key performance indicator which captures the bandwidth/memory tradeoff for a given global performance. This paper focuses on the estimation of the hit ratio in a network of caches that employ the Random replacement policy. Assuming that requests are independent and identically distributed, general expressions of miss probabilities for a single Random cache are provided, as well as exact results for specific popularity distributions. Moreover, for any Zipf popularity distribution with exponent α > 1, we obtain asymptotic equivalents for the miss probability in the case of large cache size. We extend the analysis to networks of Random caches, when the topology is either a line or a homogeneous tree. In that case, approximations for miss probabilities across the network are derived by assuming that miss events at an...
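    Under the paper's independent-reference assumption, the hit ratio of a single Random cache is easy to estimate by simulation. The sketch below draws requests from a Zipf popularity law and evicts a uniformly random cached item on each miss; all parameter values and function names are arbitrary choices for this illustration.

```python
import bisect
import random

def make_zipf_sampler(n, alpha, rng):
    """Sampler for a Zipf(alpha) popularity law over items 0..n-1."""
    cum, total = [], 0.0
    for i in range(n):
        total += 1.0 / (i + 1) ** alpha
        cum.append(total)
    return lambda: bisect.bisect_left(cum, rng.random() * cum[-1])

def random_cache_hit_ratio(n_items=1000, cache_size=100, alpha=1.2,
                           n_requests=20000, seed=1):
    """Simulate one cache under the Random replacement policy."""
    rng = random.Random(seed)
    sample = make_zipf_sampler(n_items, alpha, rng)
    cache, hits = set(), 0
    for _ in range(n_requests):
        item = sample()
        if item in cache:
            hits += 1
        else:
            if len(cache) >= cache_size:
                cache.discard(rng.choice(sorted(cache)))  # evict at random
            cache.add(item)
    return hits / n_requests
```

    Such a simulation is a useful sanity check against the paper's closed-form miss-probability expressions for a single cache.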

  9. Mobile Acoustical Bat Monitoring Annual Summary Report CY 2012 to 2015 - Cache River National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — These reports summarize bat calls collected along transects at Cache River NWR between 2012 and 2015. Calls were classified using Bat Call ID ([BCID] version 2.5a)...

  10. Cache River National Wildlife Refuge [Land Status Map: Sheet 5 of 9

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This map was produced by the Division of Realty to depict landownership at Cache River National Wildlife Refuge. It was generated from rectified aerial photography,...

  11. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    National Research Council Canada - National Science Library

    Zhaohui Luo; Minghui LiWang; Zhijian Lin; Lianfen Huang; Xiaojiang Du; Mohsen Guizani

    2017-01-01

    Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm to provide caching capabilities in proximity to mobile devices in 5G networks, enables fast, popular content delivery of delay-sensitive...

  12. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2010-01-01

    Full Text Available This paper studies distributed caching management for the current flourishing of streaming applications in multihop wireless networks. Many caching management schemes to date use a randomized network coding approach, which provides an elegant solution for ubiquitous data access in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to change. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then re-encode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to that of conventional network coding, with only a sublinear overhead. Our scheme offers a promising solution to caching management for streaming data.

  13. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liu Jiangchuan

    2010-01-01

    Full Text Available This paper studies distributed caching management for the current flourishing of streaming applications in multihop wireless networks. Many caching management schemes to date use a randomized network coding approach, which provides an elegant solution for ubiquitous data access in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to change. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then re-encode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to that of conventional network coding, with only a sublinear overhead. Our scheme offers a promising solution to caching management for streaming data.

  14. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  15. dCache: Big Data storage for HEP communities and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [DESY; Behrmann, G. [Unlisted, DK; Bernardt, C. [DESY; Fuhrmann, P. [DESY; Litvintsev, D. [Fermilab; Mkrtchyan, T. [DESY; Petersen, A. [DESY; Rossi, A. [Fermilab; Schwank, K. [DESY

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies, offering new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as alternatives to SRM for managing data, and integration with alternative authentication mechanisms.

  16. Cache Attacks on Block Ciphers

    Institute of Scientific and Technical Information of China (English)

    赵新杰; 王韬; 郭世泽; 刘会英

    2012-01-01

    In recent years, cache attacks have become one of the most threatening attacks against block ciphers implemented on microprocessors, and research in this area is a hot spot of cryptographic side-channel attacks. This paper surveys cache attacks on block ciphers. The mechanism of the cache and the difference in side-channel information between cache hits and misses are described, and the characteristics of table-lookup cache accesses and the corresponding information leakage are analyzed. Several typical cache attack techniques on block ciphers are discussed from the aspects of attack model, analysis method, and research progress. Finally, the features of cache attacks are summarized, current research pitfalls are identified, and future directions are given.

  17. [The toxicity of massive doses of VM26 (4-demethyl-epipodophyllotoxin-beta-D-thenylidene glucoside). A contribution to the therapy of advanced ovarian cancer (author's transl)].

    Science.gov (United States)

    Jankowski, R P; Vahrson, H

    1977-12-09

    19 patients with advanced ovarian cancer (FIGO stages IIb-IV) were treated with ultra-high doses of VM26, partly according to positive oncobiograms, during multiple-drug stoss therapy. The toxic reactions and the cytostatic effect were investigated. The rate of 57.9% remissions and the moderate toxicity suggest a specific application of VM26 in advanced ovarian cancer.

  18. NVFAT: A FAT-Compatible File System with NVRAM Write Cache for Its Metadata

    Science.gov (United States)

    Doh, In Hwan; Lee, Hyo J.; Moon, Young Je; Kim, Eunsam; Choi, Jongmoo; Lee, Donghee; Noh, Sam H.

    File systems make use of the buffer cache to enhance their performance. Traditionally, part of DRAM, which is volatile memory, is used as the buffer cache. In this paper, we consider the use of Non-Volatile RAM (NVRAM) as a write cache for the metadata of the file system in embedded systems. NVRAM is a state-of-the-art memory that provides both non-volatility and random byte addressability. By employing NVRAM as a write cache for dirty metadata, we retain the same integrity as a file system that always synchronously writes its metadata to storage, while at the same time improving file system performance to the level of a file system that always writes asynchronously. To obtain quantitative results, we developed an embedded board with NVRAM and modified the VFAT file system provided in Linux 2.6.11 to accommodate the NVRAM write cache. We performed a wide range of experiments on this platform for various synthetic and realistic workloads. The results show that substantial reductions in execution time are possible from an application viewpoint. Another consequence of the write cache is its benefit at the FTL layer, leading to improved wear leveling of Flash memory and increased energy savings, which are important measures in embedded systems. From the real numbers obtained through our experiments, we show that wear leveling is improved considerably, and we also quantify the improvements in terms of energy.
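    The central mechanism, a synchronous NVRAM write cache for dirty metadata that is flushed lazily to slow storage, can be modeled in a few lines. Coalescing repeated updates to the same metadata item before they reach Flash is also what drives the wear-leveling and energy gains reported above. Everything below (the class name, the dict-based "NVRAM") is an illustrative stand-in, not the NVFAT implementation.

```python
class MetadataWriteCache:
    """Toy model of an NVRAM write cache for file-system metadata.
    Dirty metadata goes to 'nvram' (persistent and byte-addressable in
    the real hardware) immediately, and reaches 'storage' only on flush."""
    def __init__(self):
        self.nvram = {}      # survives power loss on real hardware
        self.storage = {}    # slow medium (e.g., Flash)
        self.storage_writes = 0

    def update_metadata(self, key, value):
        self.nvram[key] = value  # synchronous, but at memory speed

    def flush(self):
        for key, value in self.nvram.items():
            self.storage[key] = value
            self.storage_writes += 1
        self.nvram.clear()

    def recover(self):
        """After a crash: replay surviving NVRAM contents into storage."""
        self.flush()
        return dict(self.storage)
```

    Repeated updates to the same FAT entry cost several fast NVRAM writes but only one storage write at flush time, which is exactly the write-reduction effect that benefits the FTL layer.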

  19. BACH:A Bandwidth-Aware Hybrid Cache Hierarchy Design with Nonvolatile Memories

    Institute of Scientific and Technical Information of China (English)

    Jishen Zhao; Cong Xu; Tao Zhang; Yuan Xie

    2016-01-01

    Limited main memory bandwidth is becoming a fundamental performance bottleneck in chip-multiprocessor (CMP) design, yet directly increasing the peak memory bandwidth can incur high cost and power consumption. In this paper, we address this problem by proposing BACH, a bandwidth-aware reconfigurable cache hierarchy with hybrid memory technologies. Components of our BACH design include a hybrid cache hierarchy, a reconfiguration mechanism, and a statistical prediction engine. Our hybrid cache hierarchy chooses among memory technologies with different bandwidth characteristics, such as spin-transfer torque memory (STT-MRAM), resistive memory (ReRAM), and embedded DRAM (eDRAM), to configure each level so that the peak bandwidth of the overall cache hierarchy is optimized. Our reconfiguration mechanism can dynamically adjust the cache capacity of each level based on the predicted bandwidth demands of running workloads; the bandwidth prediction is performed by our prediction engine. We evaluate the system performance gain obtained by the BACH design with a set of multithreaded and multiprogrammed workloads, with and without a limit on the system power budget. Compared with a traditional SRAM-based cache design, BACH improves system throughput by 58% and 14% with multithreaded and multiprogrammed workloads, respectively.

  20. LastingNVCache: A Technique for Improving the Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL; Li, Dong [ORNL

    2014-01-01

    Use of NVM (non-volatile memory) devices such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches holds the promise of providing a high-density, low-leakage alternative to SRAM. However, the low write endurance of NVMs, along with the write variation introduced by existing cache management schemes, may significantly limit the lifetime of NVM caches. We present LastingNVCache, a technique for improving the lifetime of NVM caches by mitigating intra-set write variation. LastingNVCache works on the key idea that by periodically flushing a frequently-written data-item, the block can be made to load into a cold block in the set the next time it is accessed. Through this, future writes to that data-item are redirected from a hot block to a cold block, which improves the cache lifetime. Microarchitectural simulations show that LastingNVCache provides 6.36X, 9.79X, and 10.94X improvement in lifetime for single, dual and quad-core systems, respectively. Its implementation overhead is small, and it outperforms a recently proposed technique for improving the lifetime of NVM caches.
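    The flush-based idea can be illustrated with a simplified model of one cache set: writes are tracked per block, and every N writes the hottest resident data-item is flushed so that its next write reloads into the coldest block. The victim-selection and flush policies here are deliberately naive stand-ins for the paper's mechanism.

```python
def simulate_set(writes, ways=4, flush_interval=None):
    """Replay a stream of data-item writes into one cache set and return
    per-block write counts. With flush_interval set, the hottest resident
    item is periodically flushed so its next write lands in the coldest
    block -- a heavily simplified version of the LastingNVCache idea."""
    loc = {}                     # data item -> block index
    counts = [0] * ways          # lifetime writes absorbed per block
    for t, item in enumerate(writes, 1):
        if item not in loc:
            victim = counts.index(min(counts))  # reload into coldest block
            loc = {k: v for k, v in loc.items() if v != victim}
            loc[item] = victim
        counts[loc[item]] += 1
        if flush_interval and t % flush_interval == 0:
            hottest = max(loc, key=lambda k: counts[loc[k]])
            del loc[hottest]     # flush: forces a reload elsewhere
    return counts
```

    On a stream that writes a single hot item 200 times, the worst block absorbs all 200 writes without flushing but only 50 with a flush every 10 writes, i.e. intra-set write variation is evened out at the cost of periodic reloads.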

  1. Combining Pre-fetching and Intelligent Caching Technique (SVM) to Predict Attractive Tourist Places

    Directory of Open Access Journals (Sweden)

    K.R. Baskaran

    2015-01-01

    Full Text Available Combining Web caching and Web pre-fetching techniques makes it possible to obtain the required information almost instantaneously; it also improves bandwidth utilization, reduces load on the origin server, and reduces access delay. Web pre-fetching is the process of fetching in advance some of the Web pages predicted to be used by the user in the near future, and caching is the process of storing the pre-fetched Web pages in the cache memory. In the literature, many interesting works have been reported separately for Web caching and for Web pre-fetching. In this study we combine pre-fetching (using clustering) and caching (using SVM) to keep track of the tourist spots that are likely to be visited by tourists in the near future, based on the previous history of visits. With the help of real data, we demonstrate that our approach is superior to a clustering-based pre-fetching technique with a traditional LRU-based caching policy that does not use SVM.

  2. Milestone Report - Level-2 Milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache

    Energy Technology Data Exchange (ETDEWEB)

    Shoopman, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Description: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015, subsystems with 6 PB of disk cache were deployed for production use in LLNL's unclassified HPSS environment. Following that, on September 23, 2015, subsystems with 9 PB of disk cache were deployed for production use in LLNL's classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL's unclassified and classified HPSS environments, and only the newly deployed systems were in use.

  3. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most-visited locations and proactively pushes cache content to mobile users, which reduces the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
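    The Markov-chain side of the second process can be sketched as follows: transition counts between locations are learned from past trajectories, and content is pushed to the cache proxies at the most probable next locations. This is an illustrative reduction of the idea; the function names and first-order model are assumptions of this sketch.

```python
from collections import defaultdict

def train_mobility_model(trajectories):
    """First-order Markov model of user movement between locations:
    counts[a][b] is how often location b directly followed location a."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in trajectories:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return counts

def push_targets(counts, current, k=2):
    """Proxies that should receive pushed content: those at the k most
    likely next locations from the user's current location."""
    nxt = counts.get(current, {})
    return sorted(nxt, key=nxt.get, reverse=True)[:k]
```

    Pushing content ahead of the user's likely movement is what lets the proxy answer queries locally, so the LBS server never learns the precise location.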

  4. Energy efficiency of on-demand video caching systems and user behavior.

    Science.gov (United States)

    Chan, Chien Aun; Wong, Elaine; Nirmalathas, Ampalavanapillai; Gygax, André F; Leckie, Christopher

    2011-12-12

    Energy-efficient video distribution systems have become an important tool to deal with the rapid growth in Internet video traffic and to maintain the environmental sustainability of the Internet. Due to the limitations in terms of energy-efficiency of the conventional server centric method for delivering video services to the end users, storing video contents closer to the end users could potentially achieve significant improvements in energy-efficiency. Because of dissimilarities in user behavior and limited cache sizes, caching systems should be designed according to the behavior of user communities. In this paper, several energy consumption models are presented to evaluate the energy savings of single-level caching and multi-level caching systems that support varying levels of similarity in user behavior. The results show that single level caching systems can achieve high energy savings for communities with high similarity in user behavior. In contrast, when user behavior is dissimilar, multi-level caching systems should be used to increase the energy efficiency.

  5. Multi-bit soft error tolerable L1 data cache based on characteristic of data value

    Institute of Scientific and Technical Information of China (English)

    WANG Dang-hui; LIU He-peng; CHEN Yi-ran

    2015-01-01

    Due to continuously decreasing feature sizes and increasing device density, on-chip caches have become susceptible to single event upsets, which result in multi-bit soft errors. The increasing rate of multi-bit errors could result in a high risk of data corruption and even application crashes. Traditionally, L1 D-caches have been protected from soft errors using simple parity to detect errors, recovering by reading correct data from the L2 cache, which induces a performance penalty. This work proposes to exploit redundancy based on the characteristics of data values. In the case of a small data value, the replica is stored in the upper half of the word; the replica of a big data value is stored in a dedicated cache line, which sacrifices some capacity of the data cache. Experimental results show that the reliability of the L1 D-cache is improved by 65% at the cost of 1% in performance.
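    The small-value replication scheme can be illustrated with plain bitwise code: a value that fits in the lower half of a 32-bit word is mirrored into the otherwise-unused upper half, so that a detected error in one half can be repaired from the other without going to L2. The recovery path below is simplified (it assumes the upper half is the intact copy when parity flags an error); function names are invented for this sketch.

```python
HALF = 16                      # bits in a half-word
HALF_MASK = (1 << HALF) - 1

def encode(value):
    """Small values (fitting the lower half-word) are mirrored into the
    otherwise-unused upper half; big values are returned unchanged (the
    paper stores their replicas in a dedicated cache line instead)."""
    if value <= HALF_MASK:
        return (value << HALF) | value
    return value

def decode(word, parity_error=False):
    """Read back a word, repairing a small value from its replica when
    parity has flagged an error."""
    low = word & HALF_MASK
    high = word >> HALF
    if high == low:
        return low             # replica consistent: no correction needed
    if parity_error:
        return high            # lower half corrupted: restore from replica
    return word                # big value, stored without an in-word replica
```

    The repair stays entirely inside the L1 line, which is why the scheme avoids the L2-refill penalty of parity-only protection.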

  6. Towards High-Performance Network Application Identification With Aggregate-Flow Cache

    Directory of Open Access Journals (Sweden)

    Fei He

    2011-05-01

    Full Text Available Classifying network traffic according to application-layer protocols is an important task in modern networks for traffic management and network security. Existing payload-based or statistical methods of application identification cannot meet the demand for both high performance and accurate identification at the same time. We propose an application identification framework that classifies traffic at the aggregate-flow level, leveraging an aggregate-flow cache. A detailed traffic classifier designed on this framework is illustrated to improve the throughput of payload-based identification methods. We further optimize the classifier by proposing an efficient design of the aggregate-flow cache. The cache design employs a frequency-based, recency-aware replacement algorithm based on an analysis of the temporal locality of the aggregate-flow cache. Experiments on real-world traces show that our traffic classifier with the aggregate-flow cache can offload up to 95% of the workload of the backend identification engine. The proposed cache replacement algorithm outperforms well-known replacement algorithms and achieves 90% of the optimal performance using only 15% of the memory. The throughput of a payload-based identification system, L7-filter [1], is increased by up to 5.1 times by using our traffic classifier design.
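    A frequency-based, recency-aware replacement policy in the spirit the abstract describes can be sketched as an LFU cache whose counts decay periodically, so long-idle aggregate flows lose their accumulated frequency. The decay schedule, key layout, and class name are assumptions of this sketch, not the paper's design.

```python
class AggregateFlowCache:
    """Maps aggregate-flow keys to identified applications, evicting the
    entry with the lowest decayed frequency (LFU with periodic aging)."""
    def __init__(self, capacity=4, decay=0.5, epoch=100):
        self.capacity = capacity
        self.decay = decay
        self.epoch = epoch
        self.freq = {}       # aggregate-flow key -> decayed frequency
        self.app = {}        # aggregate-flow key -> identified application
        self.ticks = 0

    def lookup(self, key):
        self.ticks += 1
        if self.ticks % self.epoch == 0:     # aging step: recency-awareness
            for k in self.freq:
                self.freq[k] *= self.decay
        if key in self.app:
            self.freq[key] += 1
            return self.app[key]             # hit: skip the backend engine
        return None                          # miss: backend must classify

    def insert(self, key, application):
        if len(self.app) >= self.capacity and key not in self.app:
            victim = min(self.freq, key=self.freq.get)
            del self.freq[victim]
            del self.app[victim]
        self.app[key] = application
        self.freq[key] = self.freq.get(key, 0) + 1
```

    Every hit is a packet stream the payload-based backend never has to inspect, which is the source of the throughput gain reported above.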

  7. Advanced neuroblastoma: improved response rate using a multiagent regimen (OPEC) including sequential cisplatin and VM-26.

    Science.gov (United States)

    Shafford, E A; Rogers, D W; Pritchard, J

    1984-07-01

    Forty-two children, all over one year of age, were given vincristine, cyclophosphamide, and sequentially timed cisplatin and VM-26 (OPEC) or OPEC and doxorubicin (OPEC-D) as initial treatment for newly diagnosed stage III or IV neuroblastoma. Good partial response was achieved in 31 patients (74%) overall and in 28 (78%) of 36 patients whose treatment adhered to the chemotherapy protocol, compared with a 65% response rate achieved in a previous series of children treated with pulsed cyclophosphamide and vincristine with or without doxorubicin. Only six patients, including two of the six children whose treatment did not adhere to protocol, failed to respond, but there were five early deaths from treatment-related complications. Tumor response to OPEC, which was the less toxic of the two regimens, was at least as good as tumor response to OPEC-D. Cisplatin-induced morbidity was clinically significant in only one patient and was avoided in others by careful monitoring of glomerular filtration rate and hearing. Other centers should test the efficacy of OPEC or equivalent regimens in the treatment of advanced neuroblastoma.

  8. Optimal bandwidth-aware VM allocation for Infrastructure-as-a-Service

    CERN Document Server

    Dutta, Debojyoti; Post, Ian; Shinde, Rajendra

    2012-01-01

    Infrastructure-as-a-Service (IaaS) providers need to offer richer services to be competitive while optimizing their resource usage to keep costs down. Richer service offerings include new resource request models involving bandwidth guarantees between virtual machines (VMs). Thus we consider the following problem: given a VM request graph (where nodes are VMs and edges represent virtual network connectivity between the VMs) and a real data center topology, find an allocation of VMs to servers that satisfies the bandwidth guarantees for every virtual network edge---which maps to a path in the physical network---and minimizes congestion of the network. Previous work has shown that for arbitrary networks and requests, finding the optimal embedding satisfying bandwidth requests is $\\mathcal{NP}$-hard. However, in most data center architectures, the routing protocols employed are based on a spanning tree of the physical network. In this paper, we prove that the problem remains $\\mathcal{NP}$-hard even when the phys...
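    Since routing is assumed to follow a spanning tree, each VM-to-VM demand maps to the unique tree path between the hosting servers, and congestion is the maximum total bandwidth crossing any edge. The sketch below evaluates one candidate placement under that model (the hard part, finding an optimal placement, is exactly what the paper shows to be NP-hard); the tree encoding and names are assumptions of this illustration.

```python
def path_to_root(node, parent):
    """Edges (child, parent) from a node up to the tree root."""
    edges = []
    while node in parent:
        edges.append((node, parent[node]))
        node = parent[node]
    return edges

def congestion(parent, placement, demands):
    """Maximum total bandwidth crossing any tree edge when every VM-to-VM
    demand follows the unique tree path between its hosting servers."""
    load = {}
    for (vm_a, vm_b), bw in demands.items():
        up_a = set(path_to_root(placement[vm_a], parent))
        up_b = set(path_to_root(placement[vm_b], parent))
        for edge in up_a ^ up_b:   # shared suffix toward the root cancels
            load[edge] = load.get(edge, 0) + bw
    return max(load.values(), default=0)
```

    The symmetric difference of the two root paths yields exactly the edges on the path between the two servers, because the portion they share toward the root cancels out.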

  9. THE DEFORMATION EFFECT OF VM SLIDER MULTI COMPLEX MACHINE SERIES ON PRECISION MACHINING

    Directory of Open Access Journals (Sweden)

    Berezhnoy S. B.

    2015-09-01

    Full Text Available The article addresses the problems of increasing the economic growth of the Russian Federation and developing high-tech, knowledge-intensive manufacturing industries on the basis of a fundamentally new technological order and new unmanned technologies. It discusses measures to improve the accuracy of manufacturing of complex and large-sized parts. Currently, the technical level of many sectors of the economy is largely determined by the level of production of the means of production, the basis of which is the machine-tool industry. Machine-tool development underpins the comprehensive mechanization and automation of production processes in industry, construction, agriculture, transport and other sectors. We carried out a comprehensive analysis of the errors affecting the manufacturing precision of parts and proposed measures for improving manufacturing accuracy based on the VM 32 series multi-machine complex. We analyzed the influence of the cutting forces and of the cross-sectional shape of the slider on its deformation for various types of processing, and determined the optimal cross-sectional shape of the slider to increase stiffness and reduce its deformation during metal cutting.

  10. Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth

    Science.gov (United States)

    Pratt, Lisa; Beaty, David; Westall, Frances; Parnell, John; Poulet, François

    2010-05-01

    Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth Lisa Pratt, David Beatty, Frances Westall, John Parnell, François Poulet, and the MRR-SAG team The search for preserved evidence of life is the keystone concept for a new generation of Mars rover capable of exploring, sampling, and caching diverse suites of rocks from outcrops. The proposed mission is conceived to address two general objectives: conduct high-priority in situ science and make concrete steps towards the possible future return of samples to Earth. We propose the name Mars Astrobiology Explorer-Cacher (MAX-C) to best reflect the dual purpose of the proposed mission. The scientific objective of the proposed MAX-C would require rover access to a site with high preservation potential for physical and chemical biosignatures in order to evaluate paleo-environmental conditions, characterize the potential for preservation of biosignatures, and access multiple sequences of geological units in a search for evidence of past life and/or prebiotic chemistry. Samples addressing a variety of high-priority scientific objectives should be collected, documented, and packaged in a manner suitable for possible return to Earth by a future mission. Relevant experience from study of ancient terrestrial strata, martian meteorites, and from the Mars exploration Rovers indicates that the proposed MAX-C's interpretive capability should include: meter to submillimeter texture (optical imaging), mineral identification, major element content, and organic molecular composition. Analytical data should be obtained by direct investigation of outcrops and should not entail acquisition of rock chips or powders. We propose, therefore, a set of arm-mounted instruments that would be capable of interrogating a relatively smooth, abraded surface by creating co-registered 2-D maps of visual texture, mineralogy and geochemical properties. This approach is judged to have particularly high

  11. Distributed caching methods in small cell networks

    OpenAIRE

    Bastug, Ejder

    2015-01-01

    This thesis explores one of the key enablers of 5G wireless networks leveraging small cell network deployments, namely proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context-awareness and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands, via caching at base stations and users' devices. In order to show the effectiveness of proactive caching techniques, we tackle the proble...

  12. Performance evaluation of the General Electric eXplore CT 120 micro-CT using the vmCT phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bahri, M.A., E-mail: M.Bahri@ulg.ac.be [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Warnock, G.; Plenevaux, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Choquet, P.; Constantinesco, A. [Biophysique et Medecine Nucleaire, Hopitaux universitaires de Strasbourg, Strasbourg (France); Salmon, E.; Luxen, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Seret, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); ULg-Liege University, Experimental Medical Imaging, Liege (Belgium)

    2011-08-21

    The eXplore CT 120 is the latest generation micro-CT from General Electric. It is equipped with a high-power tube and a flat-panel detector. It allows high resolution and high contrast fast CT scanning of small animals. The aim of this study was to compare the performance of the eXplore CT 120 with that of the eXplore Ultra, its predecessor, for which the methodology using the vmCT phantom has already been described. The phantom was imaged using typical rat (fast scan or F) or mouse (in vivo bone scan or H) scanning protocols. With the slanted edge method, a 10% modulation transfer function (MTF) was observed at 4.4 (F) and 3.9-4.4 (H) mm{sup -1} corresponding to 114 {mu}m resolution. A considerably larger MTF was obtained with the coil method, the MTF for the thinnest coil (3.3 mm{sup -1}) being equal to 0.32 (F) and 0.34 (H). The geometric accuracy was better than 0.3%. There was a highly linear (R{sup 2}>0.999) relationship between measured and expected CT numbers for both the CT number accuracy and linearity sections of the phantom. A cupping effect was clearly seen on the uniform slices and the uniformity-to-noise ratio ranged from 0.52 (F) to 0.89 (H). The air CT number depended on the amount of polycarbonate surrounding the area where it was measured; a difference as high as approximately 200 HU was observed. This hindered the calibration of this scanner in HU and is likely due to the absence of corrections for beam hardening and scatter in the reconstruction software. However, in view of the high linearity of the system, the implementation of these corrections would allow a good quality calibration of the scanner in HU. In conclusion, the eXplore CT 120 achieved a better spatial resolution than the eXplore Ultra (based on previously reported specifications), and future software developments will include beam hardening and scatter corrections that will make the new generation CT scanner even more promising.

  13. Space-geodetic and water level gauge constraints on continental uplift and tilting over North America: regional convergence of the ICE-6G_C (VM5a/VM6) models

    Science.gov (United States)

    Roy, Keven; Peltier, W. R.

    2017-08-01

    We present a series of analyses of the regional convergence of the iterative methodology that has been developed for use in the construction of global models of the glacial isostatic adjustment process. Our specific focus is upon the North American component of such models which embodied the largest concentration of grounded land ice at the Last Glacial Maximum. We show that, although the introduction of the VM6 viscosity structure helps the global ICE-6G_C (VM5a) model to improve the fit to relative sea level data from the region of forebulge collapse along the U.S. East coast, it also leads to a significant misfit to the totality of the available space-geodetic observations, which the original ICE-6G_C (VM5a) model was able to fit with high accuracy. This raises the issue of the convergence of the iterative methodology being employed in the process of model construction. We demonstrate through detailed further analysis that a fully converged solution which reconciles all available data from the continent, including additional data on the time dependent de-levelling of the Great Lakes region, is obtained through modest further modifications of both the viscosity structure of the model and the North American component of the surface mass loading history.

  14. Primary exploration of the efficacy of VM26 combined with MeCCNU therapy for low-grade gliomas

    Institute of Scientific and Technical Information of China (English)

    曹广辉; 谭卫国; 冯鸣; 申昊; 王君祥; 黄煜伦; 周幽心

    2011-01-01

    Objective To explore the efficacy of VM26 combined with MeCCNU therapy for MGMT-positive low-grade gliomas after tumor resection and radiotherapy. Methods The clinical data of 22 MGMT-positive patients who underwent total removal of tumors, with enhanced MRI imaging showing no residual tumor, from Jan 2005 to Oct 2008 in our department were analyzed retrospectively. Of the 22 patients, 10 had diffuse astrocytomas, 3 had oligodendrogliomas, 5 had mixed oligo-astrocytomas and 4 had ependymomas. These patients received three-dimensional conformal radiotherapy 2-4 weeks after surgery, with an exposure dose of 50-55 Gy delivered over 6-7 weeks. Two weeks after the end of radiotherapy, the patients underwent VM26 combined with MeCCNU therapy (VM26: 50 mg/m2/day, intravenous dripping for 3 days, repeated after 8 weeks for 3 cycles; MeCCNU: 50 mg/m2/day, taken orally for 3 days, repeated after 8 weeks for 4 cycles) with regular MRI scanning. Results Of the 22 MGMT-positive grade Ⅱ gliomas, 6 cases were positive for both TopoⅡα and Pgp, 10 and 2 were positive only for TopoⅡα and Pgp, respectively, and 4 were negative for both. Follow-up from 2 to 5 years showed that tumors recurred in 6 cases, with an average time to recurrence of 16.6 months; CT or MRI imaging showed no tumor recurrence in 13 cases; 3 patients underwent a second surgery because MRI imaging showed possible tumor recurrence 2 years after the prior surgery, and pathological examination reported that the suspected recurrent tumor was necrotic tissue. In these 22 patients, the one-year survival rate was 100% and the two-year survival rate was 88.9%. Conclusions VM26 combined with MeCCNU chemotherapy can effectively inhibit the growth of gliomas, with few side effects and good safety. It is a therapeutic option for patients with MGMT-positive low-grade gliomas after tumor resection and radiotherapy.

  15. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows the degree of data locality to be selected freely. It may be used in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during Run 2. The system has proven to be easily usable, while providing substantial performance improvements. Since confirming the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.
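The idea of caching only volatile, dynamically selected hot data can be sketched in a few lines; this is a hypothetical miniature for illustration, not the authors' prototype, and the names (`CoordinatedCache`, `hot_threshold`) are invented.

```python
from collections import Counter

class CoordinatedCache:
    """Sketch of a coordinated node-local cache: only data accessed at
    least `hot_threshold` times is copied to local storage, and only
    while local capacity remains; everything else is read from shared
    storage over the network."""

    def __init__(self, capacity, hot_threshold=2):
        self.capacity = capacity
        self.hot_threshold = hot_threshold
        self.access_counts = Counter()
        self.local = set()          # files currently held on local storage

    def access(self, filename):
        self.access_counts[filename] += 1
        if filename in self.local:
            return "local"          # data-local read
        # Cache only "hot" files, and only while space remains.
        if (self.access_counts[filename] >= self.hot_threshold
                and len(self.local) < self.capacity):
            self.local.add(filename)
        return "remote"             # served over the network this time
```

With a threshold of two, the first two reads of a file go over the network and every later read is data-local, which mirrors the "only a fraction of local storage is required" property: cold data never occupies local space.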

  16. An Efficient Query Rewriting Approach for Web Cached Data Management

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the development of the internet, querying data on the Web has become an important problem, often involving information from distributed and dynamically related Web sources. Some sub-queries can be effectively answered from previously cached queries or materialized views, achieving better query performance through the notion of query rewriting. In this paper, we propose a novel query-rewriting model, called the Hierarchical Query Tree, for representing Web queries. A Hierarchical Query Tree is a labeled tree that is suitable for representing the inherent hierarchical structure of data on the Web. Based on the Hierarchical Query Tree, we use a case-based approach to determine what the query results should be. Both queries and query results are represented as labeled trees, so the same model can be used for representing cases, and intermediate query results can be dynamically updated by user queries. We show that our case-based method can be used to answer a new query based on the combination of previous queries, accommodating changes of requirements and various information sources.

  17. Wireless Device-to-Device Communications with Distributed Caching

    CERN Document Server

    Golrezaei, Negin; Molisch, Andreas F

    2012-01-01

    We introduce a novel wireless device-to-device (D2D) collaboration architecture that exploits distributed storage of popular content to enable frequency reuse. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our analysis depends on the user content request statistics which are modeled by a Zipf distribution. Our main result is a closed form expression of the optimal collaboration distance as a function of the content reuse distribution parameters. We show that if the Zipf exponent of the content reuse distribution is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior and therefore there is no need to cent...
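The Zipf request model used in this analysis is easy to make concrete. The sketch below computes the normalized Zipf popularity of a content catalog and the probability that a request hits a cache holding the most popular files; the function names are illustrative, and the paper's closed-form optimal collaboration distance is not reproduced here.

```python
def zipf_popularity(num_files, gamma):
    """Zipf request distribution: P(file i) is proportional to 1 / i^gamma,
    so a larger exponent gamma concentrates requests on fewer files."""
    weights = [1.0 / (i ** gamma) for i in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(popularity, cache_size):
    """Probability that a request is served from a cache holding the
    `cache_size` most popular files (popularity sorted descending)."""
    return sum(popularity[:cache_size])
```

The abstract's threshold at Zipf exponent 1 is exactly about how fast these probabilities concentrate: for gamma > 1 a small cache already captures most requests, which is what enables the linear scaling of interference-free D2D pairs.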

  18. Evidence that Cache Valley virus induces congenital malformations in sheep.

    Science.gov (United States)

    Chung, S I; Livingston, C W; Edwards, J F; Crandell, R W; Shope, R E; Shelton, M J; Collisson, E W

    1990-02-01

    An outbreak of congenital abnormalities occurred in sheep at San Angelo, Texas, between December 1986 and February 1987. Of 360 lambs born, 19.2% had arthrogryposis or other musculo-skeletal problems and hydranencephaly (AGH), and the total neonatal loss was 25.6%. In 1987, all ewes that were tested with AGH lambs had antibody to Cache Valley virus (CVV), whereas 62% of the ewes with normal lambs had CVV-specific antibody. Pre-colostral serum samples from AGH lambs had neutralizing antibody to CVV. An increase in prevalence of CVV-specific antibody, from 5% during the spring of 1986 to 63.4% during the winter of 1987, occurred during a time that included the gestation of these affected lambs, as well as a period of increased rainfall. The isolation of a CVV-related strain from a sentinel sheep in October 1987 confirmed the continued presence of this virus in the pasture where this outbreak occurred and provided a recent field strain for future studies.

  19. Postglacial Rebound Model ICE-6G_C (VM5a) Constrained by Geodetic and Geologic Observations

    Science.gov (United States)

    Peltier, W. R.; Argus, D. F.; Drummond, R.

    2014-12-01

    We fit the revised global model of glacial isostatic adjustment ICE-6G_C (VM5a) to all available data, consisting of several hundred GPS uplift rates, a similar number of 14C dated relative sea level histories, and 62 geologic estimates of changes in Antarctic ice thickness. The mantle viscosity profile, VM5a, is a simple multi-layer fit to prior model VM2 of Peltier (1996, Science). However, the revised deglaciation history, ICE-6G (VM5a), differs significantly from previous models in the Toronto series. (1) In North America, GPS observations of vertical uplift of Earth's surface from the Canadian Base Network require the thickness of the Laurentide ice sheet at Last Glacial Maximum to be significantly revised. At Last Glacial Maximum the new model ICE-6G_C in this region is, relative to ICE-5G, roughly 50 percent thicker east of Hudson Bay (in the northern Quebec and Labrador region) and roughly 30 percent thinner west of Hudson Bay (in Manitoba, Saskatchewan, and the Northwest Territories); the net change in mass, however, is small. We find that rates of gravity change determined by GRACE, when corrected for the predictions of ICE-6G_C (VM5a), are significantly smaller than residuals determined on the basis of earlier models. (2) In Antarctica, we fit GPS uplift rates, geologic estimates of changes in ice thickness, and geologic constraints on the timing of ice loss. The resulting deglaciation history also differs significantly from prior models. The contribution of Antarctic ice loss to global sea level rise since Last Glacial Maximum in ICE-6G_C is 13.6 meters, less than in ICE-5G (17.5 m), but significantly larger than in both the W12A model of Whitehouse et al. [2012] (8 m) and the IJ05 R02 model of Ivins et al. [2013] (7.5 m). In ICE-6G_C rapid ice loss occurs in Antarctica from 11.5 to 8 thousand years ago, with a rapid onset at 11.5 ka, thereby contributing significantly to Meltwater Pulse 1B. In ICE-6G_C (VM5a), viscous uplift of Antarctica is increasing

  20. Fast-Solving Quasi-Optimal LS-S$³$VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2017-02-14

    The semisupervised least squares support vector machine (LS-S³VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S³VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S³VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S³VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have similar

  1. Research on cache strategy in GlusterFS

    Institute of Scientific and Technical Information of China (English)

    周凡夫; 管海兵; 朱二周

    2012-01-01

    Cache strategies are widely applied in parallel file systems to improve performance. The paper outlines the characteristics of GlusterFS, investigates its cache strategy in depth, and validates the theoretically analyzed cache strategy through experiments. Comparative experiments with and without caching confirm that the GlusterFS cache strategy improves the read performance of the file system.

  2. Joshua tree (Yucca brevifolia) seeds are dispersed by seed-caching rodents

    Science.gov (United States)

    Vander Wall, S. B.; Esque, T.; Haines, D.; Garnett, M.; Waitman, B.A.

    2006-01-01

    Joshua tree (Yucca brevifolia) is a distinctive and charismatic plant of the Mojave Desert. Although floral biology and seed production of Joshua tree and other yuccas are well understood, the fate of Joshua tree seeds has never been studied. We tested the hypothesis that Joshua tree seeds are dispersed by seed-caching rodents. We radioactively labelled Joshua tree seeds and followed their fates at five source plants in Potosi Wash, Clark County, Nevada, USA. Rodents made a mean of 30.6 caches, usually within 30 m of the base of source plants. Caches contained a mean of 5.2 seeds buried 3-30 mm deep. A variety of rodent species appears to have prepared the caches. Three of the 836 Joshua tree seeds (0.4%) cached germinated the following spring. Seed germination using rodent exclosures was nearly 15%. More than 82% of seeds in open plots were removed by granivores, and neither microsite nor supplemental water significantly affected germination. Joshua tree produces seeds in indehiscent pods or capsules, which rodents dismantle to harvest seeds. Because there is no other known means of seed dispersal, it is possible that the Joshua tree-rodent seed dispersal interaction is an obligate mutualism for the plant.

  3. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is implemented, which avoids the time-consuming linearization process, to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages, which would cause severe cache conflicts within a time slot, to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of system energy profit for different MMU page sizes as well as Time Slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.

  4. A Way Memoization Technique for Reducing Power Consumption of Caches in Application Specific Integrated Processors

    CERN Document Server

    Ishihara, Tohru

    2011-01-01

    This paper presents a technique for eliminating redundant cache-tag and cache-way accesses to reduce power consumption. The basic idea is to keep a small number of Most Recently Used (MRU) addresses in a Memory Address Buffer (MAB) and to omit redundant tag and way accesses when there is a MAB-hit. Since the approach keeps only tag and set-index values in the MAB, the energy and area overheads are relatively small even for a MAB with a large number of entries. Furthermore, the approach does not sacrifice the performance. In other words, neither the cycle time nor the number of executed cycles increases. The proposed technique has been applied to Fujitsu VLIW processor (FR-V) and its power saving has been estimated using NanoSim. Experiments for 32kB 2-way set associative caches show the power consumption of I-cache and D-cache can be reduced by 40% and 50%, respectively.
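The MAB mechanism can be sketched behaviorally: keep a small number of most recently used (tag, set-index) pairs, and skip the cache-tag and way lookup whenever an access matches one of them. The class below is a hypothetical software model of that idea (field widths and entry count are chosen arbitrarily), not the FR-V implementation.

```python
from collections import deque

class MemoryAddressBuffer:
    """Model of way memoization: on a MAB hit the tag comparison and
    way access can be elided, since the line is known to be cached in
    the remembered way; on a miss the full lookup proceeds as usual."""

    def __init__(self, entries=4, offset_bits=5, index_bits=7):
        # 2**offset_bits bytes per line, 2**index_bits sets (illustrative).
        self.offset_bits = offset_bits
        self.index_bits = index_bits
        self.buffer = deque(maxlen=entries)   # MRU entry kept at the left

    def _tag_and_index(self, address):
        line = address >> self.offset_bits
        return (line >> self.index_bits, line & ((1 << self.index_bits) - 1))

    def lookup(self, address):
        key = self._tag_and_index(address)
        if key in self.buffer:
            self.buffer.remove(key)
            self.buffer.appendleft(key)       # refresh MRU position
            return True                       # MAB hit: tag/way access elided
        self.buffer.appendleft(key)
        return False                          # full tag comparison needed
```

Two accesses within the same 32-byte line produce a MAB hit on the second access, which is exactly the redundant tag access the technique eliminates.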

  5. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    Directory of Open Access Journals (Sweden)

    Zhaohui Luo

    2017-05-01

    Full Text Available Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm for providing caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications in mobile networks with limited backhaul capacity. Most existing studies focus on cache allocation, mechanism design and coding design for caching. However, supplying a MEC server (MECS) with fixed, uninterrupted grid power is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the energy consumption problem of the MECS in cellular networks. Given the average download latency constraints, we take the MECS's energy consumption, backhaul capacities and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine a near-optimal caching placement and obtain better performance in terms of energy efficiency gains compared with conventional caching placement strategies. In particular, it is shown that the proposed scheme can significantly reduce the joint cost when backhaul capacity is low.
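As a point of comparison for the paper's genetic-algorithm solver, a conventional baseline is a greedy placement that caches the contents with the highest popularity per unit of storage until the server's capacity runs out. The sketch below is such a baseline; the function name and inputs are illustrative assumptions, not the paper's formulation.

```python
def greedy_cache_placement(popularity, sizes, capacity):
    """Greedy baseline for caching placement: rank contents by
    popularity per unit of storage and cache them in that order
    while they still fit within the MEC server's capacity.
    Returns the sorted indices of the cached contents."""
    order = sorted(range(len(popularity)),
                   key=lambda i: popularity[i] / sizes[i], reverse=True)
    cached, used = [], 0
    for i in order:
        if used + sizes[i] <= capacity:
            cached.append(i)
            used += sizes[i]
    return sorted(cached)
```

A metaheuristic like the paper's genetic algorithm explores placements this greedy rule cannot reach, which matters once energy cost and latency constraints couple the choices.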

  6. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    Science.gov (United States)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, CPU and GPU processors are integrated on the same chip, which poses a new challenge to last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently, accessing the last-level cache. CPU and GPU have different memory access characteristics, and therefore differ in their sensitivity to last-level cache (LLC) capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. On the contrary, GPU applications can tolerate increased memory access latency when there is sufficient thread-level parallelism. Taking into account the memory latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving most of the LLC space to CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache-sensitive and the GPU application is insensitive to the cache, the overall performance of the system is improved significantly.

  7. Molecular cloning and characterization of Vigna mungo processing enzyme 1 (VmPE-1), an asparaginyl endopeptidase possibly involved in post-translational processing of a vacuolar cysteine endopeptidase (SH-EP).

    Science.gov (United States)

    Okamoto, T; Minamikawa, T

    1999-01-01

    Asparaginyl endopeptidase is a cysteine endopeptidase that has strict substrate specificity toward the carboxy side of asparagine residues. Vigna mungo processing enzyme 1, termed VmPE-1, occurs in the cotyledons of germinated seeds of V. mungo, and is possibly involved in the post-translational processing of a vacuolar cysteine endopeptidase, designated SH-EP, which degrades seed storage protein. VmPE-1 also showed substrate specificity toward asparagine residues, and its enzymatic activity was inhibited by NEM but not by E-64. In addition, purified VmPE-1 was able to process the recombinant SH-EP precursor to its intermediate in vitro. cDNA clones for VmPE-1 and its homologue, named VmPE-1A, were identified and sequenced, and their expression in the cotyledons of V. mungo seedlings and other organs was investigated. VmPE-1 mRNA and SH-EP mRNA were expressed in germinated seeds at the same stage of germination, although the enzymatic activity of VmPE-1 rose prior to that of SH-EP. The level of VmPE-1A mRNA continued to increase as germination proceeded. In roots, stems and leaves of fully grown plants, and in hypocotyls, VmPE-1 and VmPE-1A were expressed only at low levels. We discuss possible functions of VmPE-1 and VmPE-1A in the cotyledons of germinated seeds.

  8. [Effects of Pinus armandii seed size on rodent caching behavior and its spatio-temporal variations].

    Science.gov (United States)

    Chen, Fan; Chen, Jin

    2011-08-01

    Pinus armandii, a native pine species, has large (about 300 mg), wingless seeds, and is distributed across central to western China at altitudes of 1000-3300 m. To determine how seed size affects rodent caching behavior, tagged-seed releasing and tracking experiments were conducted at 3 sites in Northwest Yunnan province in 2006 and 2007. Our data indicated that for all sites and both years, compared with smaller seeds, the proportions of cached large seeds were significantly higher, whereas the proportions consumed were significantly lower. Meanwhile, the mean and maximum caching distances were also significantly greater for large seeds. Seed fate differed between the two years and among the three sites, as they had different rodent community compositions.

  9. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System.

    Science.gov (United States)

    Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi

    2015-01-01

    WiMAX networks are the most suitable for E-Learning through their Broadcast and Multicast Services in rural areas. Authentication of users is carried out by an AAA server in WiMAX. In E-Learning systems the users must be forced to perform reauthentication to overcome the session hijacking problem. The reauthentication of users introduces frequent delays in data access, which is crucial for delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, it reduces the reauthentication delay by 50%.
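The key-caching idea can be sketched as a TTL-bounded credential cache in front of the AAA server: the first authentication performs the full AAA exchange, and reauthentications within the TTL are served locally, avoiding the round trip that causes the delay. This is a hypothetical miniature (names like `KeyCache` and `aaa_lookup` are invented), not the paper's protocol.

```python
import time

class KeyCache:
    """Cache the key derived from a full AAA exchange so that
    reauthentications within the TTL skip the AAA round trip."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.cache = {}    # user -> (key, expiry time)

    def reauthenticate(self, user, aaa_lookup, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(user)
        if entry and entry[1] > now:
            return entry[0], "cache"        # fast path: no AAA round trip
        key = aaa_lookup(user)              # slow path: full AAA exchange
        self.cache[user] = (key, now + self.ttl)
        return key, "aaa"
```

The TTL bounds how long a hijacked session could reuse a cached credential, which is the storage-versus-delay trade-off the abstract describes.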

  10. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array [...], and supports range queries in worst case O(logB n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n + O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache-oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory.
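    The van Emde Boas layout named in the abstract stores a complete binary tree in an array by recursively splitting it into a top subtree and bottom subtrees, so any root-to-leaf path crosses only O(logB n) memory blocks regardless of the block size B. A minimal sketch, assuming BFS node numbering (root = 1) and one of several valid split choices:

```python
def veb_layout(height):
    """Return the BFS indices of a complete binary tree of the given height,
    listed in van Emde Boas (recursive blocked) memory order."""
    def layout(root, h):
        if h == 1:
            return [root]
        top_h = h // 2                 # split: top subtree, then bottom subtrees
        bot_h = h - top_h
        order = layout(root, top_h)
        # nodes at depth top_h below root: the roots of the bottom subtrees
        frontier = [root]
        for _ in range(top_h):
            frontier = [c for n in frontier for c in (2 * n, 2 * n + 1)]
        for bottom_root in frontier:
            order += layout(bottom_root, bot_h)
        return order
    return layout(1, height)
```

    For height 3 this yields `[1, 2, 4, 5, 3, 6, 7]`: the root, then each child with its own subtree stored contiguously.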

  11. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range counting queries. This class includes three-sided range counting in the plane, 3-d dominance counting, and 3-d halfspace range counting. The constructed data structures use linear space and answer queries in the optimal query bound of O(logB(N/K)) block transfers in the worst case, where K is the number of points in the query range. As a corollary, we also obtain the first approximate 3-d halfspace range counting and 3-d dominance counting data structures with a worst-case query time of O(log(N/K)) in internal memory. An easy but important consequence of our main result is the existence of -space cache...

  12. A Network-Aware Distributed Storage Cache for Data Intensive Environments

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, B.L.; Lee, J.R.; Johnston, W.E.; Crowley, B.; Holding, M.

    1999-12-23

    Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data at multiple sites around the world. The technologies, the middleware services, and the architectures that are used to build useful high-speed, wide area distributed systems, constitute the field of data intensive computing. In this paper the authors describe an architecture for data intensive applications where they use a high-speed distributed data cache as a common element for all of the sources and sinks of data. This cache-based approach provides standard interfaces to a large, application-oriented, distributed, on-line, transient storage system. They describe their implementation of this cache, how they have made it network aware, and how they do dynamic load balancing based on the current network conditions. They also show large increases in application throughput by access to knowledge of the network conditions.

  13. Application of Web Cache in IPTV System

    Institute of Scientific and Technical Information of China (English)

    张建标; 林涛

    2007-01-01

    A carrier-grade IPTV system must deliver a rich program lineup to users while also improving its own performance. This paper describes the IPTV system architecture and existing Web Cache technology, and identifies the bottleneck limiting EPG performance through analysis of test data. Based on the characteristics of IPTV systems, it proposes EPG_Cache, a Web cache architecture suited to IPTV, which improves EPG response speed by caching part of the data.

  15. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola Armstrong

    2012-12-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (the retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3 or 4 cache sites, across retention intervals of 1, 10 and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items, for retention intervals of up to one minute, without training.

  16. Simplifying and speeding the management of intra-node cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  17. Design and Implementation of a Proxy Caching System for Streaming Media

    Institute of Scientific and Technical Information of China (English)

    Tan Jin; Yu Sheng-sheng; Zhou Jing-li

    2004-01-01

    With the widespread use of streaming media applications on the Internet, a significant change in Internet workload will be provoked. Caching is one of the techniques applied to enhance the scalability of streaming systems and reduce the workload of the server/network. Aiming at the characteristics of community broadband networks, we propose a popularity-based server-proxy caching strategy for streaming media, and implement a prototype of a streaming proxy cache based on this strategy, using RTSP as the control protocol and RTP for content transport. This system can play a role in decreasing server load, reducing the traffic from streaming server to proxy, and improving the start-up latency of the client.

  18. A Caching Strategy for Streaming Media

    Institute of Scientific and Technical Information of China (English)

    谭劲; 余胜生; 周敬利

    2004-01-01

    It is expected that by 2003 continuous media will account for more than 50% of the data available on origin servers, which will provoke a significant change in Internet workload. Due to the high bandwidth requirements and the long-lived nature of digital video, streaming server loads and network bandwidths have proven to be major limiting factors. Aiming at the characteristics of broadband networks in residential areas, this paper proposes a popularity-based server-proxy caching strategy for streaming media. According to a stream's popularity on the streaming server and on the proxy, this strategy caches the content of the stream partially or completely. The paper also proposes two formulas that calculate the popularity coefficient of a stream on the server and on the proxy, together with a cache replacement policy. As expected, this strategy decreases the server load, reduces the traffic from streaming server to proxy, and improves client start-up latency.
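    The popularity-based decision described above can be sketched as a simple classifier over request shares. The thresholds and the share-based popularity coefficient below are illustrative stand-ins for the paper's two formulas:

```python
def caching_plan(request_counts, hi=0.10, lo=0.02):
    """Decide per-stream caching: 'full', 'prefix', or 'none'.
    hi/lo thresholds and the share-based coefficient are illustrative."""
    total = sum(request_counts.values())
    plan = {}
    for media, hits in request_counts.items():
        popularity = hits / total          # simplified popularity coefficient
        if popularity >= hi:
            plan[media] = "full"           # cache the whole stream
        elif popularity >= lo:
            plan[media] = "prefix"         # cache only the start-up segment
        else:
            plan[media] = "none"
    return plan
```

    Caching only the prefix of moderately popular streams still improves client start-up latency while using far less proxy storage than full caching.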

  19. Impact of traffic mix on caching performance in a content-centric network

    CERN Document Server

    Fricker, Christine; Roberts, James; Sbihi, Nada

    2012-01-01

    For a realistic traffic mix, we evaluate the hit rates attained in a two-layer cache hierarchy designed to reduce Internet bandwidth requirements. The model identifies four main types of content: web, file sharing, user-generated content and video on demand, distinguished in terms of their traffic shares, their population and object sizes, and their popularity distributions. Results demonstrate that caching VoD in access routers offers a highly favorable bandwidth-memory tradeoff, but that the other types of content would likely be more efficiently handled in very large capacity storage devices in the core. Evaluations are based on a simple approximation for LRU cache performance that proves highly accurate in relevant configurations.
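    The "simple approximation for LRU cache performance" used in this line of work is commonly the Che approximation: solve for the characteristic time t_C at which the expected number of distinct objects requested equals the cache size C, then take hit_i = 1 − exp(−q_i·t_C). A sketch under an independent reference model with Zipf popularity (the object count and Zipf exponent below are illustrative):

```python
import math

def che_hit_rate(popularities, cache_size):
    """Overall LRU hit probability via the Che approximation (IRM assumption).
    Requires cache_size < number of objects. Solves
    sum_i (1 - exp(-q_i * tC)) = cache_size for tC by bisection."""
    def filled(t):
        return sum(1.0 - math.exp(-q * t) for q in popularities)
    lo_t, hi_t = 0.0, 1.0
    while filled(hi_t) < cache_size:      # grow bracket until it contains tC
        hi_t *= 2.0
    for _ in range(80):                   # bisection on the monotone filled()
        mid = (lo_t + hi_t) / 2.0
        if filled(mid) < cache_size:
            lo_t = mid
        else:
            hi_t = mid
    tC = (lo_t + hi_t) / 2.0
    return sum(q * (1.0 - math.exp(-q * tC)) for q in popularities)

# Illustrative workload: Zipf(0.8) popularity over 1000 objects, normalized
N, alpha = 1000, 0.8
weights = [1.0 / (i + 1) ** alpha for i in range(N)]
Z = sum(weights)
q = [w / Z for w in weights]
```

    The approximation is cheap to evaluate for any cache size, which is what makes bandwidth-memory tradeoff studies like the one above tractable.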

  20. Advantages of masting in European beech: timing of granivore satiation and benefits of seed caching support the predator dispersal hypothesis.

    Science.gov (United States)

    Zwolak, Rafał; Bogdziewicz, Michał; Wróbel, Aleksandra; Crone, Elizabeth E

    2016-03-01

    The predator satiation and predator dispersal hypotheses provide alternative explanations for masting. Both assume satiation of seed-eating vertebrates. They differ in whether satiation occurs before or after seed removal and caching by granivores (predator satiation and predator dispersal, respectively). This difference is largely unrecognized, but it is demographically important because cached seeds are dispersed and often have a microsite advantage over nondispersed seeds. We conducted rodent exclosure experiments in two mast and two nonmast years to test predictions of the predator dispersal hypothesis in our study system of yellow-necked mice (Apodemus flavicollis) and European beech (Fagus sylvatica). Specifically, we tested whether the fraction of seeds removed from the forest floor is similar during mast and nonmast years (i.e., lack of satiation before seed caching), whether masting decreases the removal of cached seeds (i.e., satiation after seed storage), and whether seed caching increases the probability of seedling emergence. We found that masting did not result in satiation at the seed removal stage. However, masting decreased the removal of cached seeds, and seed caching dramatically increased the probability of seedling emergence relative to noncached seeds. European beech thus benefits from masting through the satiation of scatterhoarders that occurs only after seeds are removed and cached. Although these findings do not exclude other evolutionary advantages of beech masting, they indicate that fitness benefits of masting extend beyond the most commonly considered advantages of predator satiation and increased pollination efficiency.

  1. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  2. Prefetching J+-Tree: A Cache-Optimized Main Memory Database Index Structure

    Institute of Scientific and Technical Information of China (English)

    Hua Luan; Xiao-Yong Du; Sha Wang

    2009-01-01

    As the speed gap between main memory and modern processors continues to widen, cache behavior becomes more important for main memory database systems (MMDBs). Indexing is a key component of MMDBs. Unfortunately, the predominant indexes -- B+-trees and T-trees -- have been shown to utilize the cache poorly, which has triggered the development of many cache-conscious indexes, such as CSB+-trees and pB+-trees. Most of these cache-conscious indexes are variants of conventional B+-trees, and have better cache performance than B+-trees. In this paper, we develop a novel J+-tree index, inspired by the Judy structure, an associative array data structure, and propose a more cache-optimized index -- the Prefetching J+-tree (pJ+-tree), which applies prefetching to the J+-tree to accelerate range scan operations. The J+-tree stores all keys in its leaf nodes and keeps the reference values of leaf nodes in a Judy structure, which lets the J+-tree not only retain the advantages of Judy (such as fast single-value search) but also outperform it in other respects. For example, J+-trees achieve better performance on range queries than Judy. The pJ+-tree index exploits prefetching techniques to further improve the cache behavior of J+-trees and yields a speedup of 2.0 on range scans. Compared with B+-trees, CSB+-trees, pB+-trees and T-trees, our extensive experimental study shows that pJ+-trees provide better performance in both time (search, scan, update) and space.

  3. KDS-CM: A Cache Mechanism Based on Top-K Data Source for Deep Web Query

    Institute of Scientific and Technical Information of China (English)

    KOU Yue; SHEN Derong; YU Ge; LI Dong; NIE Tiezheng

    2007-01-01

    Caching is an important technique to enhance the efficiency of query processing. Unfortunately, traditional caching mechanisms are not efficient for the deep Web because of storage space and dynamic maintenance limitations. In this paper, we present a cache mechanism based on Top-K data sources (KDS-CM), rather than on result records, for deep Web query. By integrating techniques from IR and Top-K query processing, a data reorganization strategy is presented to model KDS-CM. We also propose measures for cache management and optimization to improve cache performance effectively. Experimental results show the benefits of KDS-CM in execution cost and dynamic maintenance when compared with various alternative strategies.

  4. Functional Analysis and Application of the VM600 Turbine Supervisory Instrumentation

    Institute of Scientific and Technical Information of China (English)

    刘海燕

    2009-01-01

    This paper analyses the functions, basic composition and configuration of the Swiss VM600 instrumentation used in the turbine supervisory system of the Yamuna power plant in India, and briefly describes its service record. The system configuration proved appropriate: during unit operation, the system correctly reflected the turbine's operating parameters, and no signal anomalies occurred.

  5. Implementació d'una Cache per a un processador MIPS d'una FPGA

    OpenAIRE

    Riera Villanueva, Marc

    2013-01-01

    First, the MIPS architecture, memory hierarchy and the functioning of the cache will be explained briefly. Then, the design and implementation of a memory hierarchy for a MIPS processor implemented in VHDL on an FPGA will be explained.

  6. Reader set encoding for directory of shared cache memory in multiprocessor system

    Science.gov (United States)

    Ahn, Dnaiel; Ceze, Luis H.; Gara, Alan; Ohmacht, Martin; Xiaotong, Zhuang

    2014-06-10

    In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated the access. The directory includes a dynamic reader set encoding, indicating which speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify the particular threads that have read the line.
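    The reader-set bitset can be illustrated with an integer per directory entry, where bit i records that speculative thread i has read the line. This is a behavioral sketch of the idea, not the patented hardware encoding:

```python
class DirectoryEntry:
    """Per-cache-line directory state: a bitset of speculative reader threads."""

    def __init__(self):
        self.readers = 0                     # bit i set => thread i read the line

    def record_read(self, thread_id):
        self.readers |= (1 << thread_id)

    def write_conflicts(self, writer_id):
        """A speculative write conflicts with every reader other than the writer.
        Returns the conflicting thread ids."""
        others = self.readers & ~(1 << writer_id)
        return [t for t in range(others.bit_length()) if (others >> t) & 1]
```

    A single AND/mask over the bitset finds all conflicts at once, which is why such encodings suit hardware conflict checking.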

  7. Education for sustainability and environmental education in National Geoparks. EarthCaching - a new method?

    Science.gov (United States)

    Zecha, Stefanie; Regelous, Anette

    2017-04-01

    National Geoparks are restricted areas incorporating educational resources of great importance for promoting education for sustainable development and mobilizing knowledge inherent to the Earth Sciences. Different methods can be used to implement education for sustainability. Here we present possibilities for National Geoparks to support sustainability with a focus on new media and EarthCaches, based on the data set of the "EarthCachers International EarthCaching" conference in Goslar in October 2015. Using an empirical study of our own design, we collected up-to-date information about the environmental awareness of EarthCachers. The data set was analyzed using SPSS and statistical methods. Here we present the results and their consequences for National Geoparks.

  8. Optimization and Analysis of Probabilistic Caching in $N$-tier Heterogeneous Networks

    OpenAIRE

    Li, Kuikui; Yang, Chenchen; Chen, Zhiyong; Tao, Meixia

    2016-01-01

    In this paper, we study probabilistic caching for an N-tier wireless heterogeneous network (HetNet) using stochastic geometry. A general and tractable expression for the successful delivery probability (SDP) is first derived. We then optimize the caching probabilities to maximize the SDP in the high signal-to-noise ratio (SNR) region. The problem is proved to be convex and solved efficiently. We next establish an interesting connection between N-tier HetNets and single-tier networks. Un...
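    The probabilistic-caching model can be sketched for a single tier: each base station caches file f independently with probability p_f, subject to the standard placement constraint Σp_f ≤ cache size, and a request for f is served locally with probability p_f. The Bernoulli placement and the hit expression below are simplifications of the paper's stochastic-geometry model:

```python
import random

def place_contents(cache_probs, cache_size, rng):
    """One base station's cache: keep file f with probability p_f, subject to
    sum(p_f) <= cache_size (cache size measured in files)."""
    assert sum(cache_probs.values()) <= cache_size + 1e-9
    return {f for f, p in cache_probs.items() if rng.random() < p}

def expected_local_hit(popularity, cache_probs):
    """Simplified delivery surrogate: a request for f hits the local cache
    with probability p_f, so E[hit] = sum_f pop_f * p_f."""
    return sum(pop * cache_probs.get(f, 0.0) for f, pop in popularity.items())
```

    Note that maximizing E[hit] under the Σp_f budget does not simply mean caching the most popular files deterministically; in multi-cell settings, spreading probability mass across files improves the chance that *some* nearby station holds a requested file.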

  9. Antitumor activity of the two epipodophyllotoxin derivatives VP-16 and VM-26 in preclinical systems: a comparison of in vitro and in vivo drug evaluation

    DEFF Research Database (Denmark)

    Jensen, P B; Roed, H; Skovsgaard, T

    1990-01-01

    The epipodophyllotoxines VP-16 and VM-26 are chemically closely related. VM-26 has been found to be considerably more potent than VP-16 in vitro in a number of investigations. Although the drugs have been known for greater than 20 years, they have not been compared at clearly defined equitoxic doses on an optimal schedule in vivo, and it has not been clarified whether a therapeutic difference exists between them. A prolonged schedule is optimal for both drugs; accordingly, we determined the toxicity in mice using a 5-day schedule. The dose killing 10% of the mice (LD10) was 9.4 mg... Cross-resistance between the two drugs suggests that they have an identical antineoplastic spectrum. VM-26 was more potent than VP-16 in vitro; however, this was not correlated with a therapeutic advantage for VM-26 over VP-16 in vivo.

  10. [Russian oxygen generation system "Elektron-VM": hydrogen content in electrolytically produced oxygen for breathing by International Space Station crews].

    Science.gov (United States)

    Proshkin, V Yu; Kurmazenko, E A

    2014-01-01

    The article presents the particulars of hydrogen content in electrolysis oxygen produced aboard the ISS Russian segment by the oxygen generator "Elektron-VM" (SGK) for crew breathing. Hydrogen content was estimated both during SGK operation in the ISS RS and during ground life tests. According to the investigation of hydrogen sources, the primary path for H2 appearing in the oxygen is diffusion through the porous diaphragm separating the electrolytic-cell cathode and anode chambers. The effectiveness of hydrogen oxidation in the SGK reheating unit was evaluated.

  11. Pyrroloquinoline Quinone-Dependent Cytochrome Reduction in Polyvinyl Alcohol-Degrading Pseudomonas sp. Strain VM15C

    OpenAIRE

    1989-01-01

    A polyvinyl alcohol (PVA) oxidase-deficient mutant of Pseudomonas sp. strain VM15C, strain ND1, was shown to possess PVA dehydrogenase, in which pyrroloquinoline quinone (PQQ) functions as a coenzyme. The mutant grew on PVA and required PQQ for utilization of PVA as an essential growth factor. Incubation of the membrane fraction of the mutant with PVA caused cytochrome reduction of the fraction. Furthermore, it was found that in spite of the presence of PVA oxidase, the membrane fraction of s...

  12. Capacity Managed Adaptive Videostreaming Based on Peer Cache Adaptation Mechanism

    Directory of Open Access Journals (Sweden)

    Seema Safar

    2014-04-01

    To meet users' demands for smooth video, clear audio, and performance levels guaranteed by contract, an innovative approach to these challenges in streaming media is considered. The focus is a new idea: boosting the capacity of seed servers to serve more receivers in peer-to-peer data streaming systems; these servers complement the limited upload capacity offered by peers. A peer's request for a data segment is handled by the server or by another peer, each peer having a local cache that temporarily stores requested data so that it can later be fetched directly by a nearby node without accessing the server, thereby improving data rendering performance. The capacity of each peer's cache can be sized according to the popularity of the segments it holds. Once peers are caching, data segment requests are resolved with a distributed hash table search strategy, and seed servers boost each peer's capacity based on a utility-to-cost factor computed each time until it exceeds the seeding capacity. In addition, selfish peers connected to the system can be traced to detect unfaithful peers. This system efficiently allocates peer resources while respecting server bandwidth constraints.

  13. An ESL Approach for Energy Consumption Analysis of Cache Memories in SoC Platforms

    Directory of Open Access Journals (Sweden)

    Abel G. Silva-Filho

    2011-01-01

    The design of complex circuits such as SoCs presents two great challenges to designers. One is speeding up the modeling of system functionality; the second is implementing the system in an architecture that meets performance and power consumption requirements. Thus, developing new high-level specification mechanisms that reduce design effort through automatic architecture exploration is a necessity. This paper proposes an Electronic-System-Level (ESL) approach for system modeling and cache energy consumption analysis of SoCs, called PCacheEnergyAnalyzer. It takes as input a high-level UML 2.0 profile model of the system and generates a simulation model of a multicore platform that can be analyzed for cache tuning. PCacheEnergyAnalyzer performs static/dynamic energy consumption analysis of caches on platforms that may have different processors. Architecture exploration is achieved by letting designers choose different processors for platform generation and different mechanisms for cache optimization. PCacheEnergyAnalyzer has been validated with several applications from the MiBench, MediaBench, and PowerStone benchmarks, and results show that it provides analysis with reduced simulation effort.

  14. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers...

  15. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy va...

  16. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and industrial practitioners. In fact, the very notion of introducing randomness and probabilities into time-critical systems has caused strenuous debates, owing to the apparent clash between this idea and the strictly deterministic view traditionally held of those systems. A paper recently published in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should also be provided. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances in the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, causing them, when used in conjunction with PTA, to be a serious competitor to conventional designs.
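    The behavior under debate can be illustrated in miniature: with random replacement, the miss count of a fixed access trace becomes a random variable, and PTA reasons about the distribution of such contributions to execution time rather than a single deterministic worst case. A toy fully associative model, not tied to any particular processor's cache:

```python
import random

def run_trace(trace, num_lines, rng):
    """Fully associative cache with random replacement: return the miss count.
    Randomized eviction makes the miss count a random variable, which is the
    property probabilistic timing analysis builds on."""
    cache = []
    misses = 0
    for addr in trace:
        if addr in cache:
            continue                          # hit
        misses += 1
        if len(cache) < num_lines:
            cache.append(addr)
        else:
            cache[rng.randrange(num_lines)] = addr   # evict a random line
    return misses

def miss_distribution(trace, num_lines, runs=1000, seed=0):
    """Empirical distribution of miss counts over repeated randomized runs."""
    rng = random.Random(seed)
    return [run_trace(trace, num_lines, rng) for _ in range(runs)]
```

    When the working set fits, every run produces the same miss count; when it does not, the spread of the distribution is exactly what PTA bounds probabilistically.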

  17. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol.

  18. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol, which this paper analyses by model checking.

  19. Killing of a muskox, Ovibos moschatus, by two wolves, Canis lupus, and subsequent caching

    Science.gov (United States)

    Mech, L.D.; Adams, L.G.

    1999-01-01

    The killing of a cow Muskox (Ovibos moschatus) by two Wolves (Canis lupus) in 5 minutes during summer on Ellesmere Island is described. After two of the four feedings observed, one Wolf cached a leg and regurgitated food as far as 2.3 km away and probably farther. The implications of this behavior for deriving food-consumption estimates are discussed.

  20. Multiple Servers - Queue Model for Agent Based Technology in Cache Consistence Maintenance of Mobile Environment

    Directory of Open Access Journals (Sweden)

    G.Shanmugarathinam

    2013-01-01

    Caching is an important technique in mobile computing: frequently accessed data is stored on mobile clients to avoid network traffic and improve performance. As the number of mobile users in a mobile computing environment increases, clients requesting updates often find the server busy and must wait a long time, which makes cache consistency maintenance difficult for both the client and the server. This paper proposes a technique based on a queuing system consisting of one or more servers that provide services to arriving mobile hosts using agent-based technology. The service mechanism of the queuing system is specified by the number of servers, each server having its own queue; agent-based technology maintains cache consistency between the client and the server. This model saves wireless bandwidth, reduces network traffic, and reduces the workload on the server. The simulation results were compared with the previous technique, and the proposed model shows significantly better performance than the earlier approach.
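    The multiple-servers-with-own-queues arrangement can be sketched with a join-the-earliest-free-server dispatch rule; the deterministic service time and the dispatch policy below are simplifications of the paper's queuing model:

```python
def schedule(arrivals, service_time, num_servers):
    """Each server has its own queue; each request joins the server that can
    start serving it soonest. Returns per-request waiting times.
    `arrivals` must be sorted by arrival time."""
    free_at = [0.0] * num_servers            # when each server next becomes free
    waits = []
    for t in arrivals:
        s = min(range(num_servers), key=lambda i: free_at[i])
        start = max(t, free_at[s])           # wait if the chosen server is busy
        waits.append(start - t)
        free_at[s] = start + service_time
    return waits
```

    Adding servers shrinks the waiting times, which is the lever the paper's model uses to keep clients from waiting on a busy consistency server.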

  1. Fast and Precise Cache Performance Estimation for Out-Of-Order Execution

    NARCIS (Netherlands)

    Douma, R.J.; Altmeyer, S.; Pimentel, A.D.

    2015-01-01

    Design space exploration (DSE) is a key ingredient of system-level design, enabling designers to quickly prune the set of possible designs and determine, e.g., the number of processing cores, the mapping of application tasks to cores, and the core configuration such as the cache organization. Hi...

  2. A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network

    Directory of Open Access Journals (Sweden)

    Yang Li

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches in a coordinated manner, in order to increase the overall mobile network capacity and support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a RAN. Request filtration makes the best use of the limited bandwidth and in turn ensures the good performance of the coordinated caching. Moreover, storage at the mobile devices is also considered, to further reduce backhaul traffic and improve the users' experience. In addition, we derive the optimal cache division with the aim of reducing the average user-perceived latency. The simulation results show that the proposed scheme outperforms existing algorithms.

  3. Turbidity and Total Suspended Solids on the Lower Cache River Watershed, AR.

    Science.gov (United States)

    Rosado-Berrios, Carlos A; Bouldin, Jennifer L

    2016-06-01

    The Cache River Watershed (CRW) in Arkansas is part of one of the largest remaining bottomland hardwood forests in the US. Although wetlands are known to improve water quality, the Cache River is listed as impaired due to sedimentation and turbidity. This study measured turbidity and total suspended solids (TSS) in seven sites of the lower CRW; six sites were located on the Bayou DeView tributary of the Cache River. Turbidity and TSS levels ranged from 1.21 to 896 NTU, and 0.17 to 386.33 mg/L respectively and had an increasing trend over the 3-year study. However, a decreasing trend from upstream to downstream in the Bayou DeView tributary was noted. Sediment loading calculated from high precipitation events and mean TSS values indicate that contributions from the Cache River main channel was approximately 6.6 times greater than contributions from Bayou DeView. Land use surrounding this river channel affects water quality as wetlands provide a filter for sediments in the Bayou DeView channel.

  4. A QoS Control Approach in Differentiated Web Caching Service

    Directory of Open Access Journals (Sweden)

    Ang Gao

    2011-01-01

    Full Text Available As the heterogeneity of Web clients increases, differentiated service becomes an important issue, especially for e-commerce Web sites. Web caching, as a key accelerator on the Internet, plays an important role in alleviating the client-perceived delay. To meet the Service Level Agreement (SLA) for clients without excessively over-provisioning resources, this paper proposes and evaluates a novel framework for enforcing Proportional Hit Rate. The framework combines the implementation of an Isolated Cache Model with a control-theoretical approach to storage control. Through system identification, the linear model is identified as well as the controller. At every sampling time, by dynamically reallocating storage spaces for different Web classes, the controller operates to keep the relationship of the QoS metric among classes constant. The experimental results demonstrate that the proposed approach achieves differentiated caching service under the Greedy Dual Size Frequency (GDSF), Least Recently Used (LRU), and Least Frequently Used (LFU) cache replacement policies.
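
    The storage-reallocation loop this record describes can be sketched generically. The function below is not the paper's identified linear model; it is an invented integral-style controller that shifts cache space toward the premium class whenever the measured hit-rate ratio falls below its SLA target (all parameter names and gains are assumptions).

```python
# One sampling period of a proportional-hit-rate storage controller
# (illustrative sketch, not the paper's identified model).

def reallocate(space_premium, space_basic, hit_premium, hit_basic,
               target_ratio, gain=0.5, step_cap=10):
    """Return a new (premium, basic) storage split.

    The controller compares the measured hit-rate ratio against the
    SLA target and moves a bounded amount of cache space between the
    two classes to drive the error toward zero.
    """
    measured = hit_premium / max(hit_basic, 1e-9)
    error = target_ratio - measured          # >0: premium under-served
    delta = max(-step_cap, min(step_cap, gain * error * space_basic))
    delta = min(delta, space_basic)          # cannot take more than exists
    delta = max(delta, -space_premium)
    return space_premium + delta, space_basic - delta
```

    Once the measured ratio matches the target, the controller leaves the split unchanged, so total cache storage is conserved across sampling periods.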

  5. Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park

    Science.gov (United States)

    McEuen, Amy B.; Steele, Michael A.

    2012-01-01

    We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…

  6. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

    The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framew...

  7. Exploitation of pocket gophers and their food caches by grizzly bears

    Science.gov (United States)

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perdieridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  8. Optimizing VM allocation and data placement for data-intensive applications in cloud using ACO metaheuristic algorithm

    Directory of Open Access Journals (Sweden)

    T.P. Shabeera

    2017-04-01

    Full Text Available Nowadays data-intensive applications for processing big data are being hosted in the cloud. Since the cloud environment provides virtualized resources for computation, and data-intensive applications require communication between the computing nodes, the placement of Virtual Machines (VMs) and the location of data affect the overall computation time. The majority of the research work reported in the current literature considers the selection of physical nodes for placing data and VMs as independent problems. This paper proposes an approach which considers VM placement and data placement hand in hand. The primary objective is to reduce cross-network traffic and bandwidth usage, by placing the required number of VMs and data in Physical Machines (PMs) which are physically closer. The VM and data placement problem (referred to as the MinDistVMDataPlacement problem) is defined in this paper and has been proved to be NP-hard. This paper presents and evaluates a metaheuristic algorithm based on Ant Colony Optimization (ACO), which selects a set of adjacent PMs for placing data and VMs. Data is distributed in the physical storage devices of the selected PMs. According to the processing capacity of each PM, a set of VMs is placed on these PMs to process the data stored in them. We use simulation to evaluate our algorithm. The results show that the proposed algorithm selects PMs in close proximity, and the jobs executed in the VMs allocated by the proposed scheme outperform other allocation schemes.
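
    The ACO idea in this record can be illustrated with a toy version of the selection step: ants repeatedly pick a set of k PMs, biased by pheromone and by closeness to the PMs already chosen, and pheromone is reinforced on the best (smallest total pairwise distance) set found. The distance model, parameters, and update rule below are invented for illustration, not taken from the paper.

```python
import random

# Illustrative ACO-style sketch for selecting k mutually close PMs
# (a toy stand-in for the paper's MinDistVMDataPlacement heuristic).

def aco_select_pms(dist, k, ants=20, iters=50, evap=0.1, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    tau = [1.0] * n                       # pheromone per PM
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            chosen = []
            cand = list(range(n))
            while len(chosen) < k:
                # selection probability ~ pheromone / (1 + distance
                # to the PMs already chosen by this ant)
                w = [tau[j] / (1 + sum(dist[j][c] for c in chosen))
                     for j in cand]
                j = rng.choices(cand, weights=w)[0]
                chosen.append(j)
                cand.remove(j)
            cost = sum(dist[a][b] for a in chosen for b in chosen if a < b)
            if cost < best_cost:
                best, best_cost = sorted(chosen), cost
        tau = [t * (1 - evap) for t in tau]       # evaporation
        for j in best:                            # reinforce best set
            tau[j] += 1.0 / (1 + best_cost)
    return best, best_cost
```

    On a small instance where three PMs form a tight cluster and the rest are far away, the search converges on the cluster.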

  9. WriteSmoothing: Improving Lifetime of Non-volatile Caches Using Intra-set Wear-leveling

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL; Li, Dong [ORNL

    2014-01-01

    Driven by the trends of increasing core-count and bandwidth-wall problem, the size of last level caches (LLCs) has greatly increased. Since SRAM consumes high leakage power, researchers have explored use of non-volatile memories (NVMs) for designing caches as they provide high density and consume low leakage power. However, since NVMs have low write-endurance and the existing cache management policies are write variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present WriteSmoothing, a technique for mitigating intra-set write variation in NVM caches. WriteSmoothing logically divides the cache-sets into multiple modules. For each module, WriteSmoothing collectively records number of writes in each way for any of the sets. It then periodically makes most frequently written ways in a module unavailable to shift the write-pressure to other ways in the sets of the module. Extensive simulation results have shown that on average, for single and dual-core system configurations, WriteSmoothing improves cache lifetime by 2.17X and 2.75X, respectively. Also, its implementation overhead is small and it works well for a wide range of algorithm and system parameters.
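
    A minimal sketch of the mechanism described above, with invented structure: per-way write counters for one module, and a periodic epoch boundary that marks the most frequently written way unavailable so that write pressure shifts to the remaining ways of the module's sets.

```python
# Toy sketch of WriteSmoothing-style intra-set wear-leveling for one
# module (a group of cache sets sharing collective per-way counters).

class ModuleWearLeveler:
    def __init__(self, num_ways, period):
        self.writes = [0] * num_ways     # collective writes per way
        self.unavailable = None          # way excluded this epoch
        self.period = period             # writes per epoch
        self.tick = 0

    def pick_way(self):
        # A write goes to the least-written *available* way.
        avail = [w for w in range(len(self.writes)) if w != self.unavailable]
        return min(avail, key=lambda w: self.writes[w])

    def record_write(self):
        way = self.pick_way()
        self.writes[way] += 1
        self.tick += 1
        if self.tick % self.period == 0:
            # Epoch boundary: shift pressure away from the hottest way.
            self.unavailable = max(range(len(self.writes)),
                                   key=lambda w: self.writes[w])
        return way
```

    Even under a steady write stream, the per-way counters stay close together, which is the wear-leveling effect the paper quantifies as lifetime improvement.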

  10. Determination of the voltage drop in the old green line feeding furnaces VM1 and Panela 1; Determinacao da queda de tensao na antiga linha verde de alimentaco dos fornos VM1 e Panela1

    Energy Technology Data Exchange (ETDEWEB)

    Panunzio, Paulo Armando; Magalhaes Sobrinho, Pedro [UNESP, Guaratingueta, SP (Brazil); Manfrin, Antonio Carlos [Villares Metals S.A., SP(Brazil). Unidade Sumare

    2009-11-01

    The objective of this work is to demonstrate the electrical energy gains obtained by replacing the overhead electric cables of the so-called 'green line' feeding arc furnaces VM1 and Panela 1 of Villares Metals - Unit Sumare with underground cables. Further gains in safety, maintenance, environment and power quality can be mentioned, but were not measured. The replacement was motivated by the existing line having reached the limit of its useful life, with restrictions on load increases (expansion) and unwanted maintenance stoppages. It is emphasized that only the power loss due to the ohmic resistance of the bare aluminum conductor used in the old (green) line is determined here. The savings demonstrated in this study are R$ 3,313.21 and 16.339752 MWh; even if the monetary value is significantly reduced, the spare MWh can be used within the company where extensions are necessary. (author)

  11. Accurate and Simplified Prediction of AVF for Delay and Energy Efficient Cache Design

    Institute of Scientific and Technical Information of China (English)

    An-Guo Ma; Yu Cheng; Zuo-Cheng Xing

    2011-01-01

    With continuous technology scaling, on-chip structures are becoming more and more susceptible to soft errors. The architectural vulnerability factor (AVF) has been introduced to quantify the architectural vulnerability of on-chip structures to soft errors. Recent studies have found that designing soft error protection techniques with awareness of AVF is greatly helpful in achieving a tradeoff between performance and reliability for several structures (e.g., issue queue, reorder buffer). The cache is one of the components most susceptible to soft errors and is commonly protected with error correcting codes (ECC). However, protecting caches closer to the processor (e.g., the L1 data cache (L1D)) using ECC can incur high overhead, and protecting caches without accurate knowledge of their vulnerability characteristics may lead to over-protection. Therefore, designing AVF-aware ECC is attractive for designers to balance performance, power and reliability for caches, especially at an early design stage. In this paper, we improve the methodology of cache AVF computation and develop a new AVF estimation framework for soft error reliability analysis, based on SimpleScalar. We then characterize the dynamic vulnerability behavior of the L1D and detect the correlations between L1D AVF and various performance metrics. We propose to employ Bayesian additive regression trees to accurately model the variation of L1D AVF and to quantitatively explain the important effects of several key performance metrics on L1D AVF. Then, we employ the bump hunting technique to reduce the complexity of L1D AVF prediction and extract some simple selection rules based on several key performance metrics, thus enabling a simplified and fast estimation of L1D AVF. Based on this simplified and fast estimation, intervals of high L1D AVF can be identified online, enabling us to develop an AVF-aware ECC technique to reduce the overhead of ECC. Experimental results show that compared with traditional ECC technique

  12. YFNWR project report number 87-5: Beaver food cache survey, Yukon Flats National Wildlife Refuge, Alaska, 1986: Management study

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Beaver colonies remained relatively stable or increased slightly within the two survey areas as indicated through aerial food-cache surveys. The Lodge/Water Bodies...

  13. A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL; Li, Dong [ORNL

    2014-01-01

    Recent trends of CMOS scaling and increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help the readers gain insights into the emerging memory device technologies, and their potential use in designing future computing systems.

  14. Caching proxy server: technological comprehension and assimilation; Servidor proxy caché: comprensión y asimilación tecnológica

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Full Text Available Internet access providers usually include the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. For system administrators it is difficult to choose the configuration of the caching proxy server, since it is necessary to decide the values to be used for different variables. This article presents how the process of technological comprehension and assimilation of the caching proxy service, a service with high organizational impact, was approached. This article is also a product of the research project "Análisis de configuraciones de servidores proxy caché" ("Analysis of caching proxy server configurations"), in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  15. YFNWR project report number 85-3: Beaver food cache survey, Yukon Flats National Wildlife Refuge, Alaska: Management study

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The objective of the annual beaver food cache survey is to determine trends in the relative abundance of beaver in representative drainages within the Yukon Flats...

  16. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  17. Drosophila vitelline membrane assembly: A critical role for an evolutionarily conserved cysteine in the “VM domain” of sV23

    OpenAIRE

    Wu, T; Manogaran, A.L; Beauchamp, J.M.; Waring, G L

    2010-01-01

    The vitelline membrane (VM), the oocyte proximal layer of the Drosophila eggshell, contains four major proteins (VMPs) that possess a highly conserved “VM domain” which includes three precisely spaced, evolutionarily conserved, cysteines (CX7CX8C). Focusing on sV23, this study showed that the three cysteines are not functionally equivalent. While substitution mutations at the first (C123S) or third (C140S) cysteines were tolerated, females with a substitution at the second position (C131S) we...

  18. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high-leakage and low-density, researchers have explored use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from SPEC CPU2006 suite and HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
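
    The core EqualChance operation can be sketched for a single cache set with an invented data layout: after every N writes to the set, the most write-intensive block is swapped into the physical way that has absorbed the fewest writes so far, spreading wear across the set's ways.

```python
# Toy sketch of EqualChance-style intra-set wear-leveling: periodically
# relocate the write-intensive block to the least-worn physical way.

class CacheSet:
    def __init__(self, ways, swap_period):
        self.ways = ways
        self.block_in_way = list(range(ways))   # way -> logical block id
        self.way_writes = [0] * ways            # physical wear per way
        self.block_writes = [0] * ways          # logical writes per block
        self.swap_period = swap_period
        self.count = 0

    def write(self, block):
        way = self.block_in_way.index(block)
        self.way_writes[way] += 1
        self.block_writes[block] += 1
        self.count += 1
        if self.count % self.swap_period == 0:
            hot = max(range(self.ways), key=lambda b: self.block_writes[b])
            hot_way = self.block_in_way.index(hot)
            cold_way = min(range(self.ways), key=lambda w: self.way_writes[w])
            # Move the write-intensive block into the least-worn way.
            self.block_in_way[hot_way], self.block_in_way[cold_way] = \
                self.block_in_way[cold_way], self.block_in_way[hot_way]
```

    With a pathological workload that writes one block repeatedly, the physical wear still ends up even across all ways, which is exactly the intra-set write variation the paper targets.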

  19. Ecosystem services from keystone species: diversionary seeding and seed-caching desert rodents can enhance Indian ricegrass seedling establishment

    Science.gov (United States)

    Longland, William; Ostoja, Steven M.

    2013-01-01

    Seeds of Indian ricegrass (Achnatherum hymenoides), a native bunchgrass common to sandy soils on arid western rangelands, are naturally dispersed by seed-caching rodent species, particularly Dipodomys spp. (kangaroo rats). These animals cache large quantities of seeds when mature seeds are available on or beneath plants and recover most of their caches for consumption during the remainder of the year. Unrecovered seeds in caches account for the vast majority of Indian ricegrass seedling recruitment. We applied three different densities of white millet (Panicum miliaceum) seeds as “diversionary foods” to plots at three Great Basin study sites in an attempt to reduce rodents' over-winter cache recovery so that more Indian ricegrass seeds would remain in soil seedbanks and potentially establish new seedlings. One year after diversionary seed application, a moderate level of Indian ricegrass seedling recruitment occurred at two of our study sites in western Nevada, although there was no recruitment at the third site in eastern California. At both Nevada sites, the number of Indian ricegrass seedlings sampled along transects was significantly greater on all plots treated with diversionary seeds than on non-seeded control plots. However, the density of diversionary seeds applied to plots had a marginally non-significant effect on seedling recruitment, and it was not correlated with recruitment patterns among plots. Results suggest that application of a diversionary seed type that is preferred by seed-caching rodents provides a promising passive restoration strategy for target plant species that are dispersed by these rodents.

  20. Cache-Based Aggregate Query Shipping: An Efficient Scheme of Distributed OLAP Query Processing

    Institute of Scientific and Technical Information of China (English)

    Hua-Ming Liao; Guo-Shun Pei

    2008-01-01

    Our study introduces a novel distributed query plan refinement phase in an enhanced architecture of a distributed query processing engine (DQPE). Query plan refinement generates potentially efficient distributed query plans by a reusable aggregate query shipping (RAQS) approach. The approach improves response time at the cost of pre-processing time. If the overheads cannot be compensated by the reuse of query results, RAQS is no longer favorable. Therefore a global cost estimation model is employed to choose the proper operators: RR_Agg, R_Agg, or R_Scan. For the purpose of reusing results of queries with aggregate functions in distributed query processing, a multi-level hybrid view caching (HVC) scheme is introduced. The scheme retains the advantages of partial match and aggregate query results caching. With our solution, evaluations with distributed TPC-H queries show significant improvement in average response time.
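
    The multi-level view caching scheme itself is not spelled out in this abstract; the sketch below shows only the underlying idea of answering a coarser aggregate query from a cached finer-grained view instead of rescanning base data. The schema and roll-up rule are invented for illustration.

```python
from collections import defaultdict

# Toy aggregate-result caching: a SUM materialized per day can answer a
# per-month query by re-aggregating the cached view, avoiding base scans.

class AggregateCache:
    def __init__(self, rows):
        self.rows = rows                  # (day, amount) base "table"
        self.views = {}                   # cached materialized views
        self.base_scans = 0

    def sum_by(self, level):
        if level == "day":
            if "day" not in self.views:
                self.base_scans += 1      # only path that scans base data
                v = defaultdict(float)
                for day, amount in self.rows:
                    v[day] += amount
                self.views["day"] = dict(v)
            return self.views["day"]
        if level == "month":
            daily = self.sum_by("day")    # reuse the finer-grained cache
            v = defaultdict(float)
            for day, total in daily.items():
                v[day[:7]] += total       # "2008-01-15" -> "2008-01"
            return dict(v)
```

    SUM (like COUNT) is distributive, so the roll-up is exact; averages would need to be carried as (sum, count) pairs in the cached view.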

  1. A New Caching Technique to Support Conjunctive Queries in P2P DHT

    Science.gov (United States)

    Kobatake, Koji; Tagashira, Shigeaki; Fujita, Satoshi

    P2P DHT (Peer-to-Peer Distributed Hash Table) is one of the typical techniques for realizing efficient management of shared resources distributed over a network, and keyword search over such networks, in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in P2P DHT. The basic idea of the proposed technique is to share global information on past trials by locally caching search results for conjunctive queries and registering the fact to the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is evaluated experimentally by simulation. The results of the experiments indicate that by using the proposed method, the amount of returned data is reduced by 60% compared with a conventional P2P DHT which does not support conjunctive queries.
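
    The caching idea can be sketched with a toy in-memory DHT: the first conjunctive lookup fetches the per-keyword postings and intersects them, then registers the result under a key derived from the keyword combination, so later peers fetch the (typically much smaller) cached set directly. The key format and transfer accounting are invented for illustration.

```python
# Toy DHT with conjunctive-query result caching.

class DHT:
    def __init__(self):
        self.table = {}            # key -> set of resource ids
        self.transfers = 0         # resource ids moved over the network

    def put(self, key, values):
        self.table[key] = set(values)

    def get(self, key):
        vals = self.table.get(key)
        if vals is not None:
            self.transfers += len(vals)   # model transmission cost
        return vals

def conjunctive_query(dht, keywords):
    cache_key = "&".join(sorted(keywords))   # key for the combination
    cached = dht.get(cache_key)
    if cached is not None:                   # a past trial already
        return cached                        # registered this result
    postings = [dht.get(k) or set() for k in keywords]
    result = set.intersection(*postings)
    dht.put(cache_key, result)               # register for future peers
    return result
```

    The second identical query transfers only the intersection rather than every keyword's full postings, which is the data-reduction effect the paper measures.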

  2. Federated or cached searches: Providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.

  3. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    Science.gov (United States)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received, and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.

  4. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.

  5. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real‐time systems need a time‐predictable computing platform to enable static worst‐case execution time (WCET) analysis. All performance‐enhancing features need to be WCET analyzable. However, standard data caches containing heap‐allocated data are very hard to analyze statically. In this pa... result in overly pessimistic WCET estimations. We therefore believe that an early architecture exploration by means of static timing analysis techniques helps to identify configurations suitable for hard real‐time systems.

  6. Federated or cached searches:Providing expected performance from multiple invasive species databases

    Institute of Scientific and Technical Information of China (English)

    Jim GRAHAM; Catherine S.JARNEVICH; Annie SIMPSON; Gregory J.NEWMAN; Thomas J.STOHLGREN

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.

  7. Real-Time Scheduling in Heterogeneous Systems Considering Cache Reload Time Using Genetic Algorithms

    Science.gov (United States)

    Miryani, Mohammad Reza; Naghibzadeh, Mahmoud

    Since optimal assignment of tasks in a multiprocessor system is, in almost all practical cases, an NP-hard problem, in recent years some algorithms based on genetic algorithms have been proposed. Some of these algorithms have considered real-time applications with multiple objectives, total tardiness, completion time, etc. Here, we propose a suboptimal static scheduler of nonpreemptable tasks in hard real-time heterogeneous multiprocessor systems considering time constraints and cache reload time. The approach makes use of genetic algorithm to minimize total completion time and number of processors used, simultaneously. One important issue which makes this research different from previous ones is cache reload time. The method is implemented and the results are compared against a similar method.
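
    The cost structure described above (completion time plus a cache reload penalty whenever a processor switches between tasks of different types) can be sketched with an invented task model; a single-parent mutation search stands in for the paper's full genetic algorithm, and all parameters are assumptions.

```python
import random

# Sketch of the objective such a scheduler minimizes: makespan on
# heterogeneous processors, with a cache reload penalty on type switches.

def makespan(assign, exec_time, task_type, reload):
    finish = {}                              # proc -> accumulated time
    last_type = {}                           # proc -> last task type seen
    for task, proc in enumerate(assign):
        t = exec_time[proc][task]
        if proc in last_type and last_type[proc] != task_type[task]:
            t += reload                      # cache reload on type switch
        finish[proc] = finish.get(proc, 0) + t
        last_type[proc] = task_type[task]
    return max(finish.values())

def evolve(exec_time, task_type, reload, procs, gens=200, seed=0):
    """Mutation-only stand-in for a GA over task-to-processor assignments."""
    rng = random.Random(seed)
    n = len(task_type)
    best = [rng.randrange(procs) for _ in range(n)]
    best_cost = makespan(best, exec_time, task_type, reload)
    for _ in range(gens):
        child = best[:]
        child[rng.randrange(n)] = rng.randrange(procs)   # point mutation
        cost = makespan(child, exec_time, task_type, reload)
        if cost <= best_cost:
            best, best_cost = child, cost
    return best, best_cost
```

    With a large reload penalty, assignments that keep same-type tasks on the same processor dominate, which is exactly the effect that makes cache reload time worth modeling in the fitness function.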

  8. Multi-bit upset aware hybrid error-correction for cache in embedded processors

    Science.gov (United States)

    Jiaqi, Dong; Keni, Qiu; Weigong, Zhang; Jing, Wang; Zhenzhen, Wang; Lihua, Ding

    2015-11-01

    Processors working in the space radiation environment tend to suffer from single event effects on circuits and system failures, due to cosmic rays and high-energy particle radiation. Therefore, the reliability of the processor has become an increasingly serious issue. BCH-based error correction codes can correct multi-bit errors, but they introduce large latency overhead. This paper proposes a hybrid error correction approach that combines BCH and EDAC to correct both multi-bit and single-bit errors for caches at low cost. The proposed technique can correct up to four-bit errors, and corrects single-bit errors in one cycle. Evaluation results show that the proposed hybrid error-correction scheme can improve the performance of cache accesses by up to 20% compared to the pure BCH scheme.
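
    The record pairs BCH (multi-bit, multi-cycle) with a fast single-bit path. Sketched below is only the single-bit half, as a textbook single-error-correcting Hamming code over 8 data bits; it illustrates why single-bit correction fits in one cycle: the syndrome directly indexes the flipped bit. The BCH path is omitted, and this layout is a generic example, not the paper's circuit.

```python
# Textbook Hamming SEC over 8 data bits (parity bits at power-of-two
# positions 1, 2, 4, 8; data bits fill the remaining positions 3..12).

def hamming_encode(data_bits):
    """Encode 8 data bits; returns a 13-slot list (index 0 unused)."""
    code = [0] * 13
    j = 0
    for pos in range(1, 13):
        if pos & (pos - 1):               # non-power-of-two: data slot
            code[pos] = data_bits[j]
            j += 1
    for p in (1, 2, 4, 8):                # even parity over covered bits
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code

def hamming_correct(code):
    """Fix up to one flipped bit in place; return the 8 data bits."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome |= p
    if syndrome:                          # syndrome == flipped position
        code[syndrome] ^= 1
    return [code[pos] for pos in range(1, 13) if pos & (pos - 1)]
```

    Any single flipped bit, data or parity, yields a nonzero syndrome equal to its position, so correction is a single XOR rather than the iterative decoding BCH needs for multi-bit errors.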

  9. Leveraging shared caches for parallel temporal blocking of stencil codes on multicore processors and clusters

    CERN Document Server

    Wittmann, Markus; Treibig, Jan; Wellein, Gerhard

    2010-01-01

    Bandwidth-starved multicore chips have become ubiquitous. It is well known that the performance of stencil codes can be improved by temporal blocking, lessening the pressure on the memory interface. We introduce a new pipelined approach that makes explicit use of shared caches in multicore environments and minimizes synchronization and boundary overhead. Benchmark results are presented for three current x86-based microprocessors, showing clearly that our optimization works best on designs with high-speed shared caches and low memory bandwidth per core. We furthermore demonstrate that simple bandwidth-based performance models are inaccurate for this kind of algorithm and employ a more elaborate, synthetic modeling procedure. Finally we show that temporal blocking can be employed successfully in a hybrid shared/distributed-memory environment, albeit with limited benefit at strong scaling.
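
    Temporal blocking as described here can be illustrated on a 1-D Jacobi stencil: each tile is widened by one ghost cell per time step and advanced several steps while it stays cache-resident, trading redundant halo computation for fewer sweeps over main memory. The stencil, tile width, and boundary treatment below are illustrative assumptions, not the paper's pipelined scheme.

```python
# 1-D Jacobi with fixed boundaries: reference sweep vs. temporal blocking.

def jacobi_naive(a, steps):
    a = a[:]
    for _ in range(steps):
        a = [a[0]] + [(a[i - 1] + a[i] + a[i + 1]) / 3
                      for i in range(1, len(a) - 1)] + [a[-1]]
    return a

def jacobi_blocked(a, steps, tile):
    n = len(a)
    out = a[:]
    for start in range(1, n - 1, tile):
        end = min(start + tile, n - 1)
        lo = max(start - steps, 0)        # ghost region: `steps` cells
        hi = min(end + steps, n)          # per side keeps the tile exact
        block = a[lo:hi]
        for _ in range(steps):            # advance the tile in "cache"
            block = [block[0]] + [(block[j - 1] + block[j] + block[j + 1]) / 3
                                  for j in range(1, len(block) - 1)] + [block[-1]]
        out[start:end] = block[start - lo:end - lo]
    return out
```

    The widened ghost region guarantees that every interior cell of a tile only depends on initial values inside the tile's extended window, so the blocked result matches the naive multi-sweep result while each tile is read from memory once.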

  10. DETERMINATION OF PALEOEARTHQUAKE TIMING AND MAGNITUDES ON THE SOUTHERN SEGMENT OF THE EAST CACHE FAULT, UTAH

    OpenAIRE

    McCalpin, James P.

    2012-01-01

    We investigated the late Quaternary rupture history of the southern East Cache fault zone (ECFZ), northern Utah, with geologic mapping, paleoseismic logging of fault trenches, ground-penetrating radar, and optically stimulated luminescence dating. McCalpin (1989) indicated that the southern segment of the ECFZ consisted of three strands. We excavated four trenches across these strands and evaluated the stratigraphy and structure of the sites. We conclude that the western fault strand of the EC...

  11. Hybrid information retrieval policies based on cooperative cache in mobile P2P networks

    Institute of Scientific and Technical Information of China (English)

    Quanqing XU; Hengtao SHEN; Zaiben CHEN; Bin CUI; Xiaofang ZHOU; Yafei DAI

    2009-01-01

    The concept of Peer-to-Peer (P2P) has been introduced into mobile networks, which has led to the emergence of mobile P2P networks, and originated potential applications in many fields. However, mobile P2P networks are subject to the limitations of transmission range, and highly dynamic and unpredictable network topology, giving rise to many new challenges for efficient information retrieval. In this paper, we propose an automatic and economical hybrid information retrieval approach based on cooperative cache. In this method, the region covered by a mobile P2P network is partitioned into subregions, each of which is identified by a unique ID and known to all peers. All the subregions then constitute a mobile Kademlia (MKad) network. The proposed hybrid retrieval approach aims to utilize the flooding-based and Distributed Hash Table (DHT)-based schemes in MKad for indexing and searching according to the designed utility functions. To further facilitate information retrieval, we present an effective cache update method by considering all relevant factors. At the same time, the combination of two different methods for cache update is also introduced. One of them is pull based on time stamp, including two different pulls: an on-demand pull and a periodical pull; the other is a push strategy using update records. Furthermore, we provide detailed mathematical analysis on the cache hit ratio of our approach. Simulation experiments in NS-2 showed that the proposed approach is more accurate and efficient than the existing methods.

  12. Final Independent External Peer Review Report, Cache la Poudre at Greeley, Colorado General Investigation Feasibility Study

    Science.gov (United States)

    2014-06-06

    upstream of Greeley. While the main stem of the Cache la Poudre is considered a wild and scenic river in the Rocky Mountains, irrigation and gravel...are applied should be included. In addition, infestation by the emerald ash borer could alter the habitat structure and eliminate one of the tree...species considered important to the planned restoration. A description of the potential impact of the emerald ash borer on green ash should be provided

  13. GreenDelivery: Proactive Content Caching and Push with Energy-Harvesting-based Small Cells

    OpenAIRE

    Zhou, Sheng; Gong, Jie; ZHOU, Zhenyu; Chen, Wei; Niu, Zhisheng

    2015-01-01

    The explosive growth of mobile multimedia traffic calls for scalable wireless access with high quality of service and low energy cost. Motivated by the emerging energy harvesting communications, and the trend of caching multimedia contents at the access edge and user terminals, we propose a paradigm-shift framework, namely GreenDelivery, enabling efficient content delivery with energy harvesting based small cells. To resolve the two-dimensional randomness of energy harvesting and content requ...

  14. CASA: A New IFU Architecture for Power-Efficient Instruction Cache and TLB Designs

    Institute of Scientific and Technical Information of China (English)

    Han-Xin Sun; Kun-Peng Yang; Yu-Lai Zhao; Dong Tong; Xu Cheng

    2008-01-01

    The instruction fetch unit (IFU) usually dissipates a considerable portion of total chip power. In traditional IFU architectures, as soon as the fetch address is generated, it needs to be sent to the instruction cache and TLB arrays for instruction fetch. Since limited work can be done by the power-saving logic after the fetch address generation and before the instruction fetch, previous power-saving approaches usually suffer from the unnecessary restrictions from traditional IFU architectures. In this paper, we present CASA, a new power-aware IFU architecture, which effectively reduces the unnecessary restrictions on the power-saving approaches and provides sufficient time and information for the power-saving logic of both instruction cache and TLB. By analyzing, recording, and utilizing the key information of the dynamic instruction flow early in the front-end pipeline, CASA brings the opportunity to maximize the power efficiency and minimize the performance overhead. Compared to the baseline configuration, the leakage and dynamic power of instruction cache is reduced by 89.7% and 64.1% respectively, and the dynamic power of instruction TLB is reduced by 90.2%. Meanwhile the performance degradation in the worst case is only 0.63%. Compared to previous state-of-the-art power-saving approaches, the CASA-based approach saves IFU power more effectively, incurs less performance overhead and achieves better scalability. It is promising that CASA can stimulate further work on architectural solutions to power-efficient IFU designs.

  15. The role of seed mass on the caching decision by agoutis, Dasyprocta leporina (Rodentia: Agoutidae)

    Directory of Open Access Journals (Sweden)

    Mauro Galetti

    2010-06-01

    Full Text Available It has been shown that the local extinction of large-bodied frugivores may cause cascading consequences for plant recruitment and overall plant diversity. However, to what extent the resilient mammals can compensate the role of seed dispersal in defaunated sites is poorly understood. Caviomorph rodents, especially Dasyprocta spp., are usually resilient frugivores in hunted forests and their seed caching behavior may be important for many plant species which lack primary dispersers. We compared the effect of the variation in seed mass of six vertebrate-dispersed plant species on the caching decision by the red-rumped agoutis Dasyprocta leporina Linnaeus, 1758 in a land-bridge island of the Atlantic forest, Brazil. We found a strong positive effect of seed mass on seed fate and dispersal distance, but there was a great variation between species. Agoutis never cached seeds smaller than 0.9 g and larger seeds were dispersed for longer distances. Therefore, agoutis can be important seed dispersers of large-seeded species in defaunated forests.

  16. Cache-Oblivious Implicit Predecessor Dictionaries with the Working Set Property

    CERN Document Server

    Brodal, Gerth Stølting

    2011-01-01

    In this paper we present an implicit dynamic dictionary with the working-set property, supporting insert($e$) and delete($e$) in $O(\log n)$ time, predecessor($e$) in $O(\log w(\mathrm{pred}(e)))$ time, successor($e$) in $O(\log w(\mathrm{succ}(e)))$ time and search($e$) in $O(\log \min(w(\mathrm{pred}(e)), w(e), w(\mathrm{succ}(e))))$ time, where $n$ is the number of elements stored in the dictionary, $w(e)$ is the number of distinct elements searched for since element $e$ was last searched for, and $\mathrm{pred}(e)$ and $\mathrm{succ}(e)$ are the predecessor and successor of $e$, respectively. The time bounds are all worst-case. The dictionary stores the elements in an array of size $n$ using no additional space. In the cache-oblivious model the $\log$ is base $B$ and the cache-obliviousness is due to our black-box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound. Previous implicit structures requi...

  17. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Full Text Available Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  18. Proficient Pair of Replacement Algorithms on L1 and L2 Cache for Merge Sort

    CERN Document Server

    Gupta, Richa

    2010-01-01

    The memory hierarchy is used to bridge the gap between processor and memory speeds. Cache memory is the fast memory that masks this speed difference. The access patterns of Level 1 cache (L1) and Level 2 cache (L2) are different: when the CPU does not find the desired data in L1, it accesses L2. Thus a replacement algorithm that works efficiently on L1 may not be as efficient on L2. Similarly, various applications such as Matrix Multiplication, Web, Fast Fourier Transform (FFT) etc. have varying access patterns, so the same replacement algorithm for all types of applications may not be efficient. This paper works toward finding an efficient pair of replacement algorithms on L1 and L2 for Merge Sort. With the memory reference string of Merge Sort, we have analyzed the behavior of various existing replacement algorithms on L1. The existing replacement algorithms taken into consideration are: Least Recently Used (LRU), Least Frequently Used (LFU) and First In First Out (FIFO). After A...
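
As a concrete reference point for the policies compared in this record, an LRU cache can be replayed against a memory-reference string in a few lines of Python (an illustrative sketch, not the paper's simulator):

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used: evict the entry that has gone unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # keys kept in least- to most-recently-used order

    def access(self, key):
        """Simulate one memory reference; returns True on a cache hit."""
        if key in self.store:
            self.store.move_to_end(key)       # refresh recency on a hit
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)    # evict the least recently used entry
        self.store[key] = True
        return False
```

Replaying the same reference string through LFU or FIFO variants and comparing hit counts is exactly the kind of trace-driven comparison the paper performs for Merge Sort.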

  19. A TIME INDEX BASED APPROACH FOR CACHE SHARING IN MOBILE ADHOC NETWORKS

    Directory of Open Access Journals (Sweden)

    Lilly Sheeba S

    2011-07-01

    Full Text Available Initially, wireless networks were fully infrastructure-based and hence required the installation of base stations. A base station is a single point of failure and causes scalability problems. With the advent of mobile ad hoc networks, these problems are mitigated by allowing certain mobile nodes to form a dynamic and temporary communication network without any preexisting infrastructure. Caching is an important technique to enhance performance in any network. Particularly in MANETs, it is important to cache frequently accessed data not only to reduce average latency and wireless bandwidth usage but also to avoid heavy traffic near the data centre. With data being cached by mobile nodes, a request to the data centre can easily be serviced by a nearby mobile node instead of the data centre alone. In this paper we propose a system, the Time Index Based Approach, that focuses on providing recent data on demand. In this system, the data comes along with a time stamp. In our work we propose three policies, namely Item Discovery, Item Admission and Item Replacement, to provide data availability even with limited resources. Data consistency is ensured since, if the mobile client receives the same data item with an updated time, the previous content along with its time is replaced so that only recent data is provided. Data availability is promised by mobile nodes, instead of the data server. We enhance the space availability in a node by deploying an automated replacement policy.

  20. Improving Performance on WWW using Intelligent Predictive Caching for Web Proxy Servers

    Directory of Open Access Journals (Sweden)

    J. B. Patil

    2011-01-01

    Full Text Available Web proxy caching is used to improve the performance of the Web infrastructure. It aims to reduce network traffic, server load, and user perceived retrieval delays. The heart of a caching system is its page replacement policy, which needs to make good replacement decisions when its cache is full and a new document needs to be stored. The latest and most popular replacement policies like GDSF and GDSF# use the file size, access frequency, and age in the decision process. The effectiveness of any replacement policy can be evaluated using two metrics: hit ratio (HR) and byte hit ratio (BHR). There is always a trade-off between HR and BHR. In this paper, using three different Web proxy server logs, we use trace driven analysis to evaluate the effects of different replacement policies on the performance of a Web proxy server. We propose a modification of the GDSF# policy, IPGDSF#. Our simulation results show that our proposed replacement policy IPGDSF# performs better than several policies proposed in the literature in terms of hit rate as well as byte hit rate.
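
GDSF-style policies rank cached documents by a priority that grows with access frequency and retrieval cost, shrinks with size, and is inflated by a "clock" that ages out old entries. A simplified sketch of that idea follows; the class name and exact priority formula are illustrative, not the paper's IPGDSF# variant:

```python
import heapq

class GDSFCache:
    """Greedy-Dual-Size-Frequency style cache: priority = clock + freq * cost / size."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.clock = 0.0        # inflation value: rises to each evicted priority
        self.entries = {}       # url -> (priority, freq, size)
        self.heap = []          # (priority, url) min-heap; may hold stale entries

    def request(self, url, size, cost=1.0):
        """Record one request; returns True on a cache hit."""
        if url in self.entries:
            _, freq, size = self.entries[url]
            freq, hit = freq + 1, True
        else:
            freq, hit = 1, False
            # Evict lowest-priority objects until the new one fits.
            while self.used + size > self.capacity and self.heap:
                prio, victim = heapq.heappop(self.heap)
                if victim in self.entries and self.entries[victim][0] == prio:
                    self.clock = prio                       # age the whole cache
                    self.used -= self.entries.pop(victim)[2]
            self.used += size
        prio = self.clock + freq * cost / size
        self.entries[url] = (prio, freq, size)
        heapq.heappush(self.heap, (prio, url))
        return hit
```

The heap is maintained lazily: superseded entries are skipped at pop time by checking them against the current priority, a common pattern since `heapq` has no decrease-key operation.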

  1. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  2. Non-Toxic Metabolic Management of Metastatic Cancer in VM Mice: Novel Combination of Ketogenic Diet, Ketone Supplementation, and Hyperbaric Oxygen Therapy.

    Science.gov (United States)

    Poff, A M; Ward, N; Seyfried, T N; Arnold, P; D'Agostino, D P

    2015-01-01

    The Warburg effect and tumor hypoxia underlie a unique cancer metabolic phenotype characterized by glucose dependency and aerobic fermentation. We previously showed that two non-toxic metabolic therapies - the ketogenic diet with concurrent hyperbaric oxygen (KD+HBOT) and dietary ketone supplementation - could increase survival time in the VM-M3 mouse model of metastatic cancer. We hypothesized that combining these therapies could provide an even greater therapeutic benefit in this model. Mice receiving the combination therapy demonstrated a marked reduction in tumor growth rate and metastatic spread, and lived twice as long as control animals. To further understand the effects of these metabolic therapies, we characterized the effects of high glucose (control), low glucose (LG), ketone supplementation (βHB), hyperbaric oxygen (HBOT), or combination therapy (LG+βHB+HBOT) on VM-M3 cells. Individually and combined, these metabolic therapies significantly decreased VM-M3 cell proliferation and viability. HBOT, alone or in combination with LG and βHB, increased ROS production in VM-M3 cells. This study strongly supports further investigation into this metabolic therapy as a potential non-toxic treatment for late-stage metastatic cancers.

  3. Unique properties of the classical bovine spongiform encephalopathy strain and its emergence from H-type bovine spongiform encephalopathy substantiated by VM transmission studies.

    Science.gov (United States)

    Bencsik, Anna; Leboidre, Mikael; Debeer, Sabine; Aufauvre, Claire; Baron, Thierry

    2013-03-01

    In addition to classical bovine spongiform encephalopathy (C-BSE), which is recognized as being at the origin of the human variant form of Creutzfeldt-Jakob disease, 2 rare phenotypes of BSE (H-type BSE [H-BSE] and L-type BSE [L-BSE]) were identified in 2004. H-type BSE and L-BSE are considered to be sporadic forms of prion disease in cattle because they differ from C-BSE with respect to incubation period, vacuolar pathology in the brain, and biochemical properties of the protease-resistant prion protein (PrP) in natural hosts and in some mouse models that have been tested. Recently, we showed that H-BSE transmitted to C57Bl/6 mice resulted in a dissociation of the phenotypic features, that is, some mice showed an H-BSE phenotype, whereas others had a C-BSE phenotype. Here, these 2 phenotypes were further studied in VM mice and compared with cattle C-BSE, H-BSE, and L-BSE. Serial passages from the C-BSE-like phenotype on VM mice retained similarities with C-BSE. Moreover, our results indicate that strains 301V and 301C derived from C-BSE transmitted to VM and C57Bl/6 mice, respectively, are fundamentally the same strain. These VM transmission studies confirm the unique properties of the C-BSE strain and support the emergence of a strain that resembles C-BSE from H-BSE.

  4. Non-Toxic Metabolic Management of Metastatic Cancer in VM Mice: Novel Combination of Ketogenic Diet, Ketone Supplementation, and Hyperbaric Oxygen Therapy.

    Directory of Open Access Journals (Sweden)

    A M Poff

    Full Text Available The Warburg effect and tumor hypoxia underlie a unique cancer metabolic phenotype characterized by glucose dependency and aerobic fermentation. We previously showed that two non-toxic metabolic therapies - the ketogenic diet with concurrent hyperbaric oxygen (KD+HBOT) and dietary ketone supplementation - could increase survival time in the VM-M3 mouse model of metastatic cancer. We hypothesized that combining these therapies could provide an even greater therapeutic benefit in this model. Mice receiving the combination therapy demonstrated a marked reduction in tumor growth rate and metastatic spread, and lived twice as long as control animals. To further understand the effects of these metabolic therapies, we characterized the effects of high glucose (control), low glucose (LG), ketone supplementation (βHB), hyperbaric oxygen (HBOT), or combination therapy (LG+βHB+HBOT) on VM-M3 cells. Individually and combined, these metabolic therapies significantly decreased VM-M3 cell proliferation and viability. HBOT, alone or in combination with LG and βHB, increased ROS production in VM-M3 cells. This study strongly supports further investigation into this metabolic therapy as a potential non-toxic treatment for late-stage metastatic cancers.

  5. NMDA antagonist, but not nNOS inhibitor, requires AMPA receptors in the ventromedial prefrontal cortex (vmPFC) to induce antidepressant-like effects

    DEFF Research Database (Denmark)

    Pereira, V. S.; Wegener, Gregers; Joca, S. R.

    2013-01-01

    Depressed individuals and stressed animals show enhanced levels of glutamate and neuronal nitric oxide synthase (nNOS) activity in limbic structures, including the vmPFC. Systemic administration of glutamatergic NMDA receptor antagonists or inhibitors of nitric oxide (NO) synthesis induces antide...

  6. EarthCache as a Tool to Promote Earth-Science in Public School Classrooms

    Science.gov (United States)

    Gochis, E. E.; Rose, W. I.; Klawiter, M.; Vye, E. C.; Engelmann, C. A.

    2011-12-01

    Geoscientists often find it difficult to bridge the gap in communication between university research and what is learned in the public schools. Today's schools operate in a high stakes environment that only allows instruction based on State and National Earth Science curriculum standards. These standards are often unknown by academics or are written in a style that obfuscates the transfer of emerging scientific research to students in the classroom. Earth Science teachers are in an ideal position to make this link because they have a background in science as well as a solid understanding of the required curriculum standards for their grade and the pedagogical expertise to pass on new information to their students. As part of the Michigan Teacher Excellence Program (MiTEP), teachers from Grand Rapids, Kalamazoo, and Jackson school districts participate in 2 week field courses with Michigan Tech University to learn from earth science experts about how the earth works. This course connects Earth Science Literacy Principles' Big Ideas and common student misconceptions with standards-based education. During the 2011 field course, we developed and began to implement a three-phase EarthCache model that will provide a geospatial interactive medium for teachers to translate the material they learn in the field to the students in their standards based classrooms. MiTEP participants use GPS and Google Earth to navigate to Michigan sites of geo-significance. At each location academic experts aid participants in making scientific observations about the locations' geologic features, and use "reading the rocks" methodology to interpret the area's geologic history. The participants are then expected to develop their own EarthCache site to be used as a pedagogical tool bridging the gap between standards-based classroom learning, contemporary research and unique outdoor field experiences. The final phase supports teachers in integrating inquiry based, higher-level learning student

  7. A Cache Invalidation Scheme Based on Mobile Agent in Mobile Computing Environments

    Institute of Scientific and Technical Information of China (English)

    吴劲; 卢显良; 任立勇

    2003-01-01

    Caching can reduce the bandwidth requirement in a mobile computing environment as well as minimize the energy consumption of mobile hosts. To affirm the validity of mobile hosts' cache contents, servers periodically broadcast cache invalidation reports that contain information about data that has been updated. However, as mobile hosts may operate in sleeping mode (disconnected mode), it is possible that some reports may be missed and the clients are forced to discard the entire cache content. In this paper, we present a cache invalidation scheme based on mobile agents in mobile computing environments, which can manage consistency between mobile hosts and servers and avoid losing cache invalidation reports.
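
The report-based invalidation scheme described above can be sketched on the client side. This is a hedged sketch assuming a fixed report window; the paper's mobile-agent layer, which preserves reports on behalf of sleeping hosts, is not modeled here:

```python
WINDOW = 60.0   # seconds of update history one broadcast report covers (assumed value)

class MobileClient:
    """Client-side cache driven by server-broadcast invalidation reports."""

    def __init__(self, now=0.0):
        self.cache = {}              # item_id -> cached value
        self.last_report_at = now    # time of the last report we processed

    def apply_report(self, report_time, updated_ids):
        """Apply one broadcast report listing items updated within the window."""
        if report_time - self.last_report_at > WINDOW:
            # We slept past the report window, so updates may have been missed:
            # without an agent to replay them, nothing in the cache is trustworthy.
            self.cache.clear()
        else:
            for item in updated_ids:
                self.cache.pop(item, None)   # drop only the invalidated items
        self.last_report_at = report_time
```

The `clear()` branch is precisely the waste the paper's mobile agent avoids: an agent that tracks the host's state can replay the missed invalidations instead of forcing a full cache drop.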

  8. Access Driven Cache Timing Attack Against AES

    Institute of Scientific and Technical Information of China (English)

    赵新杰; 王韬; 郭世泽; 郑媛媛

    2011-01-01

    Firstly, this paper presents an access driven Cache timing attack model, proposes two general methods, non-elimination and elimination, to analyze Cache information leakage during AES encryption, and builds the Cache information leakage model. Next, it uses quantitative analysis to attack a sample with the above elimination analysis method, and provides some solutions for the potential problems of a real attack. Finally, this paper describes 12 local and remote attacks on AES in OpenSSL v.0.9.8a and v.0.9.8j. Experiment results demonstrate that: the access driven Cache timing attack has strong applicability in both local and remote environments; the AES lookup table and Cache structure make AES vulnerable to this type of attack, and the least sample size required to recover a full AES key is about 13; the last round AES implementation in OpenSSL v.0.9.8j, which abandoned the T4 lookup table, cannot secure itself from the access driven Cache timing attack; the attack results strongly verify the correctness of the quantitative Cache information leakage theory and key analysis methods above.

  9. Altered catalytic activity of and DNA cleavage by DNA topoisomerase II from human leukemic cells selected for resistance to VM-26.

    Science.gov (United States)

    Danks, M K; Schmidt, C A; Cirtain, M C; Suttle, D P; Beck, W T

    1988-11-29

    The simultaneous development of resistance to the cytotoxic effects of several classes of natural product anticancer drugs, after exposure to only one of these agents, is referred to as multiple drug resistance (MDR). At least two distinct mechanisms for MDR have been postulated: that associated with P-glycoprotein and that thought to be due to an alteration in DNA topoisomerase II activity (at-MDR). We describe studies with two sublines of human leukemic CCRF-CEM cells approximately 50-fold resistant (CEM/VM-1) and approximately 140-fold resistant (CEM/VM-1-5) to VM-26, a drug known to interfere with DNA topoisomerase II activity. Each of these lines is cross-resistant to other drugs known to affect topoisomerase II but not cross-resistant to vinblastine, an inhibitor of mitotic spindle formation. We found little difference in the amount of immunoreactive DNA topoisomerase II in 1.0 M NaCl nuclear extracts of the two resistant and parental cell lines. However, topoisomerase II in nuclear extracts of the resistant sublines is altered in both catalytic activity (unknotting) of and DNA cleavage by this enzyme. Also, the rate at which catenation occurs is 20-30-fold slower with the CEM/VM-1-5 preparations. The effect of VM-26 on both strand passing and DNA cleavage is inversely related to the degree of primary resistance of each cell line. Our data support the hypothesis that at-MDR is due to an alteration in topoisomerase II or in a factor modulating its activity.

  10. Performance implications from sizing a VM on multi-core systems: A Data analytic application s view

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Seung-Hwan [ORNL; Horey, James L [ORNL; Begoli, Edmon [ORNL; Yao, Yanjun [University of Tennessee, Knoxville (UTK); Cao, Qing [University of Tennessee, Knoxville (UTK)

    2013-01-01

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  11. Cooperative caching scheme based on the minimization of total cost for P2P caches

    Institute of Scientific and Technical Information of China (English)

    刘银龙; 汪敏; 马伟; 周旭; 胡亚辉

    2015-01-01

    To reduce the total cost of a P2P cache system, a cooperative cache scheme based on the minimization of total cost is proposed. In the scheme, delivery cost and storage cost are taken into account, and inter-ISP link cost, popularity, file size, and storage cost are used to evaluate each object's caching gain value, a new concept defined to estimate the benefit of storing or replacing an object. When a replacement is needed, the objects with the minimum caching gain value are evicted first. Simulation results show that the proposed scheme can effectively reduce the total cost of the P2P cache system.
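
The caching-gain idea can be sketched as follows. The formula and field names here are an illustrative reading of the abstract (traffic cost avoided, weighted by popularity, against the cost of the space occupied), not the paper's exact definition:

```python
def caching_gain(inter_isp_cost, popularity, size, storage_cost_rate):
    """Benefit of keeping an object cached: inter-ISP transfer cost avoided per
    byte, weighted by request popularity, minus the cost of the storage it uses.
    (Illustrative formula; the paper defines its own gain expression.)"""
    return inter_isp_cost * popularity / size - storage_cost_rate * size

def pick_victim(objects):
    """On replacement, evict the cached object with the smallest caching gain."""
    return min(objects, key=lambda o: caching_gain(
        o["isp_cost"], o["popularity"], o["size"], o["storage_rate"]))
```

A rarely requested object that is cheap to re-fetch thus yields first, while a popular object whose misses would cross an expensive inter-ISP link is retained.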

  12. A New Resources Provisioning Method Based on QoS Differentiation and VM Resizing in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Full Text Available In order to improve host energy efficiency in IaaS, we propose an adaptive host resource provisioning method, CoST, which is based on QoS differentiation and VM resizing. The control model can adaptively adjust control parameters according to real-time application performance, in order to cope with changes in load. CoST takes advantage of the fact that different types of applications have different degrees of sensitivity to performance and cost. It places two different types of VMs on the same host and dynamically adjusts their sizes based on load forecasting and QoS feedback. It not only guarantees the performance defined in the SLA, but also keeps the host running in an energy-efficient state. Real Google cluster traces and host power data are used to evaluate the proposed method. Experimental results show that CoST can provide a performance-sensitive application with steady QoS and simultaneously speed up the overall processing of a performance-tolerant application by 20-66%. The host energy efficiency is significantly improved by 7-23%.

  13. PROOF as a Service on the Cloud: a Virtual Analysis Facility based on the CernVM ecosystem

    CERN Document Server

    Berzano, Dario; Buncic, Predrag; Charalampidis, Ioannis; Ganis, Gerardo; Lestaris, Georgios; Meusel, René

    2014-01-01

    PROOF, the Parallel ROOT Facility, is a ROOT-based framework which enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be not as straightforward. Recently great efforts have been spent to make the provisioning of generic PROOF analysis facilities with zero configuration, with the added advantages of positively affecting both stability and scalability, making the deployment operations feasible even for the end user. Since a growing amount of large-scale computing resources are nowadays made available by Cloud providers in a virtualized form, we have developed the Virtual PROOF-based Analysis Facility: a cluster appliance combining the solid CernVM ecosystem and PoD (PROOF on Demand), ready to be deployed on the Cloud and leveraging some peculiar Cloud features such as elasticity. We will show how this approach is effective both for sy...

  14. DEPENDENCE OF THE SLIDER DEFORMATION AND MACHINING PRECISION ON THE MULTIPURPOSE MACHINE-TOOL COMPLEX OF VM SERIES

    Directory of Open Access Journals (Sweden)

    Berezhnoy S. B.

    2016-04-01

    Full Text Available The article is devoted to the development of the high-tech metal-working industry and to the use of unmanned technology. We recommend measures to improve the accuracy and quality of manufacturing of complex and large workpieces weighing up to 100 tons. To date, the technical level of many economy sectors is largely determined by the level of the means of production. Engineering developments drive the overall automation and mechanization of production processes in industry, construction, agriculture, transport and other sectors. We analyzed the forms of slider cross-sections and the errors affecting the accuracy of workpiece manufacturing, and simulated the cutting forces and slider deformations. We propose measures to increase manufacturing accuracy on the multi-purpose machine-tool complexes of the VM series, analyze the dependence of the slider deformation on the cutting forces and the slider shape for different types of processing, obtain a graph relating cutting force and manufacturing precision, and define the optimal shape of the slider cross-section to increase rigidity and reduce slider deformation in metal cutting.

  15. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Document Server

    CERN. Geneva

    2012-01-01

    dCache is a high performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an RFC 1831- and RFC 2203-compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache's NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache NFS v4.1 implementations, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  16. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    Science.gov (United States)

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimizing performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate on a block as much as possible before proceeding to another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data from a thorough study of the performance of tomographic reconstruction under varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.
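The blocking scheme the abstract describes can be illustrated with a small sketch. This is not Tomo3D code: the kernel (a matrix transpose) and the tile size are illustrative choices used only to show how a 2D pass is reorganized so that each tile is finished before the next one is touched.

```python
# Illustrative sketch of cache blocking (loop tiling): a 2D array is processed
# in BLOCK x BLOCK tiles so each tile stays resident in cache while it is
# reused. The transpose kernel and block size are arbitrary choices.

def blocked_transpose(a, n, block=64):
    """Transpose an n x n matrix (list of lists) tile by tile."""
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, block):          # iterate over tile origins
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j][i] = a[i][j]    # all accesses stay within one tile pair
    return out

n = 128
a = [[i * n + j for j in range(n)] for i in range(n)]
t = blocked_transpose(a, n, block=32)
assert all(t[j][i] == a[i][j] for i in range(n) for j in range(n))
```

The quasi-optimal tuning the article derives amounts to picking `block` so that the working set of one tile pair fits in a given cache level.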

  16. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    Directory of Open Access Journals (Sweden)

    Jose-Ignacio Agulleiro

    2015-06-01

    Full Text Available Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimizing performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate on a block as much as possible before proceeding to another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data from a thorough study of the performance of tomographic reconstruction under varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  18. An Improved Particle Swarm Optimization Based on Deluge Approach for Enhanced Hierarchical Cache Optimization in IPTV Networks

    Directory of Open Access Journals (Sweden)

    M. Somu

    2014-05-01

    Full Text Available In recent years, the IP network has been considered as a new delivery network for TV services. A majority of the telecommunication industries use the IP network to offer on-demand and linear TV services, as it offers two-way, high-speed communication. In order to utilize the IP network effectively and economically, caching is the technique usually preferred. In an IPTV system, a managed network is used to deliver TV services, requests for Video on Demand (VOD) objects are typically concentrated intensively within limited periods, and user preferences fluctuate dynamically. Furthermore, the VOD content is updated often under the control of IPTV providers. In order to minimize this traffic and the overall network cost, segments of the video content are stored in caches closer to subscribers, for example at a Digital Subscriber Line Access Multiplexer (DSLAM), a Central Office (CO) or an Intermediate Office (IO). The major problem addressed in this approach is determining the optimal cache memory that should be assigned in order to attain maximum cost effectiveness. This approach uses an effective Great Deluge algorithm based Particle Swarm Optimization (GDPSO) approach for attaining the optimal cache memory size, which in turn minimizes the overall network cost. The analysis shows that hierarchical distributed caching can save significant network cost through the utilization of the GDPSO algorithm.
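The Great Deluge component that GDPSO layers onto particle moves can be sketched in a few lines. The cost function, step size and level schedule below are illustrative stand-ins, not the paper's network-cost model or its actual PSO update.

```python
import random

# Hedged sketch of the Great Deluge acceptance rule: a candidate is accepted
# only if its cost is at or below a "water level" that is steadily lowered.
# cost() is a toy surrogate for network cost versus cache size (minimum at 10);
# the perturbation stands in for a PSO velocity update.

def cost(size):
    return (size - 10.0) ** 2

random.seed(7)
x = 50.0                   # current candidate cache size
level = cost(x) + 100.0    # initial water level (generous tolerance)
rain = 1.0                 # how fast the level drops each iteration

for _ in range(3000):
    cand = x + random.uniform(-2.0, 2.0)   # stand-in for a particle move
    if cost(cand) <= level:                # Great Deluge acceptance test
        x = cand
    level = max(level - rain, cost(x))     # level falls, never below current cost

# x ends near the cost minimum; once the level reaches the current cost,
# only non-worsening moves survive, so the search cannot drift away again
```

Unlike simulated annealing, the acceptance test is deterministic: the falling level, not a random coin flip, decides which worsening moves are tolerated early on.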

  19. Interference of Quorum Sensing by Delftia sp. VM4 Depends on the Activity of a Novel N-Acylhomoserine Lactone-Acylase.

    Directory of Open Access Journals (Sweden)

    Vimal B Maisuria

    Full Text Available Turf soil bacterial isolate Delftia sp. VM4 can degrade exogenous N-acyl homoserine lactone (AHL); hence it effectively attenuates the virulence of the bacterial soft rot pathogen Pectobacterium carotovorum subsp. carotovorum strain BR1 (Pcc BR1) as a consequence of quorum sensing inhibition. Isolated Delftia sp. VM4 can grow in minimal medium supplemented with AHL as a sole source of carbon and energy. It also possesses the ability to degrade various AHL molecules in a short time interval. Delftia sp. VM4 suppresses AHL accumulation and the production of virulence determinant enzymes by Pcc BR1 without interfering with its growth during co-culture cultivation. The quorum quenching activity was lost after treatment with trypsin and proteinase K. The protein with quorum quenching activity was purified by a three-step process. Matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) and mass spectrometry (MS/MS) analysis revealed that the AHL-degrading enzyme (82 kDa) demonstrates homology with the NCBI database hypothetical protein (Daci_4366) of D. acidovorans SPH-1. The purified AHL acylase of Delftia sp. VM4 demonstrated optimum activity at 20-40°C and pH 6.2, as well as an AHL-acylase-type mode of action. It possesses similarity with an α/β-hydrolase fold protein, which makes it unique among the known AHL acylases with domains of the N-terminal nucleophile (Ntn) hydrolase superfamily. In addition, the kinetic and thermodynamic parameters for hydrolysis of the different AHL substrates by the purified AHL-acylase were estimated. Here we present the studies that investigate the mode of action and kinetics of AHL degradation by purified AHL acylase from Delftia sp. VM4. We characterized an AHL-inactivating enzyme from Delftia sp. VM4, identified as an AHL acylase showing distinctive similarity with an α/β-hydrolase fold protein, described its biochemical and thermodynamic properties for the first time and revealed its potential application as an anti

  20. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

    Full Text Available In multitasking real-time systems, the choice of scheduling algorithm is an important factor in ensuring that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which then affect the schedulability of the task. In this paper we compare FP and EDF scheduling algorithms in the presence of CRPD using state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF, making the choice of the task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.
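The contrast the paper studies can be illustrated with the classic utilisation bounds for implicit-deadline periodic tasks (Liu and Layland): EDF is schedulable up to utilisation 1, while FP under rate-monotonic priorities is only guaranteed up to n(2^(1/n)-1). The `crpd` term below is a crude per-job inflation of each execution time, not the paper's CRPD analysis.

```python
# Sketch: utilisation-based schedulability tests for EDF and FP/RM on
# implicit-deadline tasks. The crpd parameter is a toy penalty added to each
# WCET to mimic pre-emption cost; real CRPD analysis is far more precise.

def utilisation(tasks, crpd=0.0):
    # tasks: list of (wcet, period) pairs
    return sum((c + crpd) / t for c, t in tasks)

def edf_schedulable(tasks, crpd=0.0):
    return utilisation(tasks, crpd) <= 1.0              # EDF bound

def rm_schedulable(tasks, crpd=0.0):
    n = len(tasks)
    return utilisation(tasks, crpd) <= n * (2 ** (1.0 / n) - 1)  # FP/RM bound

tasks = [(2, 10), (3, 15), (5, 20)]                     # (WCET, period)
assert abs(utilisation(tasks) - 0.65) < 1e-9
assert rm_schedulable(tasks) and edf_schedulable(tasks)
# a pre-emption penalty pushes FP past its bound while EDF still fits,
# mirroring how CRPD narrows (but here does not erase) EDF's headroom
assert not rm_schedulable(tasks, crpd=1.5)
assert edf_schedulable(tasks, crpd=1.5)
```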

  1. Sample Acquisition and Caching architecture for the Mars Sample Return mission

    Science.gov (United States)

    Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.

    This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, we gave the reduction of mission risk greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.

  2. Incorporating cache management behavior into seed dispersal: the effect of pericarp removal on acorn germination.

    Directory of Open Access Journals (Sweden)

    Xianfeng Yi

    Full Text Available Selecting seeds for long-term storage is a key factor for food hoarding animals. Siberian chipmunks (Tamias sibiricus) remove the pericarp and scatter hoard sound acorns of Quercus mongolica over those that are insect-infested to maximize returns from caches. We have no knowledge of whether these chipmunks remove the pericarp from acorns of other species of oaks and if this behavior benefits seedling establishment. In this study, we tested whether Siberian chipmunks engage in this behavior with acorns of three other Chinese oak species, Q. variabilis, Q. aliena and Q. serrata var. brevipetiolata, and how the dispersal and germination of these acorns are affected. Our results show that when chipmunks were provided with sound and infested acorns of Quercus variabilis, Q. aliena and Q. serrata var. brevipetiolata, the two types were equally harvested and dispersed. This preference suggests that Siberian chipmunks are incapable of distinguishing between sound and insect-infested acorns. However, Siberian chipmunks removed the pericarp from acorns of these three oak species prior to dispersing and caching them. Consequently, significantly more sound acorns were scatter hoarded and more infested acorns were immediately consumed. Additionally, indoor germination experiments showed that pericarp removal by chipmunks promoted acorn germination while artificial removal showed no significant effect. Our results show that pericarp removal allows Siberian chipmunks to effectively discriminate against insect-infested acorns and may represent an adaptive behavior for cache management. Because of the germination patterns of pericarp-removed acorns, we argue that the foraging behavior of Siberian chipmunks could have potential impacts on the dispersal and germination of acorns from various oak species.

  3. Cliff swallows Petrochelidon pyrrhonota as bioindicators of environmental mercury, Cache Creek Watershed, California

    Science.gov (United States)

    Hothem, Roger L.; Trejo, Bonnie S.; Bauer, Marissa L.; Crayon, John J.

    2008-01-01

    To evaluate mercury (Hg) and other element exposure in cliff swallows (Petrochelidon pyrrhonota), eggs were collected from 16 sites within the mining-impacted Cache Creek watershed, Colusa, Lake, and Yolo counties, California, USA, in 1997-1998. Nestlings were collected from seven sites in 1998. Geometric mean total Hg (THg) concentrations ranged from 0.013 to 0.208 µg/g wet weight (ww) in cliff swallow eggs and from 0.047 to 0.347 µg/g ww in nestlings. Mercury detected in eggs generally followed the spatial distribution of Hg in the watershed based on proximity to both anthropogenic and natural sources. Mean Hg concentrations in samples of eggs and nestlings collected from sites near Hg sources were up to five and seven times higher, respectively, than in samples from reference sites within the watershed. Concentrations of other detected elements, including aluminum, beryllium, boron, calcium, manganese, strontium, and vanadium, were more frequently elevated at sites near Hg sources. Overall, Hg concentrations in eggs from Cache Creek were lower than those reported in eggs of tree swallows (Tachycineta bicolor) from highly contaminated locations in North America. Total Hg concentrations were lower in all Cache Creek egg samples than adverse effects levels established for other species. Total Hg concentrations in bullfrogs (Rana catesbeiana) and foothill yellow-legged frogs (Rana boylii) collected from 10 of the study sites were both positively correlated with THg concentrations in cliff swallow eggs. Our data suggest that cliff swallows are reliable bioindicators of environmental Hg. © Springer Science+Business Media, LLC 2007.

  4. Modelado analítico del comportamiento de memorias caché

    OpenAIRE

    Fraguela, Basilio B.

    2011-01-01

    [Abstract] The main bottleneck limiting the computation rates that current systems can achieve lies in the growing speed gap between the processor and memory. To respond to this problem, computers have been equipped with a hierarchy of memory levels, in which the levels closest to the processor, the cache memories, play a fundamental role. The most typical approaches for the study of these memories, simulations guided...

  5. Single-Producer/Single-Consumer Queues on Shared Cache Multi-Core Systems

    CERN Document Server

    Torquati, Massimo

    2010-01-01

    Using efficient point-to-point communication channels is critical for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. This report discusses in detail several implementations of the wait-free Single-Producer/Single-Consumer queue (SPSC), and presents a novel and efficient algorithm for the implementation of an unbounded wait-free SPSC queue (uSPSC). The correctness proof of the new algorithm, and several performance measurements based on simple synthetic benchmarks and microbenchmarks, are also discussed.
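The index discipline behind a bounded SPSC queue can be sketched briefly. The report's wait-free algorithms depend on memory-ordering guarantees that Python does not expose, so this is only an illustration of the core invariant: the producer writes only `tail`, the consumer writes only `head`, and neither modifies the other's index.

```python
# Illustrative bounded SPSC ring buffer (single-threaded demo of the index
# discipline only; not a faithful wait-free implementation).

class SPSCQueue:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)   # one slot wasted to tell full from empty
        self.head = 0                        # owned by the consumer
        self.tail = 0                        # owned by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                     # full: producer backs off
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:
            return None                      # empty: consumer backs off
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SPSCQueue(4)
for i in range(4):
    assert q.push(i)
assert not q.push(99)                        # capacity reached
assert [q.pop() for _ in range(4)] == [0, 1, 2, 3]
```

Because each index has exactly one writer, no lock is needed between one producer and one consumer; the real difficulty, which the report addresses, is ordering the buffer write relative to the index update.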

  6. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  7. Mercury and Methylmercury concentrations and loads in Cache Creek Basin, California, January 2000 through May 2001

    Science.gov (United States)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darrell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and mass loads of total mercury and methylmercury in streams draining abandoned mercury mines and near geothermal discharge in Cache Creek Basin, California, were measured during a 17-month period from January 2000 through May 2001. Rainfall and runoff averages during the study period were lower than long-term averages. Mass loads of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, were generally the highest during or after winter rainfall events. During the study period, mass loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas because of a lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a source of mercury and methylmercury to downstream receiving bodies of water such as the Delta of the San Joaquin and Sacramento Rivers. Much of the mercury in these sediments was deposited over the last 150 years by erosion and stream discharge from abandoned mines or by continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas. These constituents included aqueous concentrations of boron, chloride, lithium, and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges were enriched with more oxygen-18 relative to oxygen-16 than meteoric waters, whereas the enrichment by stable isotopes of water from much of the runoff from abandoned mines was similar to that of meteoric water. Geochemical signatures from stable isotopes and trace-element concentrations may be useful as tracers of total mercury or methylmercury from specific locations; however, mercury and methylmercury are not conservatively transported. A distinct mixing trend of

  8. An Efficient Low Complexity Low Latency Architecture for Matching of Data Encoded With Error Correcting Code Using a Cache Memory

    Directory of Open Access Journals (Sweden)

    Aswathi D

    2016-10-01

    Full Text Available An efficient architecture for matching data encoded with an error-correcting code using a cache memory is presented in brief. The cache memory reduces both latency and complexity considerably, and the architecture further reduces dynamic power without affecting timing. For the comparison of data, the Hamming distance alone is used to check whether the incoming data match the data kept in main memory. Unlike the butterfly-formed weight accumulator of previous work, no other mechanism is needed for calculating the Hamming distance.
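The matching step can be modelled in software, though the paper's contribution is a hardware architecture. The sketch below only shows the principle: an ECC-encoded word matches a stored (possibly corrupted) word when their Hamming distance is within the code's correction radius. The word width and radius are illustrative assumptions.

```python
# Toy model of Hamming-distance matching of ECC-encoded words. A "hit" is
# declared when the query differs from a stored word in at most `radius`
# bit positions (the code's error-correction capability).

def hamming(a, b):
    return bin(a ^ b).count("1")   # XOR, then population count

def match(query, stored_words, radius=1):
    return [w for w in stored_words if hamming(query, w) <= radius]

cache = [0b10110010, 0b01100001, 0b11111111]
assert match(0b10110010, cache) == [0b10110010]   # exact hit
assert match(0b10110011, cache) == [0b10110010]   # hit despite one flipped bit
assert match(0b00000000, cache) == []             # no word within the radius
```

In hardware the XOR-and-popcount pair becomes an XOR stage feeding a weight accumulator; the architecture in the abstract is about doing that comparison cheaply.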

  9. Non-Toxic Metabolic Management of Metastatic Cancer in VM Mice: Novel Combination of Ketogenic Diet, Ketone Supplementation, and Hyperbaric Oxygen Therapy

    OpenAIRE

    2015-01-01

    The Warburg effect and tumor hypoxia underlie a unique cancer metabolic phenotype characterized by glucose dependency and aerobic fermentation. We previously showed that two non-toxic metabolic therapies - the ketogenic diet with concurrent hyperbaric oxygen (KD+HBOT) and dietary ketone supplementation - could increase survival time in the VM-M3 mouse model of metastatic cancer. We hypothesized that combining these therapies could provide an even greater therapeutic benefit in this model. Mic...

  10. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  11. Soldados Remen: Interacción social en el grupo de baile Caché

    Directory of Open Access Journals (Sweden)

    Enrique Esqueda

    2013-01-01

    Full Text Available En este estudio se examinaron los procesos de interacción social en el grupo Caché en una colonia de la delegación Iztacalco de la Ciudad de México, del año 2003 al 2007. La metodología incluyó observación participativa, aplica- ción de entrevistas y encuestas a integrantes y ex integrantes, reconstrucción de la historia de la agrupación, análisis de su ambiente social y cultural, incluidas sus reglas, liderazgos, conflictos y escisiones, empleando los postulados del sociólogo estadunidense William Foote Whyte, de la escuela de Chicago. Se concluyó que Caché presenta una forma intermedia entre un grupo de esquina y un club social; que la pertenencia a él puede explicarse como una estrategia de sostenimiento moral y material en un entorno social desfavorable para el desarrollo integral de la persona; y que supuso prestigio y estatus en la vida comunitaria de sus integrantes.

  12. Detection of Lead (Pb) in Three Environmental Matrices of the Cache River Watershed, Arkansas.

    Science.gov (United States)

    Kilmer, Mary K; Bouldin, Jennifer L

    2016-06-01

    Water bodies contaminated with lead (Pb) represent a considerable threat to both human and environmental health. The Cache River, located in northeastern Arkansas, has been listed as impaired on the 303(d) list due to Pb contamination. However, historical data for the watershed are limited in both the waterways sampled and the analyses performed. This study measures concentrations of Pb in three environmental matrices of the Cache River Watershed (CRW): dissolved in the water column, total Pb (dissolved + particulate), and sediment-bound Pb. A variety of waterways were sampled, including main channel and tributary sites. Frequency of detection and mean concentrations were compared to values for the entire Lower Mississippi River Watershed (LMRW). In general, no significant differences were found for the CRW when compared to the LMRW, with the exception of total Pb, which was detected more frequently but at lower concentrations in the CRW than in the LMRW, and sediment Pb, which was detected at a significantly lower frequency in the CRW than in the LMRW.

  13. A Unified Buffering Management with Set Divisible Cache for PCM Main Memory

    Institute of Scientific and Technical Information of China (English)

    Mei-Ying Bian; Su-Kyung Yoon; Jeong-Geun Kim; Sangjae Nam; Shin-Dug Kim

    2016-01-01

    This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set divisible last-level cache (LLC). To achieve high performance similar to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) comprises dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), and a set divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using our proposed SABU, which can significantly reduce the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency to as little as 0.43 times that of conventional DRAM main memory. Meanwhile, the average memory energy consumption can be reduced by 19.7%.

  14. Fully De-Amortized Cuckoo Hashing for Cache-Oblivious Dictionaries and Multimaps

    CERN Document Server

    Goodrich, Michael T; Mitzenmacher, Michael; Thaler, Justin

    2011-01-01

    A dictionary (or map) is a key-value store that requires all keys be unique, and a multimap is a key-value store that allows for multiple values to be associated with the same key. We design hashing-based indexing schemes for dictionaries and multimaps that achieve worst-case optimal performance for lookups and updates, with a small or negligible probability that the data structure will require a rehash operation, depending on whether we are working in the external-memory (I/O) model or one of the well-known versions of the Random Access Machine (RAM) model. One of the main features of our constructions is that they are fully de-amortized, meaning that their performance bounds hold without one having to tune the constructions with certain performance parameters, such as the constant factors in the exponents of failure probabilities or, in the case of the external-memory model, the size of blocks or cache lines and the size of internal memory (i.e., our external-memory algorithms are cache-oblivious). ...
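The base scheme the paper builds on can be sketched for the dictionary case: two tables, two hash functions, and evict-and-relocate ("kick") on collision, which gives worst-case two probes per lookup. The paper's actual contributions (de-amortization, stashes, external-memory layout) are not reproduced here, and the hash functions are toy choices over integer keys.

```python
# Minimal cuckoo-hashing sketch. Each key has one candidate slot per table;
# an insert that finds its slot occupied evicts the occupant, which then
# retries in the other table, up to max_kicks displacements.

class CuckooDict:
    def __init__(self, size=11, max_kicks=50):
        self.size, self.max_kicks = size, max_kicks
        self.t = [[None] * size, [None] * size]       # two hash tables

    def _h(self, i, key):
        return key % self.size if i == 0 else (key // 7) % self.size

    def get(self, key):
        for i in (0, 1):                              # at most two probes
            slot = self.t[i][self._h(i, key)]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def put(self, key, val):
        entry, i = (key, val), 0
        for _ in range(self.max_kicks):
            pos = self._h(i, entry[0])
            entry, self.t[i][pos] = self.t[i][pos], entry  # place, evicting occupant
            if entry is None:
                return True
            i ^= 1                                    # evicted entry tries the other table
        return False                                  # a real implementation would rehash

d = CuckooDict()
for k in (0, 11, 22, 33):                             # all collide in table 0
    assert d.put(k, k * 10)
assert d.get(22) == 220 and d.get(7) is None
```

The `return False` path is exactly the rare rehash event whose probability, and amortized cost, the paper's de-amortized constructions control.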

  15. Coherent Route Cache In Dynamic Source Routing For Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Sofiane Boukli Hacene

    2012-02-01

    Full Text Available An ad hoc network is a set of nodes that are able to move and can be connected in an arbitrary manner. Each node acts as a router and communicates using multi-hop wireless links. Nodes within ad hoc networks need efficient dynamic routing protocols to facilitate communication. An efficient routing protocol can provide significant benefits to mobile ad hoc networks, in terms of both performance and reliability. Several routing protocols exist to allow and facilitate communication between mobile nodes. One of the promising routing protocols is DSR (Dynamic Source Routing). This protocol presents some problems. The major problem in DSR is that the route cache contains some inconsistent routing information due to node mobility. This problem generates longer delays for data packets. In order to reduce these delays, we propose a technique based on cleaning the route caches of nodes within an active route. Our approach has been implemented and tested in the well-known network simulator GloMoSim, and the simulation results show that protocol performance has been enhanced.
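The cache-cleaning idea can be sketched as a purge over cached source routes: once a link is known to be broken, every cached route that traverses it is removed so it cannot be reused. The route representation and function name below are illustrative, not DSR's actual data structures.

```python
# Hedged sketch of route-cache cleaning: drop every cached source route that
# uses a broken directed link (u, v). Routes are tuples of node ids.

def purge_broken_link(route_cache, broken):
    def uses(route):
        return any((route[i], route[i + 1]) == broken
                   for i in range(len(route) - 1))
    return [r for r in route_cache if not uses(r)]

cache = [("A", "B", "C", "D"), ("A", "E", "D"), ("A", "B", "F")]
cleaned = purge_broken_link(cache, ("B", "C"))
assert cleaned == [("A", "E", "D"), ("A", "B", "F")]
```

Stale routes are the source of the extra delays the abstract mentions: a packet sent along a cached route with a broken link must fail, trigger a route error, and be re-routed.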

  16. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.
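The kind of saving TVC targets can be shown with a toy stack machine: computing x*x either by loading the local variable twice or by caching the transient on the stack with a DUP, which removes one (simulated) memory access. The instruction names mimic JVM bytecode, but the machine itself is an illustration, not the paper's transform.

```python
# Toy stack machine: "load" pushes a local (counted as a memory access),
# "dup" caches the transient by duplicating the top of stack, "mul" multiplies
# the top two values. Comparing the two programs shows the access saved.

def run(program, locals_):
    stack, mem_accesses = [], 0
    for op in program:
        if op[0] == "load":
            stack.append(locals_[op[1]])
            mem_accesses += 1
        elif op[0] == "dup":
            stack.append(stack[-1])
        elif op[0] == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop(), mem_accesses

naive = [("load", 0), ("load", 0), ("mul",)]   # x*x via two loads
cached = [("load", 0), ("dup",), ("mul",)]     # x*x via one load + DUP
assert run(naive, [7]) == (49, 2)
assert run(cached, [7]) == (49, 1)             # same result, one fewer access
```

The instruction reordering and extra stack-manipulation operations the abstract mentions are what it takes to keep such transients on the stack across larger expressions.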

  17. Constraining models of postglacial rebound using space geodesy: a detailed assessment of model ICE-5G (VM2) and its relatives

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. Richard

    2010-05-01

    Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which well fits many hundred Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10²¹ Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10²¹ Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes rotational feedback, Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr⁻¹. The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr⁻¹ in ICE-4G and 3 mm yr⁻¹ in ICE-5G, in

  18. ARC-VM: An architecture real options complexity-based valuation methodology for military systems-of-systems acquisitions

    Science.gov (United States)

    Domercant, Jean Charles

    The combination of today's national security environment and mandated acquisition policies makes it necessary for military systems to interoperate with each other to greater degrees. This growing interdependency results in complex Systems-of-Systems (SoS) that only continue to grow in complexity to meet evolving capability needs. Thus, timely and affordable acquisition becomes more difficult, especially in the face of mounting budgetary pressures. To counter this, architecting principles must be applied to SoS design. The research objective is to develop an Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived complexity measure provides architects with an objective measure of complexity that focuses on relevant complex system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, as well as programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and usefulness of the resulting architecture complexity & Real Options based valuation methodology. Different

  19. Assessment of watershed vulnerability to climate change for the Uinta-Wasatch-Cache and Ashley National Forests, Utah

    Science.gov (United States)

    Janine Rice; Tim Bardsley; Pete Gomben; Dustin Bambrough; Stacey Weems; Sarah Leahy; Christopher Plunkett; Charles Condrat; Linda A. Joyce

    2017-01-01

    Watersheds on the Uinta-Wasatch-Cache and Ashley National Forests provide many ecosystem services, and climate change poses a risk to these services. We developed a watershed vulnerability assessment to provide scientific information for land managers facing the challenge of managing these watersheds. Literature-based information and expert elicitation is used to...

  20. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

    Tannins are very common among plant seeds but their effects on the fate of seeds, for example, via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that only differed in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species (Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, the tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable tannin vs condensed tannin) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentrations had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far they will transport seeds.

  1. Hypergraph-Partitioning-Based Models and Methods for Exploiting Cache Locality in Sparse-Matrix Vector Multiplication

    CERN Document Server

    Akbudak, Kadir; Aykanat, Cevdet

    2012-01-01

    The sparse matrix-vector multiplication (SpMxV) is a kernel operation widely used in iterative linear solvers. The same sparse matrix is multiplied by a dense vector repeatedly in these solvers. Matrices with irregular sparsity patterns make it difficult to utilize cache locality effectively in SpMxV computations. In this work, we investigate single- and multiple-SpMxV frameworks for exploiting cache locality in SpMxV computations. For the single-SpMxV framework, we propose two cache-size-aware top-down row/column-reordering methods based on 1D and 2D sparse matrix partitioning by utilizing the column-net and enhancing the row-column-net hypergraph models of sparse matrices. The multiple-SpMxV framework depends on splitting a given matrix into a sum of multiple nonzero-disjoint matrices so that the SpMxV operation is performed as a sequence of multiple input- and output- dependent SpMxV operations. For an effective matrix splitting required in this framework, we propose a cache- size-aware top-down approach b...
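    The abstract above concerns cache locality in repeated sparse matrix-vector products. As a minimal illustration of the SpMxV kernel itself, here is a generic CSR-format sketch in Python; it is not the authors' hypergraph-partitioning method, and all names and the example matrix are illustrative:

    ```python
    # Toy CSR (compressed sparse row) sparse matrix-vector multiply.
    # Generic illustration of the SpMxV kernel discussed in the abstract,
    # not the authors' hypergraph-partitioning reordering method.

    def spmv_csr(values, col_idx, row_ptr, x):
        """y = A @ x for a CSR matrix A."""
        n = len(row_ptr) - 1
        y = [0.0] * n
        for i in range(n):
            acc = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                # Cache behavior is dominated by the indexed reads x[col_idx[k]]:
                # reordering rows/columns so nearby rows touch nearby columns
                # keeps these accesses within a few cache lines.
                acc += values[k] * x[col_idx[k]]
            y[i] = acc
        return y

    # 3x3 example matrix [[2,0,1],[0,3,0],[4,0,5]] in CSR form:
    values  = [2.0, 1.0, 3.0, 4.0, 5.0]
    col_idx = [0,   2,   1,   0,   2  ]
    row_ptr = [0, 2, 3, 5]
    print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
    ```

    The irregular `col_idx` indirection is exactly why sparsity-pattern-aware reordering pays off in iterative solvers that repeat this kernel thousands of times.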

  2. An Efficient Cluster Based Web Object Filters From Web Pre-Fetching And Web Caching On Web User Navigation

    Directory of Open Access Journals (Sweden)

    A. K. Santra

    2012-05-01

    Full Text Available The World Wide Web is a distributed internet system that provides dynamic and interactive services, including online tutoring, video/audio conferencing, and e-commerce, which place heavy demand on network resources and web servers. Web traffic has increased rapidly over the past few years, and network performance has slowed as a result. Web pre-fetching and caching are among the most effective solutions for reducing web access latency and improving quality of service. An existing model presented a cluster-based pre-fetching scheme that identified clusters of correlated web pages based on users' access patterns. Web pre-fetching and caching yield significant improvements in the performance of web infrastructure. In this paper, we present an efficient cluster-based Web Object Filter scheme, built from web pre-fetching and web caching, to evaluate web user navigation patterns and user preferences in product search. Web page objects are clustered from pre-fetched and cached contents. User navigation is evaluated from the web cluster objects with similarity retrieval in subsequent user sessions. Web Object Filters are built by interpreting the cluster web pages related to unique users and discarding redundant pages. Ranking is done on users' web page product preferences over multiple sessions of each individual user. Performance is measured in terms of objective function, number of clusters, and cluster accuracy.

  3. Outliers Based Caching of Data Segment with Synchronization over Video-on Demand using P2P Computing

    Directory of Open Access Journals (Sweden)

    M. Narayanan

    2014-05-01

    Full Text Available Live streaming now plays an important role in many fields of real-time processing, such as education and research. When videos are kept only at a central server, client access to that server can become a performance bottleneck, so clients download videos and forward them to other requesting clients. A mesh network suits live streaming because there is no master/slave relationship among clients; in P2P (peer-to-peer) live streaming, each peer holds video segments to satisfy the needs of requesting peers. When a cache is full, data segments are replaced using various replacement algorithms. These algorithms mainly operate on segments at the head of the cache; segments in the tail are never used to satisfy peers, even when they are relevant. The proposed work focuses on the data segments in the tail of the cache for live streaming, called outliers. The tail of the cache is synchronized with neighboring peers with the help of a segment table, which each peer maintains to overcome the unavailability of data segments at the peers. Various tag formats are proposed for representing the tail of the cache. Future work aims to further improve performance.
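    The abstract's core idea is that the cold "tail" of each peer's segment cache should still be advertised to neighbors via a segment table rather than sitting unused. A toy sketch of that idea, assuming an LRU cache and a 50/50 head/tail split; class and method names are illustrative, not from the paper:

    ```python
    from collections import OrderedDict

    # Toy LRU segment cache whose least-recently-used "tail" entries are
    # still advertised to neighboring peers through a segment table.
    # Illustrative sketch only; the split point and names are assumptions.

    class SegmentCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.segments = OrderedDict()   # segment_id -> data, LRU order

        def put(self, seg_id, data):
            if seg_id in self.segments:
                self.segments.move_to_end(seg_id)
            self.segments[seg_id] = data
            if len(self.segments) > self.capacity:
                self.segments.popitem(last=False)  # evict the true LRU entry

        def get(self, seg_id):
            if seg_id in self.segments:
                self.segments.move_to_end(seg_id)
                return self.segments[seg_id]
            return None

        def segment_table(self):
            """IDs of tail (coldest) segments, shared with neighbors so their
            requests can still be served from this peer's cache tail."""
            half = len(self.segments) // 2
            return list(self.segments.keys())[:half]

    cache = SegmentCache(4)
    for i in range(6):
        cache.put(i, f"chunk-{i}")
    print(sorted(cache.segments))   # [2, 3, 4, 5]  (segments 0 and 1 evicted)
    print(cache.segment_table())    # the two coldest cached segments: [2, 3]
    ```

    Exchanging these small tables is cheap compared with re-fetching segments, which is the intuition behind synchronizing cache tails among peers.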

  4. Seed-caching by heteromyid rodents enhances seedling survival of a desert grass, Indian ricegrass (Achnatherum hymenoides)

    Science.gov (United States)

    Seeds of many plant species germinate and establish aggregated clusters of seedlings from shallowly buried seed caches (i.e., scatterhoards) made by granivorous animals. Scatterhoarding by desert heteromyid rodents facilitates the vast majority of seedling recruitment in Indian ricegrass (Achnatheru...

  5. Trace Driven Cache Attack on LBlock Algorithm

    Institute of Scientific and Technical Information of China (English)

    朱嘉良; 韦永壮

    2015-01-01

    LBlock is a lightweight block cipher that has received much attention for its excellent performance on hardware and software platforms. Security evaluation of LBlock has so far relied heavily on traditional mathematical attacks. The cache attack, a type of side-channel attack, has been shown to pose a real threat to implementations of cipher algorithms; among cache attacks, the trace-driven cache attack requires fewer samples and offers higher analysis efficiency. Based on the structure of the cipher and the properties of its key schedule, this paper proposes a trace-driven cache attack on the LBlock algorithm. The attack recovers the full secret key by capturing the side-channel information leaked during cache accesses. Analysis shows that the attack requires about 106 chosen plaintexts and a time complexity of about 27.71 offline encryption operations. Compared with the published side-channel cube attacks on LBlock and the trace-driven cache attack on DES (which, like LBlock, has a Feistel structure), this attack is more effective.

  6. Nutritional ecology of a fossorial herbivore: protein N and energy value of winter caches made by the northern pocket gopher, Thomomys talpoides

    Science.gov (United States)

    Stuebe, Miki M.; Andersen, Douglas C.

    1985-01-01

    Northern pocket gophers (Thomomys talpoides) are fossorial herbivores that excavate belowground plant parts for food. In subalpine areas during autumn and winter, pocket gophers hoard plant parts in caches placed in or under snow. We examined the size and composition of 17 nival caches and tested the hypotheses that (i) cached food can provide complete energy and protein N sustenance during typical periods when burrowing is precluded by soil conditions, and (ii) cached food is a random sample of items encountered by burrowing gophers during tunnel excavation. Our data indicate that caches provide substantially more energy than protein in terms of a pocket gopher's daily maintenance requirements. Nevertheless, quantities stored are sufficient to allow individuals to endure commonly encountered adverse environmental conditions without entering negative energy or protein balance. Analysis of stomach contents and a comparison of cache composition to availability of plant species suggests that gophers consume high-protein items as they are encountered, and store low-protein items in caches.

  7. Utility of whole slide imaging and virtual microscopy in prostate pathology

    DEFF Research Database (Denmark)

    Camparo, Philippe; Egevad, Lars; Algaba, Ferran;

    2012-01-01

    surgical pathology reporting has also been explored. In this review, we discuss the utility and limitations of WSI/VM technology in the histological assessment of specimens from the prostate. Features of WSI/VM that are particularly well suited to assessment of prostate pathology include the ability to examine images at different magnifications as well as to view histology and immunohistochemistry side-by-side on the screen. Use of WSI/VM would also solve the difficulty in obtaining multiple identical copies of small lesions in prostate biopsies for teaching and proficiency testing. It would also permit... and focus only on the interpretation component of competency testing. Other issues limiting the use of digital pathology in prostate pathology include the cost of high quality slide scanners for WSI and high resolution monitors for VM as well as the requirement for fast Internet connection as even a subtle...

  8. A cache optimizing method based on graph coloring

    Institute of Scientific and Technical Information of China (English)

    邓宇; 王蕾; 张明; 龚锐; 郭御风; 窦强

    2012-01-01

    A graph coloring based cache management optimization algorithm, called Cache Coloring, is proposed. The algorithm first partitions program data into data objects according to their memory access behavior. It then partitions the cache into a pseudo register file with aliasing, sized according to the data objects; each pseudo register in this file consists of several cache lines and can hold one data object. Finally, an extended graph coloring register allocation algorithm determines the position of each data object in the cache and the replacement relationships among conflicting objects. Partitioning into data objects divides cache management into two levels: coarse-grained compile-time management of data objects by the compiler, and fine-grained run-time management of cache lines by hardware, so that the advantages of both compiler and hardware are exploited. Cache Coloring is implemented in GCC, and a hardware simulation platform supporting it is built on the SimpleScalar processor simulator. Preliminary experimental results show that Cache Coloring exploits program locality well and reduces the cache miss rate.
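    The allocation step described in the abstract is, at its core, graph coloring: data objects are nodes, objects that are live at the same time share an edge, and colors are cache regions. A minimal greedy-coloring sketch (generic, not the extended GCC allocator from the paper; all names are illustrative):

    ```python
    # Toy greedy graph coloring: assign data objects (nodes) to cache
    # regions (colors) so that objects live at the same time (edges)
    # never share a region. Generic sketch, not the paper's allocator.

    def greedy_color(objects, conflicts):
        """conflicts maps each object to the set of objects live at the same time."""
        region = {}
        for obj in objects:
            # Regions already taken by conflicting, already-placed objects.
            used = {region[n] for n in conflicts.get(obj, set()) if n in region}
            c = 0
            while c in used:        # pick the lowest-numbered free region
                c += 1
            region[obj] = c
        return region

    # A and B are live together; B and C are live together; A and C never overlap,
    # so A and C can safely share cache region 0.
    conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
    print(greedy_color(["A", "B", "C"], conflicts))  # {'A': 0, 'B': 1, 'C': 0}
    ```

    As with register allocation, the traversal order affects how many regions are needed; real allocators order nodes heuristically before coloring.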

  9. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Thomas; Liu Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav [Molecular Dynamics Group, Paul Scherrer Institut, 5232 Villigen (Switzerland)

    2013-03-15

    Velocity map imaging (VMI) is used in mass spectrometry and in angle-resolved photo-electron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors, each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose instead to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, considerably alleviating the numerical effort for the interpretation of VM-images. The obtained results directly yield the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.

  10. Dynamically Translating Binary Code for Multi-Threaded Programs Using Shared Code Cache

    Institute of Scientific and Technical Information of China (English)

    Chia-Lun Liu; Jiunn-Yeu Chen; Wuu Yang; Wei-Chung Hsu

    2014-01-01

    mc2llvm is a process-level ARM-to-x86 binary translator developed in our lab in the past several years. Currently, it is able to emulate single-threaded programs. We extend mc2llvm to emulate multi-threaded programs. Our main task is to reconstruct its architecture for multi-threaded programs. Register mapping, code cache management, and address mapping in mc2llvm have all been modified. In addition, to further speed up the emulation, we collect hot paths, aggressively optimize and generate code for them at run time. Additional threads are used to alleviate the overhead. Thus, when the same hot path is walked through again, the corresponding optimized native code will be executed instead. In our experiments, our system is 8.8X faster than QEMU (quick emulator) on average when emulating the specified benchmarks with 8 guest threads.

  11. Nucleotide sequencing and serologic analysis of Cache Valley virus isolates from the Yucatan Peninsula of Mexico.

    Science.gov (United States)

    Blitvich, Bradley J; Loroño-Pino, Maria A; Garcia-Rejon, Julian E; Farfan-Ale, Jose A; Dorman, Karin S

    2012-08-01

    Nucleotide sequencing was performed on part of the medium and large genome segments of 17 Cache Valley virus (CVV) isolates from the Yucatan Peninsula of Mexico. Alignment of these sequences to all other sequences in the Genbank database revealed that they have greatest nucleotide identity (97-98 %) with the equivalent regions of Tlacotalpan virus (TLAV), which is considered to be a variety of CVV. Next, cross-plaque reduction neutralization tests (PRNTs) were performed using sera from mice that had been inoculated with a representative isolate from the Yucatan Peninsula (CVV-478) or the prototype TLAV isolate (61-D-240). The PRNT titers exhibited a twofold difference in one direction and no difference in the other direction suggesting that CVV-478 and 61-D-240 belong to the same CVV subtype. In conclusion, we demonstrate that the CVV isolates from the Yucatan Peninsula of Mexico are genetically and antigenically similar to the prototype TLAV isolate.

  12. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  13. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  14. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    Science.gov (United States)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Based on the high computational demand of CFD such simulations in 3D need a computation time of years so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well established dambreak test case.
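    The caching idea behind gpuSPHASE (staging particle data in CUDA shared memory once per block and reusing it across many thread interactions) can be mimicked in plain Python with explicit tiling. A toy sketch of the pattern only, with an assumed tile size and no relation to the actual solver or its CUDA kernels:

    ```python
    # Toy tiled pairwise interaction, mimicking the shared-memory caching
    # pattern: each tile of particles is "loaded" once and reused for every
    # particle, just as a CUDA block stages a tile in shared memory instead
    # of re-reading global memory. Illustrative sketch; TILE is an assumption.

    TILE = 4

    def pairwise_sums(xs):
        """For each particle, sum |xi - xj| over all particles, tile by tile."""
        out = [0.0] * len(xs)
        for start in range(0, len(xs), TILE):
            tile = xs[start:start + TILE]       # one "shared memory" load
            for i, xi in enumerate(xs):         # tile reused for every particle
                out[i] += sum(abs(xi - xj) for xj in tile)
        return out

    print(pairwise_sums([0.0, 1.0, 3.0]))  # [4.0, 3.0, 5.0]
    ```

    On a GPU the payoff is bandwidth: each particle's data is read from slow memory once per tile instead of once per pairwise interaction.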

  15. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For more in-depth, scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available at the following website: .

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

    In this paper we present an implicit dictionary with the working set property, i.e. a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary... and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and...
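    The working set property itself can be illustrated with a far simpler (non-implicit, non-cache-oblivious) structure: a move-to-front list, where the cost of searching for a key equals one plus the number of distinct keys searched since it was last accessed. This sketch shows only the property, not the paper's O(log ℓ) structure:

    ```python
    # Toy move-to-front list illustrating the working set property:
    # searching for x costs its position in the list, and that position is
    # bounded by the number of distinct keys searched since x was last
    # searched. Not the authors' implicit dictionary, just the property.

    class WorkingSetList:
        def __init__(self, keys):
            self.keys = list(keys)

        def search(self, x):
            """Return the number of comparisons used, then move x to the front."""
            i = self.keys.index(x)
            self.keys.insert(0, self.keys.pop(i))
            return i + 1

    d = WorkingSetList(["a", "b", "c", "d"])
    d.search("d")            # cold search: 4 comparisons
    print(d.search("d"))     # searched again immediately: 1 comparison
    ```

    The paper's contribution is achieving the logarithmic analogue of this behavior in a plain array with no extra space, in the cache-oblivious model.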

  18. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
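    The eviction idea in the abstract, evicting by the stream-assigned expiration timestamp rather than by arrival time alone, can be sketched with a small expiry-ordered cache. This is a toy illustration under assumed names, not C-SPARQL or the authors' framework (and it omits their "semantic importance" ranking):

    ```python
    import heapq

    # Toy cache of RDF-like triples ordered by expiration timestamp.
    # Eviction uses the source-assigned expiration time, not arrival time.
    # Illustrative sketch only; class and field names are assumptions.

    class ExpiringCache:
        def __init__(self):
            self.heap = []     # min-heap of (expires_at, triple)
            self.live = {}     # triple -> (arrived_at, expires_at)

        def insert(self, triple, arrived_at, expires_at):
            heapq.heappush(self.heap, (expires_at, triple))
            self.live[triple] = (arrived_at, expires_at)

        def evict_expired(self, now):
            """Drop every cached triple whose expiration timestamp has passed."""
            evicted = []
            while self.heap and self.heap[0][0] <= now:
                _, triple = heapq.heappop(self.heap)
                if triple in self.live:
                    evicted.append(triple)
                    del self.live[triple]
            return evicted

    c = ExpiringCache()
    c.insert(("s", "p", "o1"), arrived_at=0, expires_at=5)
    c.insert(("s", "p", "o2"), arrived_at=1, expires_at=3)
    print(c.evict_expired(now=4))   # only the expired triple: [('s', 'p', 'o2')]
    print(sorted(c.live))           # [('s', 'p', 'o1')]
    ```

    A pure arrival-time sliding window would have kept or dropped both triples together; keying eviction on expiration handles streams where the window size does not match the expiration period.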

  19. Using dCache in Archiving Systems oriented to Earth Observation

    Science.gov (United States)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The object of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies that are mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies best suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular dCache, which aims to provide a file system tree view of the data repository, exchanging data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of large computing centres and universities with large amounts of data, which pooled their efforts and founded EMI (European Middleware Initiative). dCache is now mature enough to be deployed, and is used by several research centres of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not so far been used in Earth Observation, and the results of the study are summarized in this article, focusing on its capabilities, over a simulated environment, to meet the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as providing maximum quality of storage and dissemination services at minimum cost.

  20. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California.

    Science.gov (United States)

    Domagalski, Joseph L; Alpers, Charles N; Slotton, Darell G; Suchanek, Thomas H; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of 18O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  1. Simultaneous measurement of tabun, sarin, soman, cyclosarin, VR, VX, and VM adducts to tyrosine in blood products by isotope dilution UHPLC-MS/MS.

    Science.gov (United States)

    Crow, Brian S; Pantazides, Brooke G; Quiñones-González, Jennifer; Garton, Joshua W; Carter, Melissa D; Perez, Jonas W; Watson, Caroline M; Tomcik, Dennis J; Crenshaw, Michael D; Brewer, Bobby N; Riches, James R; Stubbs, Sarah J; Read, Robert W; Evans, Ronald A; Thomas, Jerry D; Blake, Thomas A; Johnson, Rudolph C

    2014-10-21

    This work describes a new specific, sensitive, and rapid stable isotope dilution method for the simultaneous detection of the organophosphorus nerve agents (OPNAs) tabun (GA), sarin (GB), soman (GD), cyclosarin (GF), VR, VX, and VM adducts to tyrosine (Tyr). Serum, plasma, and lysed whole blood samples (50 μL) were prepared by protein precipitation followed by digestion with Pronase. Specific Tyr adducts were isolated from the digest by a single solid phase extraction (SPE) step, and the analytes were separated by reversed-phase ultra high performance liquid chromatography (UHPLC) gradient elution in less than 2 min. Detection was performed on a triple quadrupole tandem mass spectrometer using time-triggered selected reaction monitoring (SRM) in positive electrospray ionization (ESI) mode. The calibration range was characterized from 0.100-50.0 ng/mL for GB- and VR-Tyr and 0.250-50.0 ng/mL for GA-, GD-, GF-, and VX/VM-Tyr (R(2) ≥ 0.995). Inter- and intra-assay precision had coefficients of variation of ≤17 and ≤10%, respectively, and the measured concentration accuracies of spiked samples were within 15% of the targeted value for multiple spiking levels. The limit of detection was calculated to be 0.097, 0.027, 0.018, 0.074, 0.023, and 0.083 ng/mL for GA-, GB-, GD-, GF-, VR-, and VX/VM-Tyr, respectively. A convenience set of 96 serum samples with no known nerve agent exposure was screened and revealed no baseline values or potential interferences. This method provides a simple and highly specific diagnostic tool that may extend the time postevent that a confirmation of nerve agent exposure can be made with confidence.

  2. Images

    Data.gov (United States)

    National Aeronautics and Space Administration — Images for the website main pages and all configurations. The upload and access points for the other images are: Website Template RSW images BSCW Images HIRENASD...

  3. Intramural optical mapping of V(m) and Ca(i)2+ during long-duration ventricular fibrillation in canine hearts.

    Science.gov (United States)

    Kong, Wei; Ideker, Raymond E; Fast, Vladimir G

    2012-03-15

    Intramural gradients of intracellular Ca2+ (Ca(i)2+) handling, Ca(i)2+ oscillations, and Ca(i)2+ transient (CaT) alternans may be important in long-duration ventricular fibrillation (LDVF). However, previous studies of Ca(i)2+ handling have been limited to recordings from the heart surface during short-duration ventricular fibrillation. To examine whether abnormalities of intramural Ca(i)2+ handling contribute to LDVF, we measured membrane voltage (V(m)) and Ca(i)2+ during pacing and LDVF in six perfused canine hearts using five eight-fiber optrodes. Measurements were grouped into epicardial, midwall, and endocardial layers. We found that during pacing at 350-ms cycle length, CaT duration was slightly longer (by ≃10%) in endocardial layers than in epicardial layers, whereas action potential duration (APD) exhibited no difference. Rapid pacing at 150-ms cycle length caused alternans in both APD (APD-ALT) and CaT amplitude (CaA-ALT) without significant transmural differences. For 93% of optrode recordings, CaA-ALT was transmurally concordant, whereas APD-ALT was either concordant (36%) or discordant (54%), suggesting that APD-ALT was not caused by CaA-ALT. During LDVF, V(m) and Ca(i)2+ progressively desynchronized when not every action potential was followed by a CaT. Such desynchronization developed faster in the epicardium than in the other layers. In addition, CaT duration strongly increased (by ∼240% at 5 min of LDVF), whereas APD shortened (by ∼17%). CaT rises always followed V(m) upstrokes during pacing and LDVF. In conclusion, the fact that V(m) upstrokes always preceded CaTs indicates that spontaneous Ca(i)2+ oscillations in the working myocardium were not likely the reason for LDVF maintenance. Strong V(m)-Ca(i)2+ desynchronization and the occurrence of long CaTs during LDVF indicate severely impaired Ca(i)2+ handling and may potentially contribute to LDVF maintenance.

  4. Tracking Seed Fates of Tropical Tree Species: Evidence for Seed Caching in a Tropical Forest in North-East India.

    Science.gov (United States)

    Sidhu, Swati; Datta, Aparajita

    2015-01-01

    Rodents affect the post-dispersal fate of seeds by acting either as on-site seed predators or as secondary dispersers when they scatter-hoard seeds. The tropical forests of north-east India harbour a high diversity of little-studied terrestrial murid and hystricid rodents. We examined the role played by these rodents in determining the seed fates of tropical evergreen tree species in a forest site in north-east India. We selected ten tree species (3 mammal-dispersed and 7 bird-dispersed) that varied in seed size and followed the fates of 10,777 tagged seeds. We used camera traps to determine the identity of rodent visitors, visitation rates and their seed-handling behavior. Seeds of all tree species were handled by at least one rodent taxon. Overall rates of seed removal (44.5%) were much higher than direct on-site seed predation (9.9%), but seed-handling behavior differed between the terrestrial rodent groups: two species of murid rodents removed and cached seeds, and two species of porcupines were on-site seed predators. In addition, a true cricket, Brachytrupes sp., cached seeds of three species underground. We found 309 caches formed by the rodents and the cricket; most were single-seeded (79%) and seeds were moved up to 19 m. Over 40% of seeds were re-cached from primary cache locations, while about 12% germinated in the primary caches. Seed removal rates varied widely amongst tree species, from 3% in Beilschmiedia assamica to 97% in Actinodaphne obovata. Seed predation was observed in nine species. Chisocheton cumingianus (57%) and Prunus ceylanica (25%) had moderate levels of seed predation while the remaining species had less than 10% seed predation. We hypothesized that seed traits that provide information on resource quantity would influence rodent choice of a seed, while traits that determine resource accessibility would influence whether seeds are removed or eaten. Removal rates significantly decreased (p seed size. Removal rates were significantly

  6. CSU Final Report on the Math/CS Institute CACHE: Communication-Avoiding and Communication-Hiding at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State University

    2014-06-10

    The CACHE project entails researching and developing new versions of numerical algorithms that result in data reuse that can be scheduled in a communication avoiding way. Since memory accesses take more time than any computation and require the most power, the focus on turning data reuse into data locality is critical to improving performance and reducing power usage in scientific simulations. This final report summarizes the accomplishments at Colorado State University as part of the CACHE project.

  7. Externally applied electric fields up to 1.6 × 10(5) V/m do not affect the homogeneous nucleation of ice in supercooled water.

    Science.gov (United States)

    Stan, Claudiu A; Tang, Sindy K Y; Bishop, Kyle J M; Whitesides, George M

    2011-02-10

    The freezing of water can initiate at electrically conducting electrodes kept at a high electric potential or at charged electrically insulating surfaces. The microscopic mechanisms of these phenomena are unknown, but they must involve interactions between water molecules and electric fields. This paper investigates the effect of uniform electric fields on the homogeneous nucleation of ice in supercooled water. Electric fields were applied across drops of water immersed in a perfluorinated liquid using a parallel-plate capacitor; the drops traveled in a microchannel and were supercooled until they froze due to the homogeneous nucleation of ice. The distribution of freezing temperatures of drops depended on the rate of nucleation of ice, and the sensitivity of measurements allowed detection of changes by a factor of 1.5 in the rate of nucleation. Sinusoidal alternation of the electric field at frequencies from 3 to 100 kHz prevented free ions present in water from screening the electric field in the bulk of drops. Uniform electric fields in water with amplitudes up to (1.6 ± 0.4) × 10(5) V/m neither enhanced nor suppressed the homogeneous nucleation of ice. Estimations based on thermodynamic models suggest that fields in the range of 10(7)-10(8) V/m might cause an observable increase in the rate of nucleation.

  8. Inhibition of BCL-2 leads to increased apoptosis and delayed neuronal differentiation in human ReNcell VM cells in vitro.

    Science.gov (United States)

    Fröhlich, Michael; Jaeger, Alexandra; Weiss, Dieter G; Kriehuber, Ralf

    2016-02-01

BCL-2 is a multifunctional protein involved in the regulation of apoptosis, cell cycle progression and neural developmental processes. Its function in the latter process is not well understood and needs further elucidation. Therefore, we characterized the protein expression kinetics of BCL-2 and associated regulatory proteins of the intrinsic apoptosis pathway during the process of neuronal differentiation in ReNcell VM cells with and without functional inhibition of BCL-2 by its competitive ligand HA14-1. Inhibition of BCL-2 caused a diminished BCL-2 expression and higher levels of cleaved BAX, activated Caspase-3 and cleaved PARP, all pro-apoptotic markers, when compared with untreated differentiating cells. In parallel, flow cytometric analysis of HA14-1-treated cells revealed a delayed differentiation into HuC/D+ neuronal cells when compared to untreated differentiating cells. In conclusion, BCL-2 possesses a protective function in fully differentiated ReNcell VM cells. We propose that the pro-survival signaling of BCL-2 is closely connected with its stimulatory effects on neurogenesis of human neural progenitor cells.

  9. Comparison of Web Proxy Caching Algorithm Performance

    Institute of Scientific and Technical Information of China (English)

    温志浩

    2012-01-01

Web proxies are important intermediate network devices in the modern Internet. The quality of a proxy caching algorithm affects not only client-side browsing speed but also the performance of the target servers and the overall performance of the intervening communication network. This paper compares and studies several currently popular Web proxy caching algorithms.

  10. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

This thesis presents a uniquely designed, end-to-end interference-minimizing, high-performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  12. Access-Driven Cache Timing Template Attack on ARIA

    Institute of Scientific and Technical Information of China (English)

    赵新杰; 郭世泽; 王韬; 刘会英

    2011-01-01

In order to evaluate the security of ARIA against Cache timing attacks, an access-driven Cache timing template attack model was proposed, together with two template-matching methods: direct analysis and elimination analysis. Taking ARIA as an example, a template attack on the first four rounds was presented and verified experimentally. The results demonstrate that ARIA is vulnerable to access-driven Cache timing template attacks: with either the direct or the elimination matching method, 200 samples are enough to recover the 128-bit master key within 1 s. The template analysis model can also inform access-driven Cache timing template attacks against other block ciphers that use S-boxes.

  13. MIPAS temperature from the stratosphere to the lower thermosphere: comparison of version vM21 with ACE-FTS, MLS, OSIRIS, SABER, SOFIE and lidar measurements

    Directory of Open Access Journals (Sweden)

    M. García-Comas

    2014-07-01

Full Text Available We present vM21 MIPAS temperatures from the lower stratosphere to the lower thermosphere, which cover all optimized-resolution measurements performed by MIPAS in the Middle Atmosphere, Upper Atmosphere and NoctiLucent Cloud modes during its lifetime, i.e., from January 2005 to March 2012. The main upgrades with respect to the previous version of MIPAS temperatures (vM11) are the update of the spectroscopic database, the use of a different climatology of atomic oxygen and carbon dioxide, and the improvement of important technical aspects of the retrieval setup (temperature gradient along the line of sight and offset regularizations, apodization accuracy). Additionally, an updated version of ESA calibrated L1b spectra (5.02/5.06) is used. The vM21 temperatures correct the main systematic errors of the previous version: on average they provide a 1–2 K warmer stratopause and middle mesosphere, and a 6–10 K colder mesopause (except in high-latitude summers) and lower thermosphere. These changes lead to a remarkable improvement in MIPAS comparisons with ACE-FTS, MLS, OSIRIS, SABER, SOFIE and the two Rayleigh lidars at Mauna Loa and Table Mountain, which, with few specific exceptions, typically exhibit differences smaller than 1 K below 50 km and than 2 K at 50–80 km in spring, autumn and winter at all latitudes, and in summer at low to mid-latitudes. Differences in the high-latitude summers are typically smaller than 1 K below 50 km, smaller than 2 K at 50–65 km and 5 K at 65–80 km. Differences with the other instruments in the mid-mesosphere are generally negative. The MIPAS mesopause is within 4 K of the other instruments' measurements, except in the high-latitude summers, where it is within 5–10 K of the other instruments, being warmer than SABER, MLS and OSIRIS and colder than ACE-FTS and SOFIE. The agreement in the lower thermosphere is typically better than 5 K, except for high latitudes during spring and summer, where MIPAS usually exhibits larger

  14. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  15. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example, with the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need to contact a remote server through the Internet. In this scenario, the data is produced (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). On this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  16. Hoarding without reward: rodent responses to repeated episodes of complete cache loss.

    Science.gov (United States)

    Luo, Yang; Yang, Zheng; Steele, Michael A; Zhang, Zhibin; Stratford, Jeffrey A; Zhang, Hongmao

    2014-07-01

    For food-hoarding strategies to be maintained in a population, the benefits of hoarding must outweigh the costs. If rewards are too low to offset the costs of hoarding, hoarders might be expected to abandon hoarding and/or shift to an alternative storing strategy (e.g., increase food consumption). However the ability to adjust to such circumstances requires that animals anticipate long-term rewards and adjust storing strategies to modify future outcomes. To test this, we subjected three sympatric food-hoarding species (the Korean field mouse, Apodemus peninsulae, both a scatter and larder hoarder; the Chinese white-bellied rat, Niviventer confucianus, only a larder hoarder; and Père David's rock squirrel, Sciurotamias davidianus, predominantly a scatter hoarder) to repeated episodes of complete cache loss over nine sequential trials in semi-natural enclosures. Although these species increased harvest and consumption rates throughout the experiment, none of these three species ceased hoarding under these conditions. The variation in responses observed across species and gender suggest some degree of behavioural plasticity to compensate for such extreme losses, but a general inability to abandon hoarding or shift to an alternative strategy. Future studies should consider how such responses correspond to natural patterns of intensive pilferage in the field. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Replacement Policies in the Web Cache

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

Full Text Available The Web is today's most widely used communication mechanism, owing to its flexibility and the nearly endless supply of tools for browsing it. Around a million pages are added to it every day, which makes it the largest library of textual and multimedia resources ever seen, albeit one distributed across all the servers that hold that information. As a reference source, efficient data retrieval is important. Web caching serves this purpose: some Web objects are stored temporarily on local servers so that they need not be requested from the remote server every time a user asks for them. However, the memory available on local servers for storing that information is limited: one must decide which Web objects to store and which not to. This gives rise to several replacement policies, which this article explores. Using an experiment with real Web requests, we compare the performance of these techniques.
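A comparison of this kind typically pits recency-based against frequency-based eviction. As a minimal illustration (my own sketch, not code from the article), the following replays a request trace through LRU and LFU caches and counts hits:

```python
from collections import OrderedDict, defaultdict

def simulate_lru(trace, capacity):
    """Count hits for a cache using least-recently-used replacement."""
    cache, hits = OrderedDict(), 0
    for obj in trace:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)         # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[obj] = True
    return hits

def simulate_lfu(trace, capacity):
    """Count hits for a cache using least-frequently-used replacement."""
    cache, freq, hits = set(), defaultdict(int), 0
    for obj in trace:
        freq[obj] += 1
        if obj in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda o: freq[o]))  # evict coldest
            cache.add(obj)
    return hits

trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "c"]
print(simulate_lru(trace, 2), simulate_lfu(trace, 2))  # → 2 3
```

On this trace LFU wins because object "a" dominates the request stream; a recency-biased trace would favor LRU, which is why such comparisons are run over real request logs.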

  18. Improving DNS cache to alleviate the impact of DNS DDoS attack

    Directory of Open Access Journals (Sweden)

    Wei-min LI

    2011-02-01

Full Text Available In recent years, adversaries have been launching distributed denial of service (DDoS) attacks against DNS (Domain Name System) servers at various levels. Since the DNS is a critical, fundamental Internet service that provides the mapping between domain names and IP addresses and is a prerequisite for many other services, a DDoS attack that succeeds in making the DNS unavailable could cause huge losses. In this paper, we present an easily implemented and practical scheme that can significantly alleviate the impact of DNS DDoS attacks. Firstly, we propose interactive communication among DNS servers so that each can obtain the status of the others; on this basis, nameservers should not clean up TTL-expired domain-name records in their caches while they detect that the relevant nameservers are unavailable. Secondly, an evaluation based on 511,781,146 DNS queries collected from four different DNS servers on the Internet shows that the DNS can still work well for the duration of a DDoS attack when our approach is applied. Further, a long-term DNS analysis spanning about 173 days supports the validity of our scheme on today's Internet.
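The proposed behaviour, keeping TTL-expired records servable while the authoritative server is unreachable, can be sketched as follows (an illustrative toy model; the class and method names are my own, not the paper's implementation):

```python
import time

class StaleTolerantDNSCache:
    """Toy resolver cache: TTL-expired records are retained and served
    while the authoritative nameserver is known to be unreachable."""

    def __init__(self):
        self.records = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self.records[name] = (address, now + ttl)

    def resolve(self, name, upstream_available, now=None):
        now = time.time() if now is None else now
        entry = self.records.get(name)
        if entry is None:
            return None                # would trigger a fresh upstream query
        address, expiry = entry
        if now <= expiry:
            return address             # fresh answer
        if not upstream_available:
            return address             # stale answer served during the outage
        del self.records[name]         # normal path: expire and re-query
        return None

cache = StaleTolerantDNSCache()
cache.put("example.com", "93.184.216.34", ttl=60, now=1000)
print(cache.resolve("example.com", upstream_available=False, now=2000))  # → 93.184.216.34
```

A later standard, RFC 8767 ("serve stale"), codifies a similar idea for production resolvers.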

  19. Congenital malformations in sheep resulting from in utero inoculation of Cache Valley virus.

    Science.gov (United States)

    Chung, S I; Livingston, C W; Edwards, J F; Gauer, B B; Collisson, E W

    1990-10-01

Serologic evidence indicated that an episode of congenital abnormalities in sheep was caused by Cache Valley virus (CVV), a bunyavirus indigenous to the United States. To determine the teratogenic potential of CVV in sheep, fetuses were infected in utero between 27 and 54 days of gestation with an isolate (CK-102) obtained in 1987 from a sentinel sheep in San Angelo, Texas. The dams of these fetuses were euthanatized between 28 and 75 days after inoculation, and the fetuses were examined for malformations. Twenty-eight of 34 fetuses had congenital abnormalities, including arthrogryposis, hydranencephaly, mummification, reabsorption, and oligohydramnios. Virus was isolated from the allantoic fluid of 11 of 17 fetuses euthanatized at less than 70 days of gestation. The virus-positive fetuses, which were all negative for CVV-neutralizing antibody, had lesions ranging from none to severe arthrogryposis and hydranencephaly. Virus was not recovered from the allantoic fluid of fetuses after 76 days' gestation, when CVV-specific antibody could be detected in 5 of 8 fetuses examined. The 2 fetuses infected on days 50 and 54 of gestation appeared normal and 1 had antibody to CVV.

  20. NACS: non-overlapping AP's caching scheme to reduce handoff in 802.11 wireless LAN

    CERN Document Server

    Tariq, Usman; Hong, Man-Pyo

    2011-01-01

With the proliferation of IEEE 802.11-based wireless networks, voice over IP and similar applications are also used over wireless networks. Recently, wireless LAN systems have been widely deployed for public Internet services. In public wireless LAN systems, reliable user authentication and mobility support are indispensable. When a mobile device moves out of the range of one access point (AP) and attempts to connect to a new AP, it performs a handoff. Previously, PNC and SNC were proposed to propagate the mobile node (MN) context to all neighboring APs on the wireless network with the help of a neighbor graph. In this paper, we propose a non-overlapping APs caching scheme (NACS), which propagates the mobile node context only to those APs that do not overlap with the current AP. To capture the topology of non-overlapping APs in the wireless network, a non-overlapping graph (NOG) is generated at each AP. Simulation results show that NACS reduces the signaling cost of propagating the MN context to the neighbor…

  1. An optimal and practical cache-oblivious algorithm for computing multiresolution rasters

    DEFF Research Database (Denmark)

    Arge, L.; Brodal, G.S.; Truelsen, J.

    2013-01-01

where each cell of Gμ stores the average of the values of μ x μ cells of G. Here we consider the case where G is so large that it does not fit in the main memory of the computer. We present a novel algorithm that solves this problem in O(scan(N)) data block transfers from/to the external memory, and in Θ(N) CPU operations; here scan(N) is the number of block transfers that are needed to read the entire dataset from the external memory. Unlike previous results on this problem, our algorithm achieves this optimal performance without making any assumptions on the size of the main memory of the computer. Moreover, this algorithm is cache-oblivious; its performance does not depend on the data block size and the main memory size. We have implemented the new algorithm and we evaluate its performance on datasets of various sizes; we show that it clearly outperforms previous approaches on this problem…
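The computation itself, averaging μ x μ blocks of G, is simple when G fits in memory; the paper's contribution is performing it I/O-efficiently and cache-obliviously when it does not. A plain in-memory sketch of the operation being computed (illustrative only, not the paper's algorithm):

```python
def downsample(grid, mu):
    """Build the multiresolution raster G_mu: each output cell holds the
    average of a mu x mu block of the input grid (assumed square, with
    side length divisible by mu)."""
    n = len(grid)
    assert n % mu == 0 and all(len(row) == n for row in grid)
    out = []
    for bi in range(0, n, mu):
        row = []
        for bj in range(0, n, mu):
            block_sum = sum(grid[i][j]
                            for i in range(bi, bi + mu)
                            for j in range(bj, bj + mu))
            row.append(block_sum / (mu * mu))
        out.append(row)
    return out

g = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
print(downsample(g, 2))  # → [[3.5, 5.5], [11.5, 13.5]]
```

The in-memory version touches each input cell once, i.e. Θ(N) work; the external-memory challenge is achieving the matching O(scan(N)) block transfers when the grid is streamed from disk.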

  2. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

Full Text Available Field-programmable gate arrays (FPGAs) and other reconfigurable computing (RC) devices have been widely shown to have numerous advantages, including order-of-magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures). In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
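The core idea, serializing a pointer-chasing traversal once so that repeated traversals become sequential streams, can be modeled in software as follows (a hypothetical sketch of the software side only; the actual framework streams the flattened data to the FPGA):

```python
def traverse(node):
    """Collect payloads by following 'next' pointers (irregular access)."""
    out = []
    while node is not None:
        out.append(node["value"])
        node = node["next"]
    return out

class TraversalCache:
    """Serialize a pointer-based traversal on first use, then serve the
    flat, burst-friendly sequence on every repeated traversal."""

    def __init__(self):
        self._flat = None

    def get(self, head):
        if self._flat is None:       # first traversal: chase pointers once
            self._flat = traverse(head)
        return self._flat            # repeated traversals: stream the cache

# A tiny linked list: 1 -> 2 -> 3
n3 = {"value": 3, "next": None}
n2 = {"value": 2, "next": n3}
n1 = {"value": 1, "next": n2}
tc = TraversalCache()
print(tc.get(n1))  # → [1, 2, 3]
```

The hardware win comes from the second access onward: a flat array can be fetched in long sequential bursts, whereas pointer chasing forces one dependent memory access per node.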

  3. Applying VM to Evaluate the City Buildings of China

    Institute of Scientific and Technical Information of China (English)

    李岩; 李素蕾

    2006-01-01

Since the reform and opening up, China's major cities have changed beyond recognition compared with some years ago. Urban construction that chases grandeur, foreign styles, and novelty, beyond what the economy can actually bear, is being staged across China with great fanfare. At present, China wastes resources on a large scale and faces a serious energy crisis, yet many cities are still pursuing large-scale urban construction beyond their economic capacity. This paper analyzes this phenomenon from the viewpoint of VM (Value Management).

  4. The Kv1.3 channel blocker Vm24 enhances muscle glucose transporter 4 mobilization but does not reduce body-weight gain in diet-induced obese male rats.

    Science.gov (United States)

    Jaimes-Hoy, Lorraine; Gurrola, Georgina B; Cisneros, Miguel; Joseph-Bravo, Patricia; Possani, Lourival D; Charli, Jean-Louis

    2017-07-15

Voltage-gated potassium channels 1.3 (Kv1.3) can be targeted to reduce diet-induced obesity and insulin resistance in mice. Since species-specific differences in Kv1.3 expression and pharmacology have been observed, we tested the effect of Vm24, a high-affinity specific blocker of Kv1.3 channels from Vaejovis mexicanus smithi, on body weight (BW), glucose tolerance and insulin resistance in diet-induced obese rats. Young adult male Wistar rats were switched to a high-fat/high-fructose (HFF) diet. Eighteen days later animals were divided in two groups: vehicle and Vm24 group. Subcutaneous injections were applied every other day until sacrifice 2 months later. An additional cohort was maintained on standard chow. The HFF diet promoted obesity. Treatment with Vm24 did not alter various metabolic parameters such as food intake, BW gain, visceral white adipose tissue mass, adipocyte diameter, serum glucose, leptin and thyroid hormone concentrations, brown adipose tissue mass or uncoupling protein-1 expression, and insulin tolerance. Vm24 did reduce basal and glucose-stimulated serum insulin concentrations and serum C-peptide concentration, increased QUICKI, and tended to lower HOMA-IR. Vm24 treatment did not change the activation of insulin receptor substrate-1, but enhanced protein-kinase B activation and membrane glucose-transporter 4 (GLUT4) protein levels in skeletal muscle. In conclusion, in male rats, long-term blockade of Kv1.3 channels with Vm24 does not reduce weight gain and visceral adiposity induced by the HFF diet; instead, it reduces serum insulin concentration and enhances GLUT4 mobilization in skeletal muscle. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Memory-Cache-Based Dynamic Block Migration Algorithm for Multi-Virtual-Machine Environments

    Institute of Scientific and Technical Information of China (English)

    刘典型

    2015-01-01

Virtualization technology provides users with highly available, dynamic, scalable, allocate-on-demand logical resources, while virtual machine migration weakens the coupling between physical and logical resources after the initial virtualized allocation, making the construction of physical resource pools more flexible. However, existing virtual machine migration techniques suffer from heavy resource consumption, high physical disk load, and redundant migration data, which greatly reduce the stability and usability of live migration. This paper proposes a memory cache-based dynamic block migration algorithm that focuses on two points: first, how to use the memory cache more rationally to migrate virtual machine pages to the destination server quickly, saving physical resources while ensuring that migration performance is not noticeably affected; second, how to optimize migration timing in a more targeted way through finer-grained resource management. The algorithm was implemented on the QEMU virtual machine, and experimental results under a variety of application workloads show that it effectively reduces resource consumption and physical disk load, and migrates virtual machines stably and rapidly.
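One way to picture the memory-cache idea, buffering re-sent dirty blocks in RAM so they coalesce before reaching the destination disk, is the following toy model (my own illustration; class and parameter names are hypothetical, not the paper's algorithm):

```python
class WriteCoalescingCache:
    """Buffer migrated blocks in memory and flush to 'disk' in batches.
    Blocks re-sent after being dirtied overwrite each other in RAM,
    so they cost one disk write instead of one write per arrival."""

    def __init__(self, flush_threshold):
        self.buf = {}                  # block_id -> latest data, coalesced
        self.flush_threshold = flush_threshold
        self.disk = {}
        self.disk_writes = 0

    def receive(self, block_id, data):
        self.buf[block_id] = data      # duplicate arrivals coalesce here
        if len(self.buf) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for block_id, data in self.buf.items():
            self.disk[block_id] = data
            self.disk_writes += 1
        self.buf.clear()

cache = WriteCoalescingCache(flush_threshold=4)
# Six arrivals, but b1 is re-sent twice and b2 once:
for block_id, data in [("b1", "v0"), ("b2", "v0"), ("b1", "v1"),
                       ("b3", "v0"), ("b1", "v2"), ("b2", "v1")]:
    cache.receive(block_id, data)
cache.flush()
print(cache.disk_writes)  # → 3
```

Six block arrivals collapse into three disk writes, which is the kind of disk-load reduction the abstract claims from using the memory cache during migration.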

  6. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    ZhangBing Zhou

    2015-06-01

Full Text Available Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively to support domain applications in which multi-attribute sensory data are queried from the network continuously and periodically. Often, certain sensory data do not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used to answer concurrent queries and may be reused for forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that the data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data less likely to be requested by forthcoming queries are cached at the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing the data cached at these two tiers. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability.
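The popularity computation described above, ranking attributes by how often recent queries requested them and caching the top entries at the sink, can be sketched as follows (illustrative; not the paper's exact scoring function):

```python
from collections import Counter

def select_sink_cache(recent_queries, k):
    """Rank sensed attributes by how many recent queries requested them,
    and keep the k most popular at the sink; the rest would be left to
    the second tier (grid-cell head nodes)."""
    popularity = Counter(attr for query in recent_queries for attr in query)
    return [attr for attr, _ in popularity.most_common(k)]

# Each tuple is one query's set of requested attributes in a recent time slot.
queries = [("temperature", "humidity"),
           ("temperature",),
           ("temperature", "light"),
           ("humidity",)]
print(select_sink_cache(queries, 2))  # → ['temperature', 'humidity']
```

A forthcoming query for a popular attribute is then answered from the sink's cache, and only less popular attributes require reaching into the grid-cell tier or the sensors themselves.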

  7. Research on Non-Blocking Cache Technology in Processors

    Institute of Scientific and Technical Information of China (English)

    孟锐

    2015-01-01

Cache research has become a key technology for improving processor performance in the design of modern high-speed processors. This paper analyzes and studies non-blocking cache techniques for pipelined architectures, which raise the cache hit ratio, reduce the miss penalty, and improve processor performance, and it describes the design of the non-blocking cache in the pipeline of the "Longteng" R2 processor.

  8. Evaluation of low-temperature geothermal potential in Cache Valley, Utah. Report of investigation No. 174

    Energy Technology Data Exchange (ETDEWEB)

    de Vries, J.L.

    1982-11-01

    Field work consisted of locating 90 wells and springs throughout the study area, collecting water samples for later laboratory analyses, and field measurement of pH, temperature, bicarbonate alkalinity, and electrical conductivity. Na⁺, K⁺, Ca²⁺, Mg²⁺, SiO₂, Fe, SO₄²⁻, Cl⁻, F⁻, and total dissolved solids were determined in the laboratory. Temperature profiles were measured in 12 additional, unused wells. Thermal gradients calculated from the profiles were approximately the same as the average for the Basin and Range province, about 35 °C/km. One well produced a gradient of 297 °C/km, most probably as a result of a near-surface occurrence of warm water. Possible warm-water reservoir temperatures were calculated using both the silica and the Na-K-Ca geothermometers, with the results averaging about 50 to 100 °C. If mixing calculations were applied, taking into account the temperatures and silica contents of both warm springs or wells and the cold groundwater, reservoir temperatures up to about 200 °C were indicated. Considering measured surface water temperatures, calculated reservoir temperatures, thermal gradients, and the local geology, most of the Cache Valley, Utah area is unsuited for geothermal development. However, the areas of North Logan, Benson, and Trenton were found to have anomalously warm groundwater in comparison to the background temperature of 13.0 °C for the study area. The warm water has potential for isolated energy development but is not warm enough for major commercial development.
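As an illustration of the silica geothermometry used above, the quartz geothermometer of Fournier (1977, no steam loss) is one common form; the report does not state which variant it applied, so treat the constants below as an assumption:

```python
import math

def silica_geothermometer(sio2_mg_per_kg):
    """Quartz geothermometer, no steam loss (after Fournier, 1977):
        T(degC) = 1309 / (5.19 - log10(C)) - 273.15
    where C is dissolved SiO2 in mg/kg. This is one common form of the
    'silica geothermometer' cited in the report; the report does not
    say which variant it used."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15
```

For example, waters carrying roughly 25 to 50 mg/kg dissolved silica give estimated reservoir temperatures of about 70 to 100 °C, consistent with the 50 to 100 °C range reported above.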

  9. Suspense, culpa y cintas de vídeo. Caché/Escondido de Michael Haneke

    Directory of Open Access Journals (Sweden)

    Miguel Martínez-Cabeza

    2011-12-01

    Full Text Available Within Michael Haneke's filmography, Caché/Hidden (2005) stands as the most accomplished synthesis of the Austrian filmmaker's formal and ideological concerns. This article analyzes the film as a cinematic manifesto and as an exploitation of genre conventions to construct a model of the reflective spectator. Examining how the director deploys and then abandons the techniques of suspense helps explain the film's near-unanimous critical success and the far less uniform response of audiences. The trigger of the plot, the videotapes the Laurents receive, is a direct allusion to David Lynch's Lost Highway (1997); yet the mystery of who is behind the video surveillance loses interest in comparison with the guilt it awakens in the protagonist. The childhood episode of jealousy and revenge against an Algerian boy, and the adult Georges's attitude toward it, form an allegory of France's relationship with its colonial past that Haneke's narrative likewise leaves unresolved. It is precisely the formal openness with which the film (de)structures such contemporary questions as the boundary between individual and collective responsibility that shapes a spectator as distanced from the diegesis as he is aware of his own role as observer.

  10. Least cache value replacement algorithm

    Institute of Scientific and Technical Information of China (English)

    刘磊; 熊小鹏

    2013-01-01

    To improve cache performance for search applications, this paper proposes a new replacement algorithm, Least Cache Value (LCV). The algorithm takes into account both object access frequency and object size: the set of cached objects that contributes least to byte hit ratio (BHR) is replaced first. The selection of the optimal replacement set is transformed into a classical 0-1 knapsack problem, for which a fast approximate solution and the supporting data structures are given. Experiments show that LCV outperforms LRU (Least Recently Used), FIFO (First-In First-Out), and GD-Size (Greedy Dual-Size) in increasing BHR and reducing average latency time (ALT).
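The knapsack formulation can be approximated greedily: if an object's contribution to byte hit ratio is modeled as frequency × size, its value per byte freed is just its access frequency, so evicting the least-frequently accessed objects first loses the least value. A sketch under that assumption (the paper's exact solution and data structures differ):

```python
def evict_lcv(objects, bytes_needed):
    """Greedy approximation of the LCV eviction set.

    `objects`: dict name -> (access_frequency, size_bytes), a made-up layout.
    Modeling an object's byte-hit contribution as freq * size, its value
    per byte reclaimed is simply freq, so we evict lowest-frequency
    objects until enough space is freed. The paper instead solves the
    0-1 knapsack directly; this is the standard greedy relaxation."""
    victims, freed = [], 0
    for name, (freq, size) in sorted(objects.items(), key=lambda kv: kv[1][0]):
        if freed >= bytes_needed:
            break
        victims.append(name)
        freed += size
    return victims, freed
```

Given `{"a": (10, 100), "b": (1, 300), "c": (3, 200)}` and a need for 400 bytes, the sketch evicts `b` then `c`, sparing the hot object `a`.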

  11. Use of diuretics is associated with reduced risk of Alzheimer’s disease: the Cache County Study

    Science.gov (United States)

    Chuang, Yi-Fang; Breitner, John C.S.; Chiu, Yen-Ling; Khachaturian, Ara; Hayden, Kathleen; Corcoran, Chris; Tschanz, JoAnn; Norton, Maria; Munger, Ron; Welsh-Bohmer, Kathleen; Zandi, Peter P.

    2015-01-01

    Although the use of antihypertensive medications has been associated with reduced risk of Alzheimer’s disease (AD), it remains unclear which class provides the most benefit. The Cache County Study of Memory Health and Aging is a prospective longitudinal cohort study of dementing illnesses among the elderly population of Cache County, Utah. Using waves I to IV data of the Cache County Study, 3417 participants had a mean of 7.1 years of follow-up. Time-varying use of antihypertensive medications including different class of diuretics, angiotensin converting enzyme inhibitors, β-blockers, and calcium channel blockers was used to predict the incidence of AD using Cox proportional hazards analyses. During follow-up, 325 AD cases were ascertained with a total of 23,590 person-years. Use of any anti-hypertensive medication was associated with lower incidence of AD (adjusted hazard ratio [aHR], 0.77; 95% confidence interval [CI], 0.61–0.97). Among different classes of antihypertensive medications, thiazide (aHR, 0.7; 95% CI, 0.53–0.93), and potassium-sparing diuretics (aHR, 0.69; 95% CI, 0.48–0.99) were associated with the greatest reduction of AD risk. Thiazide and potassium-sparing diuretics were associated with decreased risk of AD. The inverse association of potassium-sparing diuretics confirms an earlier finding in this cohort, now with longer follow-up, and merits further investigation. PMID:24910391

  12. Design of CPU with Cache and Precise Interruption Response

    Institute of Scientific and Technical Information of China (English)

    刘秋菊; 李飞; 刘书伦

    2012-01-01

    This paper proposes a design for a CPU with caches and precise interrupt response. Fifteen instructions from the MIPS instruction set were selected as the CPU's basic instructions. Using a classic five-stage pipeline, the design and implementation of the instruction cache, data cache, and precise interrupt response are presented. Test results show that the scheme meets the design requirements.

  13. Implementation of the Technique for Map Cache Based on ArcGIS Server

    Institute of Scientific and Technical Information of China (English)

    郭利利

    2011-01-01

    Building on a study of ArcGIS Server technology, this paper introduces map caching and discusses in detail the methods and steps for implementing it, illustrated with a worked example. The example confirms that an ArcGIS Server map cache effectively reduces the load on the WebGIS server and improves system responsiveness.
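For context, an ArcGIS Server map cache stores pre-rendered tiles addressed by level, row, and column; in the conventional "exploded" layout each tile is a file under `L<level>/R<row>/C<col>` directories (level as zero-padded decimal, row and column as eight-digit hex). The helper below sketches that lookup; treat the exact layout as an assumption rather than a statement about any particular ArcGIS version:

```python
import os

def tile_cache_path(cache_root, level, row, col):
    """Locate a pre-rendered tile in an 'exploded' map cache layout
    (one image file per level/row/column), as used by ArcGIS Server
    tile caches. Serving tiles straight from this directory tree is
    what offloads rendering work from the WebGIS server. The naming
    scheme here is the conventional one and is an assumption."""
    return os.path.join(cache_root,
                        "L%02d" % level,    # zoom level, zero-padded decimal
                        "R%08x" % row,      # tile row, 8-digit hex
                        "C%08x" % col) + ".png"  # tile column, 8-digit hex
```

A web front end can then map a `(level, row, col)` tile request to a static file and bypass the map service entirely on a cache hit.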

  14. Optimization of Inverted File Index Caching Mechanism

    Institute of Scientific and Technical Information of China (English)

    杨晓波

    2012-01-01

    To improve the overall performance of a search engine's retrieval service, this paper proposes an optimization of the caching mechanism based on inverted file indexes. First, the architecture and data loading of the inverted file cache are analyzed; then the impact of the load data on the inverted file cache and its replacement algorithm is discussed; finally, the cache optimization is studied through simulation experiments. The results show that the proposed optimization significantly reduces the number of disk I/O accesses and improves the utilization of disk bandwidth.
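As a baseline for the kind of inverted-file cache being optimized, the sketch below caches posting lists per query term under a plain LRU policy and counts the disk reads it avoids; the index contents and fetch interface are hypothetical:

```python
from collections import OrderedDict

class PostingCache:
    """LRU cache for inverted-file posting lists: serving a hot term's
    postings from memory avoids one disk I/O per query term. This is a
    baseline for the replacement policies discussed above; everything
    here is illustrative."""

    def __init__(self, capacity, fetch_from_disk):
        self.capacity = capacity
        self.fetch = fetch_from_disk      # term -> posting list (simulated I/O)
        self.cache = OrderedDict()
        self.disk_reads = 0

    def postings(self, term):
        if term in self.cache:
            self.cache.move_to_end(term)  # hit: mark most recently used
            return self.cache[term]
        self.disk_reads += 1              # miss: pay one disk access
        plist = self.fetch(term)
        self.cache[term] = plist
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return plist
```

Replaying a query log against this baseline and against a candidate policy gives exactly the disk-I/O comparison the abstract's simulation experiments rely on.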

  15. miR-181b Enhances Sensitivity to Teniposide in Glioma Cell Line U87

    Institute of Scientific and Technical Information of China (English)

    孙衍昶; 郭琤琤; 赛克; 王翦; 王洁; 陈芙蓉; 张宗平; 陈忠平

    2013-01-01

    Background and Objective: MicroRNAs (miRNAs) participate in many processes of tumor development and progression and modulate sensitivity to various antitumor drugs. This study investigated the effect of miR-181b on chemosensitivity to teniposide (VM-26) in malignant glioma. Methods: miR-181b expression in high-grade gliomas was measured by quantitative real-time PCR, and the sensitivity of cells from high-grade glioma patients to VM-26 was assessed with a CCK-8 cytotoxicity assay. U87 cells stably overexpressing miR-181b (U87/181b) and control cells (U87/nc) were established by lentiviral infection; transfection efficiency was observed under a fluorescence microscope and miR-181b expression was confirmed by quantitative PCR. The sensitivity of U87/181b and U87/nc cells to VM-26 was then assessed with the CCK-8 assay, and apoptosis after 72 hours of VM-26 treatment was measured by flow cytometry. Results: In high-grade gliomas, miR-181b expression was correlated with VM-26 sensitivity (r = -0.691, P < 0.01); that is, tumors with high miR-181b expression were more sensitive to VM-26. Quantitative PCR showed that miR-181b expression was significantly higher in U87/181b cells (0.699 ± 0.023) than in U87/nc cells (0.019 ± 0.001) (P < 0.05). CCK-8 assays showed that U87/181b cells [IC50: (1.25 ± 0.12) μg/mL] were significantly more sensitive to VM-26 than U87/nc cells [IC50: (6.24 ± 0.88) μg/mL] (P < 0.05). After VM-26 treatment, the apoptosis rate of U87/181b cells (69.41 ± 0.77) was significantly higher than that of U87/nc cells (37.93 ± 2.90) (P < 0.05). Conclusions: High-grade gliomas with high miR-181b expression are more sensitive to VM-26, and increasing miR-181b expression in U87 glioma cells enhances their sensitivity to VM-26.

  16. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    Science.gov (United States)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information using tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, (2) visualization of both raw and reconstructed data, either as individual frames, or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, sample volume measurement (useful for porous samples like pumice), and computation of volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of tiffs into '.volume' files is available within 'vol_tools'. 
Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of tiffs to disk

  17. A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.

    Science.gov (United States)

    Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos

    2016-01-01

    Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on splitting and dynamic management of Digital Imaging and Communications in Medicine (DICOM) objects. The proposed solution was successfully deployed in a regional PACS archive. The results show that it outperforms conventional approaches, as it reduces remote access latency and also the required cache storage space.

  18. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  19. Glprof: A Gprof inspired, Callgraph-oriented Per-Object Disseminating Memory Access Multi-Cache Profiler

    Energy Technology Data Exchange (ETDEWEB)

    Janjusic, Tommy [ORNL; Kartsaklis, Christos [ORNL

    2015-01-01

    Application analysis is facilitated through a number of program profiling tools. The tools vary in their complexity, ease of deployment, design, and profiling detail. Understanding, analyzing, and optimizing are of particular importance for scientific applications, where minor changes in code paths and data-structure layout can have profound effects. Understanding how intricate data structures are accessed and how a given memory system responds is a complex task. In this paper we describe a trace profiling tool, Glprof, specifically aimed at lessening the burden on the programmer to pinpoint heavily involved data structures during an application's run time and to understand data-structure run-time usage. Moreover, we showcase the tool's modularity using additional cache simulation components. We elaborate on the tool's design and features. Finally, we demonstrate the application of our tool in the context of SPEC benchmarks using the Glprof profiler and two concurrently running cache simulators, PPC440 and AMD Interlagos.
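A minimal, trace-driven sketch in the spirit of Glprof: replay (object, address) pairs through a tiny direct-mapped cache model and attribute hits and misses to the data structure that made each access. All parameters and the trace format here are made up; Glprof itself plugs in full cache simulators such as the PPC440 and AMD Interlagos models:

```python
from collections import defaultdict

def profile_accesses(trace, line_size=64, num_lines=4):
    """Replay a memory trace of (object_name, address) pairs through a
    toy direct-mapped cache and return per-object [hits, misses].
    Attributing cache behavior to named data structures, rather than
    to raw addresses, is the core idea behind Glprof-style profiling.
    Cache geometry and trace format are illustrative assumptions."""
    lines = [None] * num_lines                 # tag per direct-mapped line
    stats = defaultdict(lambda: [0, 0])        # object -> [hits, misses]
    for obj, addr in trace:
        block = addr // line_size              # which memory block
        idx = block % num_lines                # which cache line it maps to
        if lines[idx] == block:
            stats[obj][0] += 1                 # hit
        else:
            stats[obj][1] += 1                 # miss: fill the line
            lines[idx] = block
    return dict(stats)
```

Two objects whose blocks map to the same line will show up as mutual miss generators in the per-object stats, which is exactly the kind of data-structure conflict such a profiler is meant to expose.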

  20. Transactional WaveCache: Towards Speculative and Out-of-Order DataFlow Execution of Memory Operations

    CERN Document Server

    Marzulo, Leandro A J; Costa, Vítor Santos

    2007-01-01

    WaveScalar is the first dataflow architecture that can efficiently provide the sequential memory semantics required by imperative languages. This work presents an alternative memory-ordering mechanism for this architecture, the Transactional WaveCache. Our mechanism maintains the execution order of memory operations within blocks of code, called waves, but adds the ability to speculatively execute, out of order, operations from different waves. This ordering mechanism is inspired by progress in supporting transactional memories. Waves are treated as atomic regions and executed as nested transactions: once a wave has finished executing all its memory operations, it can be committed as soon as the previous waves are committed. If a hazard is detected in a speculative wave, all the following (child) waves are aborted and re-executed. We evaluate the WaveCache on a set of artificial benchmarks. If the benchmark does not access memory often, we could achieve speedups of around 90%. Speedups of 33.1% and...