WorldWideScience

Sample records for vm image caching

  1. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this mechanism, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also produced. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance was found to be worse than that of both the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server's disk speed.
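
    To illustrate the kind of interface such a cache plugin implements, here is a minimal Python sketch of a content-addressed cache manager backed by a POSIX directory. The method names (store/fetch/evict) are illustrative assumptions, not the actual CernVM-FS plugin API.

```python
import hashlib
import os

class PosixCachePlugin:
    """Sketch of a content-addressed cache manager: objects are stored
    and retrieved by their content hash, as in CernVM-FS caches."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, object_hash):
        return os.path.join(self.cache_dir, object_hash)

    def store(self, data: bytes) -> str:
        object_hash = hashlib.sha1(data).hexdigest()
        with open(self._path(object_hash), "wb") as f:
            f.write(data)
        return object_hash

    def fetch(self, object_hash: str) -> bytes:
        with open(self._path(object_hash), "rb") as f:
            return f.read()

    def evict(self, object_hash: str) -> None:
        os.remove(self._path(object_hash))
```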

  2. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
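
    The integrity model described above can be sketched in a few lines: because every object is addressed by a secure hash listed in a digitally signed catalog, a client can verify a download regardless of which untrusted HTTP proxy cache delivered it. The hash algorithm and catalog layout below are simplifying assumptions, not the real CVMFS formats.

```python
import hashlib

def verify_object(data: bytes, expected_hash: str) -> bool:
    # An object is trusted iff its secure hash matches the catalog entry.
    return hashlib.sha256(data).hexdigest() == expected_hash

# The root of trust is a signed catalog; once its signature is checked
# (omitted here), every object hash it lists can be trusted.
catalog = {"lib/libfoo.so":
           "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"}
data = b"foo"
assert verify_object(data, catalog["lib/libfoo.so"])
```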

  3. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  4. Porting of µCernVM to AArch64

    CERN Document Server

    Scheffler, Felix

    2016-01-01

    µCernVM is a virtual appliance that contains a stripped-down Linux OS connecting to a CernVM-Filesystem (CVMFS) repository that resides on a dedicated web server. In contrast to “usual” VMs, anything needed from this repository is downloaded only on demand, aggressively cached, and eventually released again. Currently, µCernVM is distributed only for x86-64. Recently, ARM (the market leader in mobile computing) has started to enter the server market, which is still dominated by x86-64 infrastructure. However, in terms of performance per watt, AArch64 (the latest ARM 64-bit architecture) is a promising alternative. Facing millions of jobs to compute every day, it is thus desirable to have an HEP virtualisation solution for AArch64. In this project, µCernVM was successfully ported to AArch64. Native and virtualised runtime performance was evaluated using ROOT6 and CMS benchmarks. It was found that VM performance is inferior to host performance across all tests. Respective numbers greatly vary between...

  5. Micro-CernVM: slashing the cost of building and deploying virtual machines

    International Nuclear Information System (INIS)

    Blomer, J; Berzano, D; Buncic, P; Charalampidis, I; Ganis, G; Lestaris, G; Meusel, R; Nicolaou, V

    2014-01-01

    The traditional virtual machine (VM) building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System (CernVM-FS) has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot.

  6. Status and Roadmap of CernVM

    Science.gov (United States)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments, which include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  7. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...

  8. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D. [Fermilab; Bockelman, B. [Nebraska U.; Blomer, J. [CERN; Herner, K. [Fermilab; Levshina, T. [Fermilab; Slyz, M. [Fermilab

    2015-12-23

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached...
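
    The alien-cache idea can be sketched as a content-addressed directory on a shared high-bandwidth filesystem that all worker nodes consult before going to the network. The path layout and function names below are illustrative assumptions, not the CVMFS on-disk format.

```python
import hashlib
import os

# All worker nodes at a site share one cache directory on a cluster
# filesystem (e.g. a Lustre or HDFS-FUSE mount); the path is hypothetical.
SHARED_CACHE = "/mnt/cluster-fs/cvmfs-alien-cache"

def cached_fetch(object_hash: str, download) -> bytes:
    path = os.path.join(SHARED_CACHE, object_hash[:2], object_hash)
    if os.path.exists(path):            # another worker already fetched it
        with open(path, "rb") as f:
            return f.read()
    data = download(object_hash)        # fall back to the Stratum 1 server
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:         # publish for all other workers
        f.write(data)
    return data
```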

  9. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Science.gov (United States)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the...

  10. CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

    International Nuclear Information System (INIS)

    Lestaris, G; Charalampidis, I; Berzano, D; Blomer, J; Buncic, P; Ganis, G; Meusel, R

    2014-01-01

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user data field of cloud APIs, when running CernVM on the cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user is able to define, store and share CernVM contexts using CernVM Online and then apply them either in a cloud by using CernVM Cloud Gateway or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to use multiple and different clouds (by location or type, private or public). So far, Cloud Gateway has been integrated with the OpenNebula, CloudStack and EC2 tools interfaces. A user with access to a number of clouds can run CernVM cloud agents that will communicate with these clouds using their interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other.

  11. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    Science.gov (United States)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also stem from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, namely cache lines residing in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required; a fast and reliable shared memory management system is therefore needed to execute algorithms for processing vast amounts of specimen images. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near-distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
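
    A minimal sketch of an insertion/promotion policy in the spirit of MI2PP: new lines enter a cache set at a static middle position instead of the MRU end, and a hit promotes a line a fixed number of positions. The promotion distance and the simple LRU-end eviction are assumptions based on the abstract, not the published algorithm.

```python
class MidInsertPromoteSet:
    """One cache set with middle insertion and small-step promotion."""

    def __init__(self, ways=8, promote_by=2):
        self.lines = []           # index 0 = MRU end, last index = LRU end
        self.ways = ways
        self.promote_by = promote_by

    def access(self, tag):
        if tag in self.lines:                    # hit: promote a little
            i = self.lines.index(tag)
            j = max(0, i - self.promote_by)
            self.lines.insert(j, self.lines.pop(i))
            return True
        if len(self.lines) >= self.ways:         # miss: evict from LRU end
            self.lines.pop()
        self.lines.insert(len(self.lines) // 2, tag)   # middle insertion
        return False
```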

  12. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows flexible sharing of cached files among unauthenticated users, i.e. unlike most distributed file systems CryptoCache does not require a global authentication framework. Files are encrypted when they are transferred over the network and while stored on untrusted servers. The system uses public key...
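
    The core encrypt-before-caching idea can be sketched in a few lines. CryptoCache itself uses public-key techniques for sharing; the symmetric Fernet scheme below is a simplifying stand-in, and the dict standing in for a remote host is hypothetical.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays on the roaming user's device
cipher = Fernet(key)

def cache_on_untrusted_host(store: dict, name: str, content: bytes):
    store[name] = cipher.encrypt(content)   # host only ever sees ciphertext

def read_from_cache(store: dict, name: str) -> bytes:
    return cipher.decrypt(store[name])

host = {}                                   # stand-in for an untrusted server
cache_on_untrusted_host(host, "notes.txt", b"meeting at 10")
assert read_from_cache(host, "notes.txt") == b"meeting at 10"
```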

  13. Exploiting VM/XA

    International Nuclear Information System (INIS)

    Boeheim, C.

    1990-03-01

    The Stanford Linear Accelerator Center has recently completed a conversion to IBM's VM/XA SP Release 2 operating system. The primary physics application had been constrained by the previous 16 megabyte memory limit. Work is underway to enable this application to exploit the new features of VM/XA. This paper presents a brief tutorial on how to convert an application to exploit VM/XA and discusses some of the SLAC experiences in doing so. 13 figs

  14. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

    Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application, by avoiding multiple read operations for the same data. This paper addresses the caching problems from a pattern perspective. Both caching and caching strategies, like primed and on demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a method for avoiding the expensive reacquisition of resources. The Primed Cache pattern is applied in situations in which the set of required resources, or at least a part of it, can be predicted, while the Demand Cache pattern is applied whenever the required resource set cannot be predicted or is infeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.
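
    A minimal sketch of the two strategies named above, with hypothetical class names: a demand cache acquires a resource on first request, while a primed cache pre-loads a predicted set up front.

```python
class DemandCache:
    """Demand Cache pattern: acquire and cache a resource the first
    time it is requested."""
    def __init__(self, acquire):
        self.acquire = acquire
        self.store = {}

    def get(self, key):
        if key not in self.store:        # acquire only on first demand
            self.store[key] = self.acquire(key)
        return self.store[key]

class PrimedCache(DemandCache):
    """Primed Cache pattern: when the needed resource set is predictable,
    load it before the first request arrives."""
    def __init__(self, acquire, predicted_keys):
        super().__init__(acquire)
        for key in predicted_keys:       # pre-load the predicted set
            self.store[key] = acquire(key)
```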

  15. CacheCard : Caching static and dynamic content on the NIC

    NARCIS (Netherlands)

    Bos, Herbert; Huang, Kaiming

    2009-01-01

    CacheCard is a NIC-based cache for static and dynamic web content, designed so that it can be implemented on simple devices like NICs. It requires neither understanding of the way dynamic data is generated, nor execution of scripts on the cache. By monitoring file system activity and potential...

  16. A method cache for Patmos

    DEFF Research Database (Denmark)

    Degasperi, Philipp; Hepp, Stefan; Puffitsch, Wolfgang

    2014-01-01

    For real-time systems we need time-predictable processors. This paper presents a method cache as a time-predictable solution for instruction caching. The method cache caches whole methods (or functions) and simplifies worst-case execution time analysis. We have integrated the method cache in the time-predictable processor Patmos. We evaluate the method cache with a large set of embedded benchmarks. Most benchmarks show a good hit rate for a method cache size in the range between 4 and 16 KB.
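
    A sketch of the idea: whole methods are loaded on a call or return, so instruction fetch inside a method can never miss, which is what simplifies WCET analysis. The FIFO replacement and byte-granular sizes below are assumptions for illustration (the method size is assumed not to exceed the capacity).

```python
class MethodCache:
    """Model of a method cache: misses happen only at calls/returns."""

    def __init__(self, capacity=4096):
        self.capacity = capacity
        self.resident = {}               # method name -> size in bytes
        self.order = []                  # FIFO replacement order

    def call(self, method, size):
        if method in self.resident:
            return "hit"                 # fetches inside the method are free
        while self.resident and sum(self.resident.values()) + size > self.capacity:
            self.resident.pop(self.order.pop(0))   # evict a whole method
        self.resident[method] = size
        self.order.append(method)
        return "miss"
```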

  17. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  18. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme.
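
    The probability-based admission test is simple enough to sketch directly; the admission probability, capacity, and eviction rule below are illustrative assumptions, not the paper's parameters.

```python
import random

def maybe_admit(cache: set, block: int, p: float = 0.1, capacity: int = 1024):
    """Admit a block to the cache only if it passes a random test:
    hot blocks, being written often, eventually get in with high
    probability, while cold blocks mostly stay out."""
    if block in cache:
        return True                      # already cached
    if random.random() < p:              # admission decided by coin flip
        if len(cache) >= capacity:
            cache.pop()                  # evict an arbitrary block (sketch)
        cache.add(block)
        return True
    return False
```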

  19. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is a main problem in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model under distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained since the first deployment moments.

  20. Security model for VM in cloud

    Science.gov (United States)

    Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.

    2013-03-01

    Cloud computing is a new approach that emerged to meet the ever-increasing demand for computing resources and to reduce operational costs and capital expenditure for IT services. As this new way of computation allows data and applications to be stored away from the organization's own corporate servers, it brings more security issues, such as virtualization security, distributed computing, application security, identity management, access control and authentication. Even though virtualization forms the basis for cloud computing, it poses many threats to securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose a new Security Model for Virtual Machine in Cloud (SMVC), in which every process is authenticated by a Trusted-Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.

  1. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase the load on the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and the efficiency of the two categories are still a controversial issue: while in some studies strong coherency is far too expensive to be used in the Web context, in other studies it can be maintained at a low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show differences in the outcome of the simulation of a Web cache depending on the workload being used and on the probability distribution used to approximate updates of the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
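
    A minimal sketch of weak coherency with revalidation: within its time-to-live a cached copy is served without contacting the origin; after expiry the copy is revalidated, as in an HTTP conditional GET. All names are illustrative, and `origin_fetch` is assumed to return `(None, etag, ttl)` only when given a still-valid validator.

```python
import time

class CachedDoc:
    def __init__(self, body, etag, ttl):
        self.body, self.etag = body, etag
        self.expires = time.time() + ttl   # weak coherency: trust until TTL

def get(cache, url, origin_fetch):
    doc = cache.get(url)
    if doc and time.time() < doc.expires:
        return doc.body                    # possibly stale, but no traffic
    body, etag, ttl = origin_fetch(url, doc.etag if doc else None)
    if body is None:                       # origin says: not modified
        doc.expires = time.time() + ttl    # just extend the lease
        return doc.body
    cache[url] = CachedDoc(body, etag, ttl)
    return body
```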

  2. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    In systems with hard deadlines, the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more difficult. Stack data, whose addresses are known, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while keeping the time-predictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In the design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses.

  3. Investigating the role of the ventromedial prefrontal cortex (vmPFC) in the assessment of brands

    Directory of Open Access Journals (Sweden)

    Jose Paulo Santos

    2011-06-01

    The ventromedial prefrontal cortex (vmPFC) is believed to be important in everyday preference judgments, processing emotions during decision-making. However, there is still controversy in the literature regarding the participation of the vmPFC. To further elucidate the contribution of the vmPFC to brand preference, we designed a functional magnetic resonance imaging (fMRI) study in which 18 subjects assessed positive, indifferent and fictitious brands. Both the period during and after the decision process were analyzed, hoping to unravel temporally the role of the vmPFC, using modeled and model-free fMRI analysis. Considering together the period before and after decision-making, there was activation of the vmPFC when comparing positive with indifferent or fictitious brands. However, when the decision-making period was separated from the moment after the response, and especially for positive brands, the vmPFC was more active after the choice than during the decision process itself, challenging some of the existing literature. The results of the present study support the notion that the vmPFC may be unimportant in the decision stage of brand preference, questioning theories that postulate that the vmPFC is at the origin of such a choice. Further studies are needed to investigate in detail why the vmPFC seems to be involved in brand preference only after the decision process.

  4. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.

  5. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    For accurate timing analysis it is important to classify memory accesses as either cache hits or cache misses. The addresses of instruction fetches are known statically, and static cache hit/miss classification is possible for the instruction cache. Access to data that is cached in the data cache is harder to predict statically. Several...

  6. Developing the Value Management Maturity Model (VM3)

    Directory of Open Access Journals (Sweden)

    Saipol Bari Abd Karim

    2013-06-01

    Value management (VM) practices have expanded and become a well-received technique globally. Organisations are now progressing towards a better implementation of VM and should be assessing their strengths and weaknesses in order to move forward competitively. There is a need to benchmark existing VM practices to reflect their maturity levels; such a benchmark is currently not available. This paper outlines the concept of the Value Management Maturity Model (VM3) as a structured plan of maturity and performance growth for businesses. It proposes five levels of maturity, and each level has its own criteria or attributes to be achieved before progressing to a higher level. The framework for VM3 has been developed based on a review of the literature related to VM and maturity models (MM). Data was collected through questionnaire surveys of organisations that have implemented the VM methodology. Additionally, semi-structured interviews were conducted with selected individuals involved in implementing VM. The questions were developed to achieve the research objectives: investigating the current implementation of VM, and exploring the organisations' MM knowledge and practices. This research was, however, limited to VM implementation in the Malaysian government's projects and programmes. VM3 introduces a new paradigm in VM as it provides a rating method for capabilities or performance. The VM3 framework is still being refined in order to provide a comprehensive and well-accepted method of rating organisations' maturity.

  7. Research on Cache Placement in ICN

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-08-01

    Ubiquitous in-network caching is one of the key features of Information Centric Networking; together with the receiver-driven content retrieval paradigm, Information Centric Networking provides better support for content distribution, multicast, mobility, etc. The cache placement strategy is crucial to improving the utilization of cache space and reducing the occupation of link bandwidth. Most of the literature about caching policies considers the overall cost and bandwidth, but ignores the limits of node cache capacity. This paper proposes a G-FMPH algorithm which takes into account constraints on both the link bandwidth and the cache capacity of nodes. Our algorithm aims at minimizing the overall cost of content caching. The simulation results prove that our proposed algorithm has better performance.

  8. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications

  9. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permuting...

  10. Optimizing Maintenance of Constraint-Based Database Caches

    Science.gov (United States)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., to achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.

  11. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    This paper presents the architecture of a web site along with different techniques used to optimize the performance of loading web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS and Microsoft SQL Server. Caching data is one technique used by browsers, by the web servers themselves, or by proxy servers. Data is cached without the knowledge of users, who must still be provided with the most recent information from the server. This means that the caching mechanism has to be aware of any modification of data on the server. Different pieces of information presented in an e-commerce site relate to products, such as images, product codes, descriptions, properties and stock.

  12. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
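
    The flavor of such algorithms is easy to convey with a classic example: a recursive matrix transpose that mentions no cache parameters at all, yet whose sub-blocks eventually fit in every level of the memory hierarchy. The base-case threshold below is an arbitrary illustrative choice.

```python
def transpose(a, b, r0, r1, c0, c1):
    """Cache-obliviously transpose a[r0:r1][c0:c1] into b by always
    splitting the larger dimension; no cache size appears in the code."""
    if (r1 - r0) * (c1 - c0) <= 16:          # small base case
        for i in range(r0, r1):
            for j in range(c0, c1):
                b[j][i] = a[i][j]
    elif r1 - r0 >= c1 - c0:
        m = (r0 + r1) // 2
        transpose(a, b, r0, m, c0, c1)
        transpose(a, b, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        transpose(a, b, r0, r1, c0, m)
        transpose(a, b, r0, r1, m, c1)

n = 8
a = [[i * n + j for j in range(n)] for i in range(n)]
b = [[0] * n for _ in range(n)]
transpose(a, b, 0, n, 0, n)
assert b[2][5] == a[5][2]
```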

  13. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

    Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching, giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.

  14. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object to a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and for new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications

  15. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  16. Leveraging KVM Events to Detect Cache-Based Side Channel Attacks in a Virtualization Environment

    Directory of Open Access Journals (Sweden)

    Ady Wahyudi Paundu

    2018-01-01

    Cache-based side channel attack (CSCa) techniques in virtualization systems are becoming more advanced, while defense methods against them are still perceived as nonpractical. The most recent CSCa variant, called Flush + Flush, has shown that current detection methods can be easily bypassed. Within this work, we introduce a novel monitoring approach to detect CSCa operations inside a virtualization environment. We utilize Kernel Virtual Machine (KVM) event data in the kernel and process this data using a machine learning technique to identify any CSCa operation in the guest Virtual Machine (VM). We evaluate our approach using Receiver Operating Characteristic (ROC) diagrams of multiple attack and benign operation scenarios. Our method successfully separates the CSCa datasets from the non-CSCa datasets, on both trained and nontrained data scenarios, including the Flush + Flush attack scenario. We are also able to explain the classification results by extracting the set of most important features that separate both classes using their Fisher scores, and we show that our monitoring approach can work to detect CSCa in general. Finally, we evaluate the overhead of our CSCa monitoring method and show that it has a negligible computation overhead on the host and the guest VM.
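
    A sketch of the pipeline shape: per-interval counts of KVM events become feature vectors, and a supervised classifier separates attack from benign intervals. The synthetic data, feature count, and choice of a random forest are assumptions for illustration; the paper's exact features and model may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Four hypothetical per-interval event counts (e.g. VM exits, IRQ injections).
benign = rng.normal(loc=100, scale=10, size=(200, 4))
attack = rng.normal(loc=160, scale=10, size=(200, 4))  # burst-heavy intervals
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)          # 0 = benign, 1 = CSCa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```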

  17. Test data generation for LRU cache-memory testing

    OpenAIRE

    Kornikhin, Evgeni

    2009-01-01

    System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents for a given behavior of an assembly program (with cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes it in detail only for the basic types of cache memory: fully associative caches and direct-mapped caches.
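
    The reference model such a generator must reproduce can be sketched directly: to force a hit or miss at a given instruction, the generator must pick initial contents consistent with the replacement behavior below. This is a generic fully associative LRU model, not the paper's algorithm.

```python
from collections import OrderedDict

class LRUCache:
    """Fully associative LRU cache model."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()       # least recently used entry first

    def access(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)     # refresh recency
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict least recently used
        self.lines[addr] = True
        return "miss"
```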

  18. Research and Implementation of Software Used for the Remote Control for VM700T Video Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Song Wenjie

    2015-01-01

    In this paper, measurement software that can be used to realize remote control of the VM700T video measuring instrument is introduced. Users can operate the VM700T through a virtual panel on a client computer, select the results the measuring equipment displays for transmission, and then view the image on the VM700T virtual panel in real time. The system has practical value and plays an important role in distance learning. The functions realized by the system mainly include four aspects: real-time transmission of messages based on socket technology, the serial connection between the server PC and the VM700T measuring equipment, image acquisition based on VFW technology with JPEG compression and decompression, and network transmission of image files. Actual network transmission tests show that the data acquisition method of this thesis is flexible and convenient, and that the system is extraordinarily stable. It can display measurement results in real time and meets the requirements of remote control. This paper includes a summary of the operating principle, a detailed introduction of the system implementation process, and some related technology.

  19. MESI Cache Coherence Simulator for Teaching Purposes

    OpenAIRE

    Gómez Luna, Juan; Herruzo Gómez, Ezequiel; Benavides Benítez, José Ignacio

    2009-01-01

    Nowadays, computational systems (multiprocessors and uniprocessors) need to avoid the cache coherence problem. There are several techniques to solve this problem, and the MESI cache coherence protocol is one of them. This paper presents a simulator of the MESI protocol, which is used for teaching cache memory coherence in computer systems with hierarchical memory and for explaining the process of cache memory location in multilevel cache memory systems. The paper shows a d...
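
    The state machine such a teaching simulator models can be sketched as follows: each cache keeps one of the states Modified, Exclusive, Shared, Invalid per line, updated on local accesses and on bus events from other caches. Only the core transitions are shown; write-backs and bus signalling are omitted.

```python
def on_local(state, op, others_have_copy):
    """Transition for this cache's own read/write to the line."""
    if op == "read":
        if state == "I":
            return "S" if others_have_copy else "E"
        return state                       # M/E/S reads hit locally
    if op == "write":
        return "M"                         # gain ownership; others invalidate
    raise ValueError(op)

def on_bus(state, event):
    """Transition when another cache touches the same line."""
    if event == "bus_read":                # another cache reads the line
        return "S" if state in ("M", "E", "S") else "I"
    if event == "bus_write":               # another cache writes the line
        return "I"
    raise ValueError(event)

assert on_local("I", "read", others_have_copy=False) == "E"
assert on_bus("E", "bus_read") == "S"
assert on_bus("S", "bus_write") == "I"
```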

  20. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o...

  1. vlPFC-vmPFC-Amygdala Interactions Underlie Age-Related Differences in Cognitive Regulation of Emotion.

    Science.gov (United States)

    Silvers, Jennifer A; Insel, Catherine; Powers, Alisa; Franz, Peter; Helion, Chelsea; Martin, Rebecca E; Weber, Jochen; Mischel, Walter; Casey, B J; Ochsner, Kevin N

    2017-07-01

    Emotion regulation is a critical life skill that develops throughout childhood and adolescence. Despite this development in emotional processes, little is known about how the underlying brain systems develop with age. This study examined emotion regulation in 112 individuals (aged 6-23 years) as they viewed aversive and neutral images using a reappraisal task. On "reappraisal" trials, participants were instructed to view the images as distant, a strategy that has been previously shown to reduce negative affect. On "reactivity" trials, participants were instructed to view the images without regulating emotions to assess baseline emotional responding. During reappraisal, age predicted less negative affect, reduced amygdala responses and inverse coupling between the ventromedial prefrontal cortex (vmPFC) and amygdala. Moreover, left ventrolateral prefrontal (vlPFC) recruitment mediated the relationship between increasing age and diminishing amygdala responses. This negative vlPFC-amygdala association was stronger for individuals with inverse coupling between the amygdala and vmPFC. These data provide evidence that vmPFC-amygdala connectivity facilitates vlPFC-related amygdala modulation across development.

  2. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many systems.

  3. Cache-aware network-on-chip for chip multiprocessors

    Science.gov (United States)

    Tatas, Konstantinos; Kyriacou, Costas; Dekoulis, George; Demetriou, Demetris; Avraam, Costas; Christou, Anastasia

    2009-05-01

    This paper presents the hardware prototype of a Network-on-Chip (NoC) for a chip multiprocessor that provides support for cache coherence, cache prefetching and cache-aware thread scheduling. A NoC with support to these cache related mechanisms can assist in improving systems performance by reducing the cache miss ratio. The presented multi-core system employs the Data-Driven Multithreading (DDM) model of execution. In DDM thread scheduling is done according to data availability, thus the system is aware of the threads to be executed in the near future. This characteristic of the DDM model allows for cache aware thread scheduling and cache prefetching. The NoC prototype is a crossbar switch with output buffering that can support a cache-aware 4-node chip multiprocessor. The prototype is built on the Xilinx ML506 board equipped with a Xilinx Virtex-5 FPGA.

  4. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  5. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  6. Calorie restriction as an anti-invasive therapy for malignant brain cancer in the VM mouse.

    Science.gov (United States)

    Shelton, Laura M; Huysentruyt, Leanne C; Mukherjee, Purna; Seyfried, Thomas N

    2010-07-23

    GBM (glioblastoma multiforme) is the most aggressive and invasive form of primary human brain cancer. We recently developed a novel brain cancer model in the inbred VM mouse strain that shares several characteristics with human GBM. Using bioluminescence imaging, we tested the efficacy of CR (calorie restriction) for its ability to reduce tumour size and invasion. CR targets glycolysis and rapid tumour cell growth in part by lowering circulating glucose levels. The VM-M3 tumour cells were implanted intracerebrally in the syngeneic VM mouse host. Approx. 12-15 days post-implantation, brains were removed and both ipsilateral and contralateral hemispheres were imaged to measure bioluminescence of invading tumour cells. CR significantly reduced the invasion of tumour cells from the implanted ipsilateral hemisphere into the contralateral hemisphere. The total percentage of Ki-67-stained cells within the primary tumour and the total number of blood vessels was also significantly lower in the CR-treated mice than in the mice fed ad libitum, suggesting that CR is anti-proliferative and anti-angiogenic. Our findings indicate that the VM-M3 GBM model is a valuable tool for studying brain tumour cell invasion and for evaluating potential therapeutic approaches for managing invasive brain cancer. In addition, we show that CR can be effective in reducing malignant brain tumour growth and invasion.

  7. Calorie Restriction as an Anti-Invasive Therapy for Malignant Brain Cancer in the VM Mouse

    Directory of Open Access Journals (Sweden)

    Laura M Shelton

    2010-07-01

    GBM (glioblastoma multiforme) is the most aggressive and invasive form of primary human brain cancer. We recently developed a novel brain cancer model in the inbred VM mouse strain that shares several characteristics with human GBM. Using bioluminescence imaging, we tested the efficacy of CR (calorie restriction) for its ability to reduce tumour size and invasion. CR targets glycolysis and rapid tumour cell growth in part by lowering circulating glucose levels. The VM-M3 tumour cells were implanted intracerebrally in the syngeneic VM mouse host. Approx. 12-15 days post-implantation, brains were removed and both ipsilateral and contralateral hemispheres were imaged to measure bioluminescence of invading tumour cells. CR significantly reduced the invasion of tumour cells from the implanted ipsilateral hemisphere into the contralateral hemisphere. The total percentage of Ki-67-stained cells within the primary tumour and the total number of blood vessels was also significantly lower in the CR-treated mice than in the mice fed ad libitum, suggesting that CR is anti-proliferative and anti-angiogenic. Our findings indicate that the VM-M3 GBM model is a valuable tool for studying brain tumour cell invasion and for evaluating potential therapeutic approaches for managing invasive brain cancer. In addition, we show that CR can be effective in reducing malignant brain tumour growth and invasion.

  8. Performance Tests of CMSSW on the CernVM

    CERN Document Server

    Petek, Marko

    2012-01-01

    CernVM is a Virtual Machine developed with the goal of allowing the execution of the experiment's software on different operating systems in an easy way for the users. To achieve this it makes use of Virtual Machine images consisting of a JEOS (Just Enough Operating System) Linux image, bundled with CVMFS, a distributed file system for software. This image can then be run with a proper virtualizer on most of the platforms available. It also aggressively caches data on the local user's machine so that it can operate disconnected from the network. CMS wanted to compare the performance of the CMS software running in the virtualized environment with the same software running on a native Linux box. To answer this need, a series of tests were made in a controlled environment during 2010-2011. This work presents the results of those tests.

  9. Cache management of tape files in mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Ma Nan; Yu Chuansong; Chen Gang

    2006-01-01

    This paper proposes a group-cooperative caching policy tailored to the characteristics of tapes and the requirements of the high energy physics domain. The policy integrates the advantages of traditional local caching and cooperative caching on the basis of a cache model. It divides the cache into independent groups; each group is made up of cooperating disks on the network. The paper also analyzes the directory management, update algorithm and cache consistency of the policy. Experiments show the policy meets the requirements of data processing and mass storage in the high energy physics domain very well. (authors)

  10. Reducing Competitive Cache Misses in Modern Processor Architectures

    OpenAIRE

    Prisagjanec, Milcho; Mitrevski, Pece

    2017-01-01

    The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, become the main reasons for an increased number of competitive cache misses and performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we make an attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This tec...

  11. Software trace cache

    OpenAIRE

    Ramírez Bellido, Alejandro; Larriba Pey, Josep; Valero Cortés, Mateo

    2005-01-01

    We explore the use of compiler optimizations, which optimize the layout of instructions in memory. The target is to enable the code to make better use of the underlying hardware resources regardless of the specific details of the processor/architecture in order to increase fetch performance. The Software Trace Cache (STC) is a code layout algorithm with a broader target than previous layout optimizations. We target not only an improvement in the instruction cache hit rate, but also an increas...

  12. The dCache scientific storage cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  13. A Distributed Cache Update Deployment Strategy in CDN

    Science.gov (United States)

    E, Xinhua; Zhu, Binjie

    2018-04-01

    The CDN management system distributes content objects to the edge of the internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. A cache strategy was designed in which content diffuses effectively within the cache group, so that more content is stored in the cache, which improves the group hit rate.

  14. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2010-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...

  15. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

    Full Text Available A big data platform for equipment condition assessment is built for comprehensive analysis. The platform has various application demands. According to response time, its applications can be divided into offline, interactive and real-time types. For real-time applications, data processing efficiency is important. In general, a data cache is one of the most efficient ways to improve query time. However, big data caching differs from traditional data caching. In this paper we propose a distributed cache management framework of big data for equipment condition assessment. It consists of three parts: a cache structure, a cache replacement algorithm and a cache placement algorithm. The cache structure is the basis of the latter two algorithms. Based on the framework and algorithms, we exploit the fact that only some valuable data is accessed during a given period of time, and place related data on neighbouring nodes, which largely reduces network transmission cost. We also validate the performance of our proposed approaches through extensive experiments. The results demonstrate that the proposed cache replacement algorithm and cache management framework have a higher hit rate or lower query time than the LRU and round-robin algorithms.
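
    Since the framework above is benchmarked against LRU, a minimal LRU baseline makes the comparison concrete. This is the textbook algorithm, not the paper's replacement or placement scheme:

```python
# Compact LRU cache baseline (the comparison policy named above).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def access(self, key, load):
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        value = load(key)
        self.store[key] = value
        return value

cache = LRUCache(capacity=2)
for k in ["a", "b", "a", "c", "b"]:
    cache.access(k, load=lambda key: key.upper())
print(cache.hits, cache.misses)  # 1 hit ("a"), 4 misses
```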

  16. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries called WATCHMAN, which is particularly well suited for a data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
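
    A rough sketch of the profit idea described above: each cached retrieved set is scored by (average reference rate × query execution cost) / set size, and a candidate is admitted only if it fits after evicting strictly lower-profit sets. Field names and tie-breaking details here are assumptions, not WATCHMAN's actual design:

```python
# Profit-based admission/replacement sketch in the spirit of WATCHMAN.

def profit(entry):
    return entry["ref_rate"] * entry["exec_cost"] / entry["size"]

def admit(cache, capacity_used, capacity, candidate):
    """Admit `candidate` if it fits, possibly evicting lower-profit sets."""
    victims = sorted(cache, key=profit)  # cheapest-to-lose first
    freed, chosen = 0, []
    for v in victims:
        if capacity_used + candidate["size"] - freed <= capacity:
            break
        if profit(v) >= profit(candidate):
            return False, []  # not worth displacing more profitable sets
        chosen.append(v)
        freed += v["size"]
    if capacity_used + candidate["size"] - freed > capacity:
        return False, []
    return True, chosen

cache = [
    {"id": "Q1", "ref_rate": 0.5, "exec_cost": 10.0, "size": 5},
    {"id": "Q2", "ref_rate": 0.1, "exec_cost": 2.0,  "size": 4},
]
ok, evict = admit(cache, capacity_used=9, capacity=10,
                  candidate={"id": "Q3", "ref_rate": 1.0,
                             "exec_cost": 8.0, "size": 3})
print(ok, [v["id"] for v in evict])  # True ['Q2']
```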

  17. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can......-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  18. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

    Full Text Available We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table – the result of the query execution. These queries have a special form called "universal relational query" based on three fundamental Relational Algebra operations: selection, projection and natural join. We should mention that such a form is the closest one to natural language and that the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed in a client's local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in the new user query and in the results of the queries executed in the cache. Such a comparison can be performed analytically, without the need for additional database queries. This method may also be used to identify missing data in the cache and execute the query on the server only for those data; here the analytical approach is used as well, which distinguishes our paper from existing technologies. We propose four theorems for testing the required conditions. The conditions of the first and third theorems allow us to determine the existence of the required data in the cache. The second and fourth theorems state conditions for executing queries with the cache only. The problem of cache data actualization is not discussed in this paper. However, it can be solved by cataloging queries on the server and serving them by triggers in background mode. The article is published in the author's wording.
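
    The containment test at the heart of the method can be illustrated for the simplest case, a conjunction of per-attribute range restrictions; the full method handles general universal relational queries, so this is only a toy instance:

```python
# Toy containment test: every restriction of the cached query must be at
# least as loose as the new query's, so the new query's truth space lies
# inside the cached one and the cached result already holds all answers.

def contained(new_q, cached_q):
    for attr, (c_lo, c_hi) in cached_q.items():
        n_lo, n_hi = new_q.get(attr, (float("-inf"), float("inf")))
        if n_lo < c_lo or n_hi > c_hi:
            return False
    return True

cached = {"age": (18, 65), "salary": (0, 100_000)}
print(contained({"age": (30, 40), "salary": (20_000, 50_000)}, cached))  # True
print(contained({"age": (10, 40)}, cached))  # False -> ask the server
```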

  19. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  20. Efficient Mobile Client Caching Supporting Transaction Semantics

    Directory of Open Access Journals (Sweden)

    IlYoung Chung

    2000-05-01

    Full Text Available In mobile client-server database systems, caching of frequently accessed data is an important technique that reduces contention on the narrow-bandwidth wireless channel. As the server in mobile environments may not have any information about the state of its clients' caches (stateless server), using a broadcasting approach to transmit updated data lists to numerous concurrent mobile clients is attractive. In this paper, a caching policy is proposed to maintain cache consistency for mobile computers. The proposed protocol adopts asynchronous (non-periodic) broadcasting as the cache invalidation scheme, and supports transaction semantics in mobile environments. With the asynchronous broadcasting approach, the proposed protocol can improve throughput by reducing transaction aborts with low communication costs. We study the performance of the protocol by means of simulation experiments.

  1. VM Selection and Migration Using MCDM to Improve Service Performance in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdullah Fadil

    2016-08-01

    Full Text Available Cloud computing is a heterogeneous and distributed environment, composed of clusters of networked servers with differing computing resource capacities that support the service models built on top of them. Virtual machines (VMs) represent dynamically available computing resources that can be allocated and reallocated on demand. Live migration of VMs between the physical servers of a cloud data center is used to achieve consolidation and to maximize VM utilization. In the VM consolidation procedure, VM selection and placement are often based on a single, static criterion. This study proposes VM selection and placement using multi-criteria decision making (MCDM) in a dynamic VM consolidation procedure in a cloud data center environment, in order to improve cloud computing services. A practical approach was taken in developing an OpenStack Cloud based environment, integrating VM selection and VM placement into the consolidation procedure using OpenStack-Neat. The results show that the VM selection and placement method with live migration was able to compensate for the loss caused by down-times of 11.994 seconds in response time. Response times increased by 6 ms while a VM was live-migrated from the source host to the destination host. The average response time of the VMs spread over the compute nodes after live migration was 67 ms, indicating load balance in the cloud computing system.

  2. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cached data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
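
    The unit-stride point is easy to demonstrate with a small, hedged NumPy experiment (absolute timings vary by machine) that traverses the same row-major array in cache-friendly and cache-hostile order:

```python
# Same data, two traversal orders: along rows of a C-ordered array the
# stride is one element (cache-line friendly); down columns it is the
# full row length, so most of each fetched cache line goes unused.
import time
import numpy as np

a = np.zeros((4096, 4096), dtype=np.float64)  # row-major (C order)

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))  # unit stride
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))  # long stride
t2 = time.perf_counter()

print("row-wise %.3fs, column-wise %.3fs" % (t1 - t0, t2 - t1))
```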

  3. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache trashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis. However, before implementing such an object cache, an empirical analysis of different organization forms is needed. We use a cross-profiling technique based on aspect-oriented programming in order to evaluate different object cache organizations with standard Java benchmarks. From the evaluation we conclude that field access exhibits some temporal locality, but almost no spatial locality. Therefore, filling long cache lines on a miss just introduces a high miss penalty without increasing the hit rate enough to make up for the increased miss penalty. For an object cache, it is more efficient to fill...

  4. Archeological Excavations at the Wanapum Cache Site

    International Nuclear Information System (INIS)

    T. E. Marceau

    2000-01-01

    This report was prepared to document the actions taken to locate and excavate an abandoned Wanapum cache located east of the 100-H Reactor area. Evidence (i.e., glass, ceramics, metal, and wood) obtained from shovel and backhoe excavations at the Wanapum cache site indicates that the storage caches were found. The highly fragmented condition of these materials argues that the contents of the caches were collected or destroyed prior to the caches being burned and buried by mechanical equipment. While the fiber nets would have been destroyed by fire, the specialized stone weights would have remained behind. The fact that the site might have been gleaned of desirable artifacts prior to its demolition is consistent with the account by Riddell (1948) for a contemporary village site. Unfortunately, fishing equipment, owned by and used on behalf of the village, that might have been returned to productive use has been irretrievably lost.

  5. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  6. Corvid re-caching without 'theory of mind': a model.

    Science.gov (United States)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2012-01-01

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  7. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Full Text Available Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  8. CernVM - a virtual software appliance for LHC applications

    International Nuclear Information System (INIS)

    Buncic, P; Sanchez, C Aguado; Blomer, J; Franco, L; Mato, P; Harutyunian, A; Yao, Y

    2010-01-01

    CernVM is a Virtual Software Appliance capable of running physics applications from the LHC experiments at CERN. It aims to provide a complete and portable environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) as well as on the Grid, independently of Operating System platforms (Linux, Windows, MacOS). The experiment application software and its specific dependencies are built independently from CernVM and delivered to the appliance just in time by means of a CernVM File System (CVMFS) specifically designed for efficient software distribution. The procedures for building, installing and validating software releases remain under the control and responsibility of each user community. We provide a mechanism to publish pre-built and configured experiment software releases to a central distribution point, from where they find their way to the running CernVM instances via a hierarchy of proxy servers or content delivery networks. In this paper, we present the current state of the CernVM project, compare the performance of CVMFS to that of traditional network file systems like AFS, and discuss possible scenarios that could further improve its performance and scalability.

  9. Analysis of preemption costs for the stack cache

    DEFF Research Database (Denmark)

    Naji, Amine; Abbaspour, Sahar; Brandner, Florian

    2018-01-01

    , the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  10. Enabling μCernVM for the Interactive Use Case

    CERN Document Server

    Nicolaou, Vasilis

    2013-01-01

    The $\\mu$CernVM will be the successor of the CernVM as a new appliance to help with accessing LHC for data analysis and development. CernVM has a web appliance agent that facilitates user interaction with the virtual machine and reduces the need for executing shell commands or installing graphical applications for displaying basic information such as memory usage or performing simple tasks such as updating the operating system. The updates are done differently in the $\\mu$CernVM than mainstream Linux distributions. Its filesystem is a composition of a read-only layer that exists in the network and a read/write layer that is initilised on first boot and keeps the user changes afterwards. Thus, means are provided to avoid loss of user data and system instabilities when the operating system is updated by fetching a new read-only layer.

  11. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in Mobile Ad hoc Networks (MANETs). Maintaining the cache is a challenging issue in large MANETs due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the most suitable replaceable data, based on neighbours' interest and the fitness value of cached data, in order to store newly arrived data. This work also elects an ideal cluster head (CH) using the metaheuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach, as the number of nodes and their speed increase.

  12. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes ... the random placement of the first element of the structure in memory. As searching in the Disk Access Model (DAM) can be performed in log_B N + 1 block transfers, this result shows a separation between the 2-level DAM and cache-oblivious memory-hierarchy models. By extending the DAM model to k levels, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  13. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on queries from users' logs. The highest-ranked user queries are kept in the static cache, which holds the most popular queries. We adopt a dynamic cache as an auxiliary to optimize the distribution of the cached data, and propose a distribution strategy for the cache data. Experiments show that the two-level cache offers advantages in hit rate, efficiency, and time consumption compared with other cache structures.
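
    A minimal sketch of the two-level structure described above, assuming a static cache pinned with the most popular logged queries and a small LRU dynamic cache behind it (names and sizes are illustrative):

```python
# Two-level query cache: static tier built from a query log, LRU dynamic
# tier for everything else.
from collections import Counter, OrderedDict

class TwoLevelCache:
    def __init__(self, log, static_size, dynamic_size):
        top = Counter(log).most_common(static_size)
        self.static = {q: "results(%s)" % q for q, _ in top}  # never evicted
        self.dynamic = OrderedDict()  # LRU tier
        self.dynamic_size = dynamic_size

    def lookup(self, query, run_query):
        if query in self.static:
            return self.static[query], "static hit"
        if query in self.dynamic:
            self.dynamic.move_to_end(query)
            return self.dynamic[query], "dynamic hit"
        result = run_query(query)
        if len(self.dynamic) >= self.dynamic_size:
            self.dynamic.popitem(last=False)  # evict least recently used
        self.dynamic[query] = result
        return result, "miss"

log = ["cat", "dog", "cat", "fish", "cat", "dog"]
cache = TwoLevelCache(log, static_size=1, dynamic_size=2)
print(cache.lookup("cat", run_query=lambda q: "results(%s)" % q))  # static hit
print(cache.lookup("owl", run_query=lambda q: "results(%s)" % q))  # miss
```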

  14. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on queries from users' logs. The highest-ranked user queries are kept in the static cache, which holds the most popular queries. We adopt a dynamic cache as an auxiliary to optimize the distribution of the cached data, and propose a distribution strategy for the cache data. Experiments show that the two-level cache offers advantages in hit rate, efficiency, and time consumption compared with other cache structures.

  15. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., in SBS caches), which empowers local communication and alleviates traffic congestion in the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
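
    To make the role of Zipf caching concrete, the following back-of-the-envelope sketch computes the hit probability of caching the C most popular files under a Zipf popularity law; it deliberately ignores the multicast, spectrum-access and coverage machinery of the paper:

```python
# Hit probability of a most-popular-first cache under Zipf popularity.
import numpy as np

def zipf_popularity(n_files, alpha):
    ranks = np.arange(1, n_files + 1)
    p = ranks ** (-alpha)
    return p / p.sum()

def hit_probability(cache_size, n_files, alpha):
    p = zipf_popularity(n_files, alpha)
    return p[:cache_size].sum()  # probability mass of the cached head

for c in (10, 50, 100):
    print(c, round(hit_probability(c, n_files=1000, alpha=0.8), 3))
```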

  16. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  17. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  18. Dynamic web cache publishing for IaaS clouds using Shoal

    International Nuclear Information System (INIS)

    Gable, Ian; Chester, Michael; Berghaus, Frank; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan; Armstrong, Patrick; Charbonneau, Andre

    2014-01-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache

  19. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states and unmet states).

  20. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  1. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client mobility make cache management a challenge. In this paper, we propose a utility-based cache replacement policy, least utility value (LUV), to improve data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and the data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as its replacement policy. In ZC, the one-hop neighbors of a client form a cooperation zone, since the cost of communicating with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV-based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
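
    A hedged sketch of least-utility-value eviction in the spirit of LUV: utility rises with access probability and coherency (modelled here as a time-to-live) and falls with distance to the source and item size. The exact combination below is illustrative, not the paper's formula:

```python
# Evict the cached item with the lowest utility value.

def utility(item):
    return (item["access_prob"] * item["ttl"]) / (item["distance"] * item["size"])

def evict_one(cache):
    victim = min(cache, key=utility)
    cache.remove(victim)
    return victim

cache = [
    {"id": "d1", "access_prob": 0.6, "ttl": 30.0, "distance": 1, "size": 4},
    {"id": "d2", "access_prob": 0.2, "ttl": 5.0,  "distance": 3, "size": 8},
    {"id": "d3", "access_prob": 0.4, "ttl": 60.0, "distance": 2, "size": 2},
]
print(evict_one(cache)["id"])  # "d2" -- the lowest utility value
```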

  2. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related...

  3. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  4. Diffusion bonding of IN 718 to VM 350 grade maraging steel

    Science.gov (United States)

    Crosby, S. R.; Biederman, R. R.; Reynolds, C. C.

    1972-01-01

    Diffusion bonding studies have been conducted on IN 718, VM 350 and the dissimilar alloy couple, IN 718 to maraging steel. The experimental processing parameters critical to obtaining consistently good diffusion bonds between IN 718 and VM 350 were determined. Interrelationships between temperature, pressure and surface preparation were explored for short bonding intervals under vacuum conditions. Successful joining was achieved for a range of bonding cycle temperatures, pressures and surface preparations. The strength of the weaker parent material was used as the criterion for a successful tensile test of the heat-treated bond. Studies of VM-350/VM-350 couples in the as-bonded condition showed greater yielding and failure outside the bond region.

  5. Smart caching based on mobile agent of power WebGIS platform.

    Science.gov (United States)

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then study its caching technology in detail, which comprises a dynamic display cache model, a caching structure based on mobile agents, and a cache data model. We designed experiments with different data capacities to compare the performance of the WebGIS with the proposed caching model against a traditional WebGIS. The experimental results showed that, with the same hardware environment, the response time of the WebGIS both with and without the caching model increased as data capacity grew, and the larger the data, the greater the performance improvement of the WebGIS with the proposed caching model.

  6. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and by interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. The BACH takes full advantage of explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly when hardware overhead is counted in.

  7. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites and fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of harddisks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  8. Optimization of CernVM early boot process

    CERN Document Server

    Mazdin, Petra

    2015-01-01

    The CernVM virtual machine is a Linux-based virtual appliance optimized for High Energy Physics experiments. It is used for cloud computing, volunteer computing, and software development by the four large LHC experiments. The goal of this project is profiling and optimizing the boot process of the CernVM. A key part was the development of a performance profiler for shell scripts, as an extension to the popular BusyBox open source UNIX tool suite. Based on the measurements, costly shell code was replaced by more efficient, custom C programs. The results are compared to the original ones, demonstrating a successful optimization.

  9. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in a distributed environment has found its application in many fields of science (Nuclear and Particle Physics (NPP), astronomy and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache, further expanding the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one delivers the best data access performance under realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001-90% of the complete data-set and low-watermarks within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storages with partial replicas of the entire data-set.
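
    The trace-replay methodology described above can be sketched compactly: replay a recorded access log against a cache of a given size and count hits and hit bytes. The sketch below uses LRU as the example policy and an assumed (file_id, size) trace format, not the study's actual tooling:

```python
# Trace-driven cache simulation: LRU over a (file_id, size) access trace.
from collections import OrderedDict

def simulate_lru(trace, cache_bytes):
    cache, used = OrderedDict(), 0
    hits = hit_bytes = 0
    for file_id, size in trace:
        if file_id in cache:
            cache.move_to_end(file_id)
            hits += 1
            hit_bytes += size
            continue
        while used + size > cache_bytes and cache:
            _, evicted = cache.popitem(last=False)  # least recently used
            used -= evicted
        if size <= cache_bytes:
            cache[file_id] = size
            used += size
    return hits, hit_bytes

trace = [("f1", 100), ("f2", 400), ("f1", 100), ("f3", 600), ("f1", 100)]
print(simulate_lru(trace, cache_bytes=800))  # (2, 200): hits, bytes from cache
```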

  10. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of a cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory-based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA, referred to as single length cycle 2-attractor cellular automata (TACA), has been employed to detect inconsistencies in the cache line states of the processors' private caches. The TACA module captures the coherence status of the CMPs' cache system and memorizes any inconsistent recording of cache line states during a processor's reference to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then settle to an attractor state, indicating a quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs' processor pool ensures better efficiency in determining inconsistencies by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic shows that the overhead of the proposed coherence verification module is much lower than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs' cache system.

  11. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it req...

  12. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory...

  13. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic requests, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data so as to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult to retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity-degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
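
    A toy version of a VoI score along the four axes named above; the weights and the linear combination are invented for illustration and are not the paper's formula:

```python
# Toy VoI score: fresh, popular, hard-to-reacquire readings score highest
# and therefore stay cached longest.

def voi(age_s, popularity, interference_cost, active_time_s,
        weights=(1.0, 2.0, 0.5, 0.5)):
    w_age, w_pop, w_int, w_act = weights
    freshness = 1.0 / (1.0 + age_s)  # periodically refreshed data decays
    return (w_age * freshness + w_pop * popularity
            + w_int * interference_cost + w_act * active_time_s)

items = {
    "heart_rate": voi(age_s=2, popularity=0.9,
                      interference_cost=0.2, active_time_s=0.5),
    "temperature": voi(age_s=60, popularity=0.1,
                       interference_cost=0.1, active_time_s=0.1),
}
print(min(items, key=items.get))  # "temperature" is evicted first
```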

  14. A detailed GPU cache model based on reuse distance theory

    NARCIS (Netherlands)

    Nugteren, C.; Braak, van den G.J.W.; Corporaal, H.; Bal, H.E.

    2014-01-01

    As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour. On
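
    Reuse distance, the notion underlying the model above, is the number of distinct addresses touched between two accesses to the same address; a fully associative LRU cache of size S hits exactly when that distance is below S. A direct (quadratic, illustration-only) computation:

```python
# Reuse distance per access; None marks a cold (first) access.

def reuse_distances(trace):
    last_seen = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            window = trace[last_seen[addr] + 1 : i]
            out.append(len(set(window)))  # distinct addresses in between
        else:
            out.append(None)
        last_seen[addr] = i
    return out

trace = ["a", "b", "c", "a", "b", "b"]
print(reuse_distances(trace))  # [None, None, None, 2, 2, 0]
```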

  15. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment accesses become frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments when playback is interrupted. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach yields a higher hit ratio than previous work under various environmental parameters.
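
    The admission-control side of such a design can be sketched as follows; the threshold rule and tier handling here are invented for illustration and are not the paper's exact policy:

```python
# Illustrative admission control for a two-tier segment cache: a segment
# enters tier 1 only once it has been requested often enough; the segment
# just played is demoted to tier 2 as a "possibly played" candidate.
from collections import Counter, deque

class SegmentCache:
    def __init__(self, t1_size, t2_size, admit_threshold=2):
        self.t1 = deque(maxlen=t1_size)  # to-be-played segments
        self.t2 = deque(maxlen=t2_size)  # possibly-played segments
        self.freq = Counter()
        self.admit_threshold = admit_threshold

    def request(self, video, segment):
        key = (video, segment)
        self.freq[key] += 1
        if key in self.t1 or key in self.t2:
            return "hit"
        if self.freq[key] < self.admit_threshold:
            return "bypass"  # not requested often enough to cache yet
        self.t1.append(key)
        if segment > 0:
            self.t2.append((video, segment - 1))  # demote previous segment
        return "admitted"

c = SegmentCache(t1_size=4, t2_size=4)
print([c.request("movie1", 0) for _ in range(3)])  # bypass, admitted, hit
```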

  16. Novel dynamic caching for hierarchically distributed video-on-demand systems

    Science.gov (United States)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available programs, with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment, and is based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  17. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

    of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk to introduce complexity to the otherwise simple WCET analysis. In this work, we investigate three...

  18. Toxicity and medical countermeasure studies on the organophosphorus nerve agents VM and VX.

    Science.gov (United States)

    Rice, Helen; Dalton, Christopher H; Price, Matthew E; Graham, Stuart J; Green, A Christopher; Jenner, John; Groombridge, Helen J; Timperley, Christopher M

    2015-04-08

    To support the effort to eliminate the Syrian Arab Republic chemical weapons stockpile safely, there was a requirement to provide scientific advice based on experimentally derived information on both toxicity and medical countermeasures (MedCM) in the event of exposure to VM, VX or VM-VX mixtures. Complementary in vitro and in vivo studies were undertaken to inform that advice. The penetration rate of neat VM was not significantly different from that of neat VX, through either guinea pig or pig skin in vitro. The presence of VX did not affect the penetration rate of VM in mixtures of various proportions. A lethal dose of VM was approximately twice that of VX in guinea pigs poisoned via the percutaneous route. There was no interaction in mixed agent solutions which altered the in vivo toxicity of the agents. Percutaneous poisoning by VM responded to treatment with standard MedCM, although complete protection was not achieved.

  19. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name look ups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component look ups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.

  20. A Novel Cache Invalidation Scheme for Mobile Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we propose a strategy for maintaining cache consistency in wireless mobile environments, which adds a validation server (VS) to the GPRS network, utilizes the location information of the mobile terminal held in the SGSN at the GPRS backbone, sends invalidation information only to online mobile terminals holding the affected cached data, and reduces the amount of information sent in asynchronous transmission. This strategy enables a mobile terminal to access cached data with very little computation, little delay and arbitrary disconnection intervals, and outperforms the synchronous invalidation report (IR) and asynchronous state (AS) approaches in overall performance.

  1. A distributed storage system with dCache

    Science.gov (United States)

    Behrmann, G.; Fuhrmann, P.; Grønager, M.; Kleist, J.

    2008-07-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  2. A distributed storage system with dCache

    International Nuclear Information System (INIS)

    Behrmann, G; Groenager, M; Fuhrmann, P; Kleist, J

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  3. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    Previously, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  4. Energy Efficient Caching in Backhaul-Aware Cellular Networks with Dynamic Content Popularity

    Directory of Open Access Journals (Sweden)

    Jiequ Ji

    2018-01-01

    Full Text Available Caching popular contents at base stations (BSs has been regarded as an effective approach to alleviate the backhaul load and to improve the quality of service. To meet the explosive data traffic demand and to save energy consumption, energy efficiency (EE has become an extremely important performance index for the 5th generation (5G cellular networks. In general, there are two ways for improving the EE for caching, that is, improving the cache-hit rate and optimizing the cache size. In this work, we investigate the energy efficient caching problem in backhaul-aware cellular networks jointly considering these two approaches. Note that most existing works are based on the assumption that the content catalog and popularity are static. However, in practice, content popularity is dynamic. To timely estimate the dynamic content popularity, we propose a method based on shot noise model (SNM. Then we propose a distributed caching policy to improve the cache-hit rate in such a dynamic environment. Furthermore, we analyze the tradeoff between energy efficiency and cache capacity for which an optimization is formulated. We prove its convexity and derive a closed-form optimal cache capacity for maximizing the EE. Simulation results validate the proposed scheme and show that EE can be improved with appropriate choice of cache capacity.
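
The paper's popularity estimator is built on the shot noise model; the sketch below uses an exponentially decaying kernel as one concrete (assumed) choice of shot shape, and caches the top-C contents by estimated intensity:

```python
import math

def snm_popularity(request_times, now, tau=3600.0):
    """Estimate a content's instantaneous popularity under a shot-noise
    view: each past request contributes an exponentially decaying
    intensity with lifetime tau (seconds). The kernel and tau are
    assumptions of this sketch, not the paper's exact estimator."""
    return sum(math.exp(-(now - t) / tau) for t in request_times if t <= now)

def select_cache(histories, now, capacity, tau=3600.0):
    """Cache the `capacity` contents with the highest estimated popularity.
    histories: dict of content id -> list of past request timestamps."""
    scores = {c: snm_popularity(ts, now, tau) for c, ts in histories.items()}
    return sorted(scores, key=scores.get, reverse=True)[:capacity]
```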

  5. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability.

  6. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data results in the requirement of high scalability. Our current priority is to move towards a distributed and properly load-balanced set of services based on containers. The aim of this project is to implement a generic caching mechanism applicable to our services and chosen architecture. The project will first require research into the different aspects of distributed caching (persistence, GC-free caching, cache consistency, etc.) and the available technologies, followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation, the last phase of the project will require implementing a monitoring layer and integrating it with the current ELK stack.

  7. Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Feng Li

    2018-01-01

    Full Text Available Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop caching information table, used for caching replacement when there is not enough cache space available locally. On receiving a caching request, every caching node determines the weight of the required contents and responds according to the availability of its own caching space. Furthermore, to increase caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different cache allocation strategies are devised for various circumstances to enhance user QoE. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.
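
A toy rendering of the cooperative replacement loop: a full node first consults its one-hop peers before evicting its lowest-weight content. Unit-size contents and the weight comparison are simplifying assumptions of this sketch, not the paper's exact protocol:

```python
class EdgeNode:
    """Cooperative edge cache with a one-hop neighbor set."""

    def __init__(self, capacity, neighbors=None):
        self.capacity = capacity           # number of unit-size contents
        self.store = {}                    # content id -> weight
        self.neighbors = neighbors or []   # one-hop caching peers

    def handle_request(self, cid, weight):
        """Answer a peer's caching request according to availability."""
        if len(self.store) < self.capacity:
            self.store[cid] = weight
            return True
        return False

    def insert(self, cid, weight):
        if self.handle_request(cid, weight):
            return
        # No local room: consult the one-hop caching information table
        # before evicting the lowest-weight local content.
        if any(peer.handle_request(cid, weight) for peer in self.neighbors):
            return
        victim = min(self.store, key=self.store.get)
        if self.store[victim] < weight:
            del self.store[victim]
            self.store[cid] = weight
```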

  8. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al. at S&P 2015), which is a cache timing attack against AES or similar algorithms in virtualized environments. This paper applies variants of this cache timing attack to Intel's latest generation of microprocessors. It enables a spy process to recover cryptographic keys, interacting with the victim processes only over TCP. The threat model is a logically separated but CPU co-located attacker with root privileges. We report successful and practically verified applications of this attack against a wide range of microarchitectures, from a two-core Nehalem processor (i5-650) to two-core Haswell (i7-4600M) and four-core Skylake processors (i7-6700). The attack...

  9. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
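
The stride effect is easy to reproduce: for a stride that is a multiple of the line size, successive accesses visit set indices (k * stride_lines) mod num_sets, so only num_sets / gcd(num_sets, stride_lines) distinct sets are ever used. A small check with illustrative cache parameters (the paper's formula is more general):

```python
from math import gcd

def sets_touched(stride_bytes, line_bytes=64, num_sets=512):
    """Number of distinct cache sets visited by a constant-stride sweep,
    assuming the stride is a multiple of the line size."""
    stride_lines = max(1, stride_bytes // line_bytes)
    return num_sets // gcd(num_sets, stride_lines)

# Power-of-two strides are the "innocent-looking" bad ones:
for s in (64, 192, 4096, 32768):
    print(s, sets_touched(s))
# 64 -> 512 sets, 192 -> 512 sets, 4096 -> 8 sets, 32768 -> 1 set
```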

  10. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since on-chip caches usually consume a significant amount of power, power and energy consumption have become among the most important design constraints, and the cache is one of the most attractive targets for power reduction. This paper presents an approach to reducing the dynamic power consumption of the CPU cache using an inverted cache architecture. Our approach tries to reduce dynamic write power dissipatio...

  11. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

    Full Text Available Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  12. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  13. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    Science.gov (United States)

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  14. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing

    Science.gov (United States)

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-01-01

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313
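
One way to see the caching subproblem: with a fixed cache budget at a BS, picking contents greedily by expected-QoE density is a classic knapsack heuristic. The QoE model below (popularity times per-view utility) is an assumption, not the paper's formulation:

```python
def greedy_explicit_cache(contents, budget_bytes):
    """Pick contents by expected-QoE density until the BS cache budget
    is spent. contents: iterable of (cid, size_bytes, popularity,
    utility) tuples; popularity * utility is an assumed QoE proxy."""
    ranked = sorted(contents,
                    key=lambda c: c[2] * c[3] / c[1],  # QoE per byte
                    reverse=True)
    chosen, used = [], 0
    for cid, size, pop, util in ranked:
        if used + size <= budget_bytes:
            chosen.append(cid)
            used += size
    return chosen
```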

  15. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait for grouping more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases and has a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for mapping table storage in RAM.
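
A self-contained toy of the placement rule: large flushed page groups go straight to the block-mapped region (BMR), small ones wait in the page-mapped region (PMR) until enough pages of the same block accumulate. The data structures and the threshold value here are illustrative, not CACH-FTL's actual layout:

```python
from collections import defaultdict

GROUP_THRESHOLD = 8   # pages; CACH-FTL's configurable placement threshold

class CachFtlSketch:
    """Toy model of CACH-FTL's threshold-based page-group placement."""

    def __init__(self, threshold=GROUP_THRESHOLD):
        self.threshold = threshold
        self.bmr = {}                 # logical block -> pages (block-mapped)
        self.pmr = defaultdict(list)  # logical block -> buffered pages

    def flush_group(self, block_id, pages):
        if len(pages) >= self.threshold:
            self.bmr[block_id] = pages               # cheap block mapping
        else:
            self.pmr[block_id].extend(pages)         # fine-grained mapping
            if len(self.pmr[block_id]) >= self.threshold:
                # Enough pages of this block gathered: merge into the BMR.
                self.bmr[block_id] = self.pmr.pop(block_id)
```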

  16. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.

    Science.gov (United States)

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-03-22

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.

  17. Randomized Caches Considered Harmful in Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Jan Reineke

    2014-06-01

    Full Text Available We investigate the suitability of caches with randomized placement and replacement in the context of hard real-time systems. Such caches have been claimed to drastically reduce the amount of information required by static worst-case execution time (WCET analysis, and to be an enabler for measurement-based probabilistic timing analysis. We refute these claims and conclude that with prevailing static and measurement-based analysis techniques caches with deterministic placement and least-recently-used replacement are preferable over randomized ones.

  18. Managing the Virtual Machine Lifecycle of the CernVM Project

    International Nuclear Information System (INIS)

    Charalampidis, I; Blomer, J; Buncic, P; Harutyunyan, A; Larsen, D

    2012-01-01

    CernVM is a virtual software appliance designed to support the development cycle and provide a runtime environment for LHC applications. It consists of a minimal Linux distribution, a specially tuned file system designed to deliver application software on demand, and contextualization tools. The maintenance of these components involves a variety of different procedures and tools that cannot always connect with each other. Additionally, most of these procedures need to be performed frequently. Currently, in the CernVM project, every time we build a new virtual machine image, we have to perform the whole process manually, because of the heterogeneity of the tools involved. The overall process is error-prone and time-consuming. Therefore, to simplify and aid this continuous maintenance process, we are developing a framework that combines these virtually unrelated tools with a single, coherent interface. To do so, we identified all the involved procedures and their tools, tracked their dependencies and organized them into logical groups (e.g. build, test, instantiate). These groups define the procedures that are performed throughout the lifetime of a virtual machine. In this paper we describe the Virtual Machine Lifecycle and the framework we developed (iAgent) in order to simplify the maintenance process.

  19. Learning Automata Based Caching for Efficient Data Access in Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Zhenjie Ma

    2018-01-01

    Full Text Available Effective data access is one of the major challenges in Delay Tolerant Networks (DTNs), which are characterized by intermittent network connectivity and unpredictable node mobility. Various data caching schemes have been proposed to improve the performance of data access in DTNs, but most perform poorly due to the lack of global network state information and the changing network topology. In this paper, we propose a novel data caching scheme based on cooperative caching in DTNs, aiming to improve the success rate of data access and to reduce data access delay. In the proposed scheme, learning automata are utilized to select a set of caching nodes as the Caching Node Set (CNS). Unlike existing caching schemes, which fail to address the challenging characteristics of DTNs, our scheme is designed to automatically self-adjust to the changing network topology through well-designed voting and updating processes. The proposed scheme improves the overall performance of data access in DTNs compared with former caching schemes. Simulations verify the feasibility of our scheme and its performance improvements.
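
The voting/updating machinery can be sketched with a standard linear reward-inaction automaton per candidate node; whether the paper uses exactly this update rule is an assumption:

```python
import random

class CachingAutomaton:
    """Linear reward-inaction automaton for one candidate caching node;
    p is the probability of keeping the node in the Caching Node Set."""

    def __init__(self, p=0.5, alpha=0.1):
        self.p, self.alpha = p, alpha

    def vote(self):
        """Stochastically decide whether the node joins the CNS."""
        return random.random() < self.p

    def update(self, chosen, success):
        """Reinforce only on rewarded outcomes (reward-inaction)."""
        if not success:
            return
        if chosen:
            self.p += self.alpha * (1.0 - self.p)   # reward joining
        else:
            self.p -= self.alpha * self.p           # reward abstaining
```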

  20. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching": relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii), we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  1. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca

  2. High Performance Analytics with the R3-Cache

    Science.gov (United States)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  3. Adaptive Neuro-fuzzy Inference System as Cache Memory Replacement Policy

    Directory of Open Access Journals (Sweden)

    CHUNG, Y. M.

    2014-02-01

    Full Text Available To date, no cache memory replacement policy that can perform efficiently for all types of workloads is yet available. Replacement policies used in level 1 cache memory may not be suitable in level 2. In this study, we focused on developing an adaptive neuro-fuzzy inference system (ANFIS) as a replacement policy for improving level 2 cache performance in terms of miss ratio. The recency and frequency of referenced blocks were used as input data for ANFIS to make replacement decisions. MATLAB was employed as a training tool to obtain the trained ANFIS model, which was then implemented on SimpleScalar. Simulations on SimpleScalar showed that the miss ratio improved by as much as 99.95419% and 99.95419% for instruction level 2 cache, and up to 98.04699% and 98.03467% for data level 2 cache, compared with least recently used and least frequently used, respectively.
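
A crisp stand-in for the ANFIS decision makes the inputs concrete: score each cached block from normalized recency and frequency and evict the lowest score. The trained ANFIS replaces this fixed linear weighting with a learned nonlinear surface; the weights here are assumptions:

```python
def pick_victim(blocks, now, w_recency=0.5, w_frequency=0.5):
    """Choose a replacement victim from a non-empty dict of
    tag -> (last_access_time, access_count)."""
    max_cnt = max(cnt for _, cnt in blocks.values()) or 1

    def score(tag):
        last, cnt = blocks[tag]
        recency = 1.0 / (1.0 + (now - last))   # newer -> closer to 1
        frequency = cnt / max_cnt              # hotter -> closer to 1
        return w_recency * recency + w_frequency * frequency

    return min(blocks, key=score)              # evict the coldest block
```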

  4. Probabilistic Caching Placement in the Presence of Multiple Eavesdroppers

    Directory of Open Access Journals (Sweden)

    Fang Shi

    2018-01-01

    Full Text Available Wireless caching has attracted a lot of attention in recent years, since it can reduce backhaul cost significantly and improve the user-perceived experience. Existing works on wireless caching and transmission mainly focus on communication scenarios without eavesdroppers. When eavesdroppers appear, it is of vital importance to investigate physical-layer security for wireless caching aided networks. In this paper, a caching network is studied in the presence of multiple eavesdroppers, which can overhear the secure information transmission. We model the locations of eavesdroppers by a homogeneous Poisson Point Process (PPP), and the eavesdroppers jointly receive and decode contents through maximum ratio combining (MRC) reception, which yields the worst case of wiretap. Moreover, the main performance metric is the average probability of successful transmission, i.e., the probability of finding and successfully transmitting all the requested files within a radius R. We study the system's secure transmission performance by deriving a single-integral result, which is significantly affected by the probability of caching each file. We therefore formulate an optimization problem over the probability of caching each file, in order to optimize the system's secure transmission performance. This optimization problem is nonconvex, and we turn to the genetic algorithm (GA) to solve it. Finally, simulation and numerical results are provided to validate the proposed studies.
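
A compact GA over the per-file caching probabilities, with `fitness(q)` standing in for the paper's average probability of successful transmission (there obtained from a stochastic-geometry integral); all GA hyperparameters are assumptions:

```python
import random

def ga_caching_probs(num_files, capacity, fitness,
                     pop_size=40, generations=200, mut=0.05):
    """Search for caching probabilities q_1..q_N with sum(q) <= capacity
    that maximize fitness(q)."""

    def normalize(q):
        total = sum(q)
        scale = min(1.0, capacity / total) if total > 0 else 0.0
        return [x * scale for x in q]          # enforce the cache budget

    population = [normalize([random.random() for _ in range(num_files)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]   # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            child = [min(1.0, max(0.0, x + random.uniform(-mut, mut)))
                     for x in child]                           # mutation
            children.append(normalize(child))
        population = parents + children
    return max(population, key=fitness)

# Toy objective: expected cache-hit probability as a stand-in fitness.
pop = [0.5, 0.3, 0.2]
best = ga_caching_probs(len(pop), capacity=1.0,
                        fitness=lambda q: sum(p * x for p, x in zip(pop, q)))
```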

  5. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform [1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  6. Método y sistema de modelado de memoria cache

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2010-01-01

    A method for modeling a data cache memory of a target processor, in order to simulate the behavior of said data cache memory during the execution of software code on a platform comprising said target processor, where said simulation is carried out on a native platform having a processor different from the target processor comprising said data cache memory to be modeled, and where said modeling is carried out by executing on said native platform...

  7. cernatschool.org's use of CVMFS and the CernVM

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    cernatschool.org is a very small Virtual Organisation made up of secondary school and university students, and participating organisations in the Institute for Research in Schools. We use CVMFS to deploy dependencies, and Python 3 itself, for custom software used for analysing radiation data from Medipix detectors. This software is designed to run on GridPP worker nodes, part of the UK-based distributed computing grid. The cernatschool.org VO also uses the CernVM for job submission and for interacting with the grid. Currently, both CVMFS and the CernVM are used to facilitate analysis of 3 years' worth of data from the LUCID payload on TechDemoSat-1. The CernVM looks particularly promising as a standard system for students to program and analyse data with, allowing easy access to any software they might need (not necessarily using GridPP compute resources at all).

  8. A VM-shared desktop virtualization system based on OpenStack

    Science.gov (United States)

    Liu, Xi; Zhu, Mingfa; Xiao, Limin; Jiang, Yuanjie

    2018-04-01

    With the increasing popularity of cloud computing, desktop virtualization has risen in recent years as a branch of virtualization technology. However, existing desktop virtualization systems are mostly designed in a one-to-one mode, in which one VM can be accessed by only one user. Meanwhile, previous desktop virtualization systems perform weakly in terms of response time and cost saving. This paper proposes a novel VM-shared desktop virtualization system based on the OpenStack platform. We modified the connection process and the display-data transmission process of the remote display protocol SPICE to support the VM-shared function. In addition, we propose a server-push display mode to improve the user's interactive experience. The experimental results show that our system performs well in response time and achieves low CPU consumption.

  9. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    The model makes no assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.

  10. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  11. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  12. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. The cache is responsible for a major part (approx. 50%) of a processor's energy consumption. This paper presents a high-level implementation of a micropipelined asynchronous architecture of an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instructions. The architecture is implemented in a Field Programmable Gate Array (FPGA) chip using Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL), along with advanced synthesis and place-and-route tools.

  13. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of Internet clients continues to grow over time, so the response of Internet access becomes increasingly slow. To improve access speed, a cache on the proxy server is required. This research aims to analyze the performance of a proxy server on an Internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the proxy server was designed using a simulation model of an Internet network consisting of a Web server, proxy ...

  14. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical locations, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of the GPS receiver in a smartphone, people can go directly into nature, retrieve information on their smartphone, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about highlights of physical geography, are a very good opportunity for this: interested people can use them to inform themselves about geoscience topics. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, we focused on Bavaria and on a certain type of earth cache. Finally, the authors show the limits and potential of using earth caches and give some remarks for the future.

  15. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, R.; Verhoeven, S.; Vass, M.; Vriend, G.; Esch, I.J. de; Lusher, S.J.; Leurs, R.; Ridder, L.; Kooistra, A.J.; Ritschel, T.; Graaf, C. de

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  16. 3D-e-Chem-VM : Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; De Esch, Iwan J P; Lusher, Scott J.; Leurs, Rob; Ridder, Lars; Kooistra, Albert J.; Ritschel, Tina; de Graaf, C.

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  17. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture for a metadata management system based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can handle different scenarios, such as splitting queries into sub-queries and merging their results against the metadata sets available locally, in order to reduce the access time of remote queries. An application can find part of a result in the local cache, while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, MICC can not only effectively enhance the fault tolerance and load balancing of the system, but also improve access efficiency while ensuring access quality.
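
The split/merge behaviour reduces to a few lines: serve the locally cached portion, fetch only the remainder, and warm the cache with the result. Function names are illustrative, not the paper's API:

```python
def answer_query(keys, local_cache, fetch_remote):
    """Split a metadata query between the local cache and remote sources.
    local_cache: dict of key -> metadata; fetch_remote(keys) stands in
    for the remote sub-query and returns a dict for the missing keys."""
    local_part = {k: local_cache[k] for k in keys if k in local_cache}
    missing = [k for k in keys if k not in local_cache]
    remote_part = fetch_remote(missing) if missing else {}
    local_cache.update(remote_part)    # warm the cache for next time
    return {**local_part, **remote_part}
```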

  18. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) on the GPU is a crucial factor that affects texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel execution on the GPU. In addition, the rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that require rendering large dynamic volumes at low image resolution. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  19. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna D.

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  20. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and thus the deciding factor for overall throughput. Second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.

  1. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorithms...

  2. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  3. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software is presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
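
Representative of the locality transformations the paper evaluates is cache-blocked traversal. The sketch below applies tiling to a matrix transpose; the block size that wins is machine-dependent, which is the paper's central observation:

```python
import numpy as np

def transpose_blocked(a, block=64):
    """Cache-blocked transpose: working in block x block tiles keeps
    both the source rows and the destination rows resident in cache
    while a tile is processed. The default block size is illustrative."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            tile = a[i0:i0 + block, j0:j0 + block]   # edges handled by slicing
            out[j0:j0 + block, i0:i0 + block] = tile.T
    return out
```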

  4. Cache Aided Decode-and-Forward Relaying Networks: From the Spatial View

    Directory of Open Access Journals (Sweden)

    Junjuan Xia

    2018-01-01

    Full Text Available We investigate cache technique from the spatial view and study its impact on the relaying networks. In particular, we consider a dual-hop relaying network, where decode-and-forward (DF relays can assist the data transmission from the source to the destination. In addition to the traditional dual-hop relaying, we also consider the cache from the spatial view, where the source can prestore the data among the memories of the nodes around the destination. For the DF relaying networks without and with cache, we study the system performance by deriving the analytical expressions of outage probability and symbol error rate (SER. We also derive the asymptotic outage probability and SER in the high regime of transmit power, from which we find the system diversity order can be rapidly increased by using cache and the system performance can be significantly improved. Simulation and numerical results are demonstrated to verify the proposed studies and find that the system power resources can be efficiently saved by using cache technique.

  5. Properties and Microstructure of Laser Welded VM12-SHC Steel Pipes Joints

    Directory of Open Access Journals (Sweden)

    Skrzypczyk A.

    2016-06-01

    Full Text Available Paper presents results of microstructure and tests of welded joints of new generation VM12-SHC martensitic steel using high power CO2 laser (LBW method with bifocal welding head. VM12-SHC is dedicated to energetic installation material, designed to replace currently used. High content of chromium and others alloying elements improve its resistance and strength characteristic. Use of VM12-SHC steel for production of the superheaters, heating chambers and walls in steam boilers resulted in various weldability researches. In article are presented results of destructive and non-destructive tests. For destructive: static bending and Vickers hardness tests, and for non-destructive: VT, RT, UT, micro and macroscopic tests were performed.

  6. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  7. A Cache Considering Role-Based Access Control and Trust in Privilege Management Infrastructure

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shaomin; WANG Baoyi; ZHOU Lihua

    2006-01-01

    PMI (privilege management infrastructure) is used to perform access control to resources in an E-commerce or E-government system. With the ever-increasing need for secure transactions, the need for systems that offer a wide variety of QoS (quality-of-service) features is also growing. In order to improve the QoS of a PMI system, a cache based on RBAC (role-based access control) and trust is proposed. Our system is realized based on Web services. The design of the cache based on RBAC and trust in the access control model is described in detail, including the algorithms for querying role permissions in the cache and adding records to it, as well as the cache update policy.
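
A minimal sketch of a role-permission cache on the PMI side; the TTL-based update policy is an assumption standing in for the paper's cache-update policy, and `authority` stands in for the remote PMI decision point:

```python
import time

class RolePermissionCache:
    """Cache access-control decisions keyed by (role, resource); entries
    carry a time-to-live so revoked permissions age out."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.entries = {}   # (role, resource) -> (permitted, expiry)

    def query(self, role, resource, authority):
        key = (role, resource)
        hit = self.entries.get(key)
        if hit and hit[1] > time.time():
            return hit[0]                       # served from cache
        permitted = authority(role, resource)   # remote PMI decision
        self.entries[key] = (permitted, time.time() + self.ttl)
        return permitted
```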

  8. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  9. Analyzing data distribution on disk pools for dCache

    Energy Technology Data Exchange (ETDEWEB)

    Halstenberg, S; Jung, C; Ressmann, D [Forschungszentrum Karlsruhe, Steinbuch Centre for Computing, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2010-04-01

    Most Tier-1 centers of LHC Computing Grid are using dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools. Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources. For a dCache system, a heterogeneous set of disk pools complicates the process of weighting CPU and space costs for an efficient distribution of data. In order to evaluate the data distribution on the disk pools, the distribution is simulated in Java. The results are discussed and suggestions for improving the weight scheme are given.
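
The weighting question can be made concrete with a toy cost model: each pool advertises a CPU (load) cost and a space cost, and the writer picks the lowest weighted sum. Tuning the weights for heterogeneous pools is what the Java simulation evaluates; the numbers below are illustrative:

```python
def choose_pool(pools, w_cpu=1.0, w_space=1.0):
    """Pick the pool with the lowest combined cost.
    pools: list of (name, cpu_cost, space_cost) tuples."""
    def cost(pool):
        _, cpu, space = pool
        return w_cpu * cpu + w_space * space
    return min(pools, key=cost)[0]

# Heterogeneous pools: a big, loaded pool vs. a small, idle one.
print(choose_pool([("old-pool", 0.2, 0.9), ("new-pool", 0.6, 0.1)]))
# -> "new-pool" (0.7 beats 1.1 with equal weights)
```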

  10. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, it is said to be a miss, and x_t is loaded into the cache set, possibly forcing the replacement of some other memory line and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses, together with related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
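
For reference, the sequential computation that the parallel algorithms accelerate: simulating a C-line, fully associative LRU set over a trace and marking each reference as hit or miss.

```python
def lru_misses(trace, num_lines):
    """Simulate one fully associative, C-line LRU set over a trace and
    return a hit/miss flag per reference (True = miss)."""
    stack, misses = [], []
    for x in trace:
        if x in stack:
            stack.remove(x)        # hit: refresh to most-recent position
            misses.append(False)
        else:
            misses.append(True)    # miss: load, evicting the LRU line
            if len(stack) == num_lines:
                stack.pop(0)
        stack.append(x)
    return misses

print(lru_misses(list("abab"), 2))   # [True, True, False, False]
```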

  11. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays ( Aphelocoma californica ) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  12. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System.

    Science.gov (United States)

    Xiong, Lian; Yang, Liu; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-14

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high-popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache nodes for their placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay.
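
    The record gives no pseudocode; as a rough illustration of the popularity-mining and placement steps it describes, the sketch below (Python, with all names and parameters hypothetical) counts accesses per file in a history log and replicates the hottest files onto the currently least-loaded cache nodes.

        from collections import Counter

        def plan_replicas(access_log, nodes, top_k=3, copies=2):
            """Pick the top_k most-accessed files and assign `copies` replicas
            of each to the currently least-loaded cache nodes."""
            popularity = Counter(access_log)          # file -> access count
            load = {n: 0 for n in nodes}              # node -> replicas placed
            plan = {}
            for f, _ in popularity.most_common(top_k):
                targets = sorted(load, key=load.get)[:copies]
                for n in targets:
                    load[n] += 1
                plan[f] = targets
            return plan

        log = ["a", "b", "a", "c", "a", "b", "d"]
        print(plan_replicas(log, nodes=["n1", "n2", "n3"]))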

  13. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa; Elsawy, Hesham; Sorour, Sameh; Al-Ghadhban, Samir; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2018-01-01

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates...

  14. Cocaine craving during protracted withdrawal requires PKCε priming within vmPFC.

    Science.gov (United States)

    Miller, Bailey W; Wroten, Melissa G; Sacramento, Arianne D; Silva, Hannah E; Shin, Christina B; Vieira, Philip A; Ben-Shahar, Osnat; Kippin, Tod E; Szumlinski, Karen K

    2017-05-01

    In individuals with a history of drug taking, the capacity of drug-associated cues to elicit indices of drug craving intensifies, or incubates, with the passage of time during drug abstinence. This incubation of cocaine craving, as well as difficulties with learning to suppress drug-seeking behavior during protracted withdrawal, are associated with a time-dependent deregulation of ventromedial prefrontal cortex (vmPFC) function. As the molecular bases for cocaine-related vmPFC deregulation remain elusive, the present study assayed the consequences of extended access to intravenous cocaine (6 hours/day; 0.25 mg/infusion for 10 days) on the activational state of protein kinase C epsilon (PKCε), an enzyme highly implicated in drug-induced neuroplasticity. The opportunity to engage in cocaine seeking during cocaine abstinence time-dependently altered PKCε phosphorylation within the vmPFC, with reduced and increased p-PKCε expression observed in early (3 days) and protracted (30 days) withdrawal, respectively. This effect was more robust within the ventromedial versus dorsomedial PFC, was not observed in comparable cocaine-experienced rats not tested for drug-seeking behavior, and was distinct from the rise in phosphorylated extracellular signal-regulated kinase observed in cocaine-seeking rats. Further, the impact upon cue-elicited responding of inhibiting PKCε translocation within the vmPFC using TAT fusion proteins was determined; inhibition coinciding with the period of testing attenuated cocaine-seeking behavior, with an effect also apparent the next day. In contrast, inhibitor pretreatment prior to testing during early withdrawal was without effect. Thus, a history of excessive cocaine taking influences the cue reactivity of important intracellular signaling molecules within the vmPFC, with PKCε playing a critical role in the manifestation of cue-elicited cocaine seeking during protracted drug withdrawal. © 2016 Society for the Study of Addiction.

  15. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most-visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
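
    As an illustration of the Markov-chain cache strategy mentioned above, the toy sketch below (Python, names hypothetical, not the authors' code) learns first-order transition counts between visited locations and predicts the next location whose proxy content could be pushed to the user.

        from collections import defaultdict

        class LocationMarkov:
            """First-order Markov model of movement between locations, used to
            decide which location's popular content to push next."""
            def __init__(self):
                self.counts = defaultdict(lambda: defaultdict(int))

            def observe(self, path):
                for a, b in zip(path, path[1:]):
                    self.counts[a][b] += 1

            def predict_next(self, loc):
                nxt = self.counts.get(loc)
                return max(nxt, key=nxt.get) if nxt else None

        m = LocationMarkov()
        m.observe(["home", "cafe", "office", "cafe", "office"])
        print(m.predict_next("cafe"))   # -> office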

  16. Efficiently GPU-accelerating long kernel convolutions in 3-D DIRECT TOF PET reconstruction via memory cache optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sungsoo; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing; Matej, Samuel [Pennsylvania Univ., Philadelphia, PA (United States). Dept. of Radiology

    2011-07-01

    The DIRECT represents a novel approach for 3-D Time-of-Flight (TOF) PET reconstruction. Its novelty stems from the fact that it performs all iterative predictor-corrector operations directly in image space. The projection operations now amount to convolutions in image space, using long TOF (resolution) kernels. While for spatially invariant kernels the computational complexity can be algorithmically overcome by replacing spatial convolution with multiplication in Fourier space, spatially variant kernels cannot use this shortcut. Therefore in this paper, we describe a GPU-accelerated approach for this task. However, the intricate parallel architecture of GPUs poses its own challenges, and careful memory and thread management is the key to obtaining optimal results. As convolution is mainly memory-bound we focus on the former, proposing two types of memory caching schemes that warrant best cache memory re-use by the parallel threads. In contrast to our previous two-stage algorithm, the schemes presented here are both single-stage which is more accurate. (orig.)

  17. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  18. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    Science.gov (United States)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques to improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information caused by frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  19. Proposal and development of a reconfigurable associativity algorithm in cache memories.

    OpenAIRE

    Roberto Borges Kerr Junior

    2008-01-01

    The constant evolution of processors is steadily increasing the overhead of memory accesses. To avoid this problem, processor designers employ several techniques, among them the use of cache memories in the memory hierarchy of computers. Cache memories, on the other hand, cannot fully meet these needs, so a technique that makes better use of the cache memory is of interest. To solve this problem, the authors pro...

  20. SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available The rapid development of cloud technologies and their high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM) placement optimization in clouds built on the Infrastructure-as-a-Service model. This kind of research may pursue different goals, with energy-aware optimization being the most common, as it aims at the urgent problem of green cloud computing: reducing energy consumption by data centers. In this paper we present a new heuristic algorithm for dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a 2-rank strategy to classify VMs and servers as highly or lowly active, and solve four tasks: VM classification, host classification, forming a VM migration map, and VM migration. By dividing all of the VMs and servers into two classes we attempt to reduce risk in case of hardware overloads under overcommitment conditions and to reduce the influence of the occurring overloads on the performance of the cloud VMs. The presented algorithm was developed based on the workload profile of the JINR cloud (a scientific private cloud) with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.
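
    A minimal sketch of the 2-rank idea, assuming per-VM CPU usage and per-host load figures are available (Python; all field names and thresholds are hypothetical, not the JINR implementation):

        def build_migration_map(vms, hosts, cpu_threshold=0.5):
            """2-rank sketch: classify VMs as highly/lowly active by CPU usage,
            then move lowly active VMs off overloaded hosts onto the
            least-loaded host."""
            rank = {v["name"]: "high" if v["cpu"] >= cpu_threshold else "low"
                    for v in vms}
            load = {h["name"]: h["load"] for h in hosts}
            migration_map = []
            for v in vms:
                if rank[v["name"]] == "low" and load[v["host"]] > 0.8:
                    target = min(load, key=load.get)   # least-loaded host
                    if target != v["host"]:
                        migration_map.append((v["name"], v["host"], target))
            return migration_map

        vms = [{"name": "vm1", "host": "h1", "cpu": 0.1},
               {"name": "vm2", "host": "h1", "cpu": 0.9}]
        hosts = [{"name": "h1", "load": 0.9}, {"name": "h2", "load": 0.2}]
        print(build_migration_map(vms, hosts))   # [('vm1', 'h1', 'h2')]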

  1. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average-case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  2. Caching at the Mobile Edge: a Practical Implementation

    DEFF Research Database (Denmark)

    Poderys, Justas; Artuso, Matteo; Lensbøl, Claus Michael Oest

    2018-01-01

    Thanks to recent advances in mobile networks, it is becoming increasingly popular to access heterogeneous content from mobile terminals. There are, however, unique challenges in mobile networks that affect the perceived quality of experience (QoE) at the user end. One such challenge is the higher latency that users typically experience in mobile networks compared to wired ones. Cloud-based radio access networks with content caches at the base stations are seen as a key contributor in reducing the latency required to access content and thus improve the QoE at the mobile user terminal. In this paper [...] for the mobile user obtained by caching content at the base stations. This is quantified with a comparison to non-cached content by means of ping tests (10–11% shorter times), a higher response rate for web traffic (1.73–3.6 times higher), and an improvement in the jitter (6% reduction).

  3. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
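
    The core wear-leveling move is easy to illustrate; the sketch below (Python, ours, not the authors' implementation) relocates the block sitting in the most-written way of a set into the least-written way, which is the kind of periodic intra-set remapping EqualChance performs.

        def wear_level(cache_set, way_writes):
            """EqualChance-style move: swap the block in the most-written (hot)
            way of a set into the least-written (cold) way; meant to be invoked
            periodically, e.g. every N writes to the set."""
            hot = max(range(len(cache_set)), key=way_writes.__getitem__)
            cold = min(range(len(cache_set)), key=way_writes.__getitem__)
            cache_set[hot], cache_set[cold] = cache_set[cold], cache_set[hot]
            return cache_set

        ways = ["blkA", "blkB", "blkC", "blkD"]   # block held by each way
        writes = [120, 7, 30, 4]                  # writes absorbed per way
        print(wear_level(ways, writes))           # blkA moves to the coldest way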

  4. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  5. Cache-Conscious Radix-Decluster Projections

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); N.J. Nes (Niels); M.L. Kersten (Martin)

    2004-01-01

    textabstractAs CPUs become more powerful with Moore's law and memory latencies stay constant, the impact of the memory access performance bottleneck continues to grow on relational operators like join, which can exhibit random access on a memory region larger than the hardware caches. While

  6. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liu Jiangchuan

    2010-01-01

    Full Text Available This paper studies the distributed caching managements for the current flourish of the streaming applications in multihop wireless networks. Many caching managements to date use randomized network coding approach, which provides an elegant solution for ubiquitous data accesses in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to be changed. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then reencode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to the conventional network coding with only a sublinear overhead. Our scheme offers a promising solution to the caching management for streaming data.

  7. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and to a lesser extent to the Auger and Icecube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  8. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    Science.gov (United States)

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high-popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache nodes for their placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897

  9. Biological effects of 60Co γ-irradiation on Laiwu ginger VM1 growth

    International Nuclear Information System (INIS)

    Zhou Ming; Huang Jinli; Wei Yuxia; Guan Qiuzhu; Zhang Zhenxian

    2008-01-01

    Rhizomes of Laiwu ginger were treated with γ-irradiation at doses of 0, 20, 40 and 60 Gy. The results showed that 60Co γ-irradiation inhibited rhizome burgeoning, and decreased the survival rate of the seedlings, the rate of leaf expansion and the growth of plants (VM1). The inhibition effects became stronger with increasing irradiation dose. Different bands were found through the analysis of POD and EST isozymes and RAPD of VM1 plants, which showed that variation at the molecular level occurred in VM1 plants. LD30-40 was appropriate for the irradiation of rhizomes of Laiwu ginger and the optimal irradiation dose was about 20-30 Gy. (authors)

  10. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    Directory of Open Access Journals (Sweden)

    Zhaohui Luo

    2017-05-01

    Full Text Available Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm to provide caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications despite the limited backhaul capacity of mobile networks. Most existing studies focus on cache allocation, mechanism design and coding design for caching. However, a grid power supply delivering fixed power uninterruptedly in support of a MEC server (MECS) is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the energy consumption problem of the MECS in cellular networks. Given the average download latency constraints, we take the MECS's energy consumption, backhaul capacities and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine the near-optimal caching placement to obtain better performance in terms of energy efficiency gains compared with conventional caching placement strategies. In particular, it is shown that the proposed scheme can significantly reduce the joint cost when backhaul capacity is low.

  11. The Conserved Spore Coat Protein SpoVM Is Largely Dispensable in Clostridium difficile Spore Formation.

    Science.gov (United States)

    Ribis, John W; Ravichandran, Priyanka; Putnam, Emily E; Pishdadian, Keyan; Shen, Aimee

    2017-01-01

    The spore-forming bacterial pathogen Clostridium difficile is a leading cause of health care-associated infections in the United States. In order for this obligate anaerobe to transmit infection, it must form metabolically dormant spores prior to exiting the host. A key step during this process is the assembly of a protective, multilayered proteinaceous coat around the spore. Coat assembly depends on coat morphogenetic proteins recruiting distinct subsets of coat proteins to the developing spore. While 10 coat morphogenetic proteins have been identified in Bacillus subtilis, only two of these morphogenetic proteins have homologs in the Clostridia: SpoIVA and SpoVM. C. difficile SpoIVA is critical for proper coat assembly and functional spore formation, but the requirement for SpoVM during this process was unknown. Here, we show that SpoVM is largely dispensable for C. difficile spore formation, in contrast with B. subtilis. Loss of C. difficile SpoVM resulted in modest decreases (~3-fold) in heat- and chloroform-resistant spore formation, while morphological defects such as coat detachment from the forespore and abnormal cortex thickness were observed in ~30% of spoVM mutant cells. Biochemical analyses revealed that C. difficile SpoIVA and SpoVM directly interact, similarly to their B. subtilis counterparts. However, in contrast with B. subtilis, C. difficile SpoVM was not essential for SpoIVA to encase the forespore. Since C. difficile coat morphogenesis requires SpoIVA-interacting protein L (SipL), which is conserved exclusively in the Clostridia, but not the more broadly conserved SpoVM, our results reveal another key difference between C. difficile and B. subtilis spore assembly pathways. IMPORTANCE The spore-forming obligate anaerobe Clostridium difficile is the leading cause of antibiotic-associated diarrheal disease in the United States. When C. difficile spores are ingested by susceptible individuals, they germinate within the gut and...

  12. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2010-01-01

    Full Text Available This paper studies the distributed caching managements for the current flourish of the streaming applications in multihop wireless networks. Many caching managements to date use randomized network coding approach, which provides an elegant solution for ubiquitous data accesses in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to be changed. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then reencode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3 scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to the conventional network coding with only a sublinear overhead. Our scheme offers a promising solution to the caching management for streaming data.

  13. Dynamic Video Streaming in Caching-enabled Wireless Mobile Networks

    OpenAIRE

    Liang, C.; Hu, S.

    2017-01-01

    Recent advances in software-defined mobile networks (SDMNs), in-network caching, and mobile edge computing (MEC) can have great effects on video services in next generation mobile networks. In this paper, we jointly consider SDMNs, in-network caching, and MEC to enhance the video service in next generation mobile networks. With the objective of maximizing the mean measurement of video quality, an optimization problem is formulated. Due to the coupling of video data rate, computing resource, a...

  14. On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu

    Science.gov (United States)

    Krishnappa, Dilip Kumar; Khemmarat, Samamon; Gao, Lixin; Zink, Michael

    Lately researchers are looking at ways to reduce the delay on video playback through mechanisms like prefetching and caching for Video-on-Demand (VoD) services. The usage of prefetching and caching also has the potential to reduce the amount of network bandwidth usage, as most popular requests are served from a local cache rather than the server containing the original content. In this paper, we investigate the advantages of having such a prefetching and caching scheme for a free hosting service of professionally created video (movies and TV shows) named "hulu". We look into the advantages of using a prefetching scheme where the most popular videos of the week, as provided by the hulu website, are prefetched, and compare this approach with a conventional LRU caching scheme with limited storage space and with a combined scheme of prefetching and caching. Results from our measurement and analysis show that employing a basic caching scheme at the proxy yields a hit ratio of up to 77.69%, but requires storage of about 236 GB. Further analysis shows that a prefetching scheme where the top-100 popular videos of the week are downloaded to the proxy yields a hit ratio of 44% with a storage requirement of 10 GB. An LRU caching scheme with a storage limitation of 20 GB can achieve a hit ratio of 55% but downloads 4713 videos to achieve such a high hit ratio, compared to 100 videos in the prefetching scheme, whereas a scheme with both prefetching and caching with the same storage yields a hit ratio of 59% with a download requirement of 4439 videos. We find that employing a scheme of prefetching along with caching, with a trade-off on storage, will yield a better hit ratio and bandwidth savings than individual caching or prefetching schemes.
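
    The trade-off measured above is easy to reproduce in miniature; the sketch below (Python, illustrative only, not the authors' measurement tooling) compares the hit ratio of a capacity-limited LRU cache against a static top-N prefetch set on the same request stream.

        from collections import Counter, OrderedDict

        def lru_hit_ratio(requests, capacity):
            """Hit ratio of a capacity-limited LRU cache over a request stream."""
            cache, hits = OrderedDict(), 0
            for r in requests:
                if r in cache:
                    hits += 1
                    cache.move_to_end(r)
                else:
                    if len(cache) >= capacity:
                        cache.popitem(last=False)
                    cache[r] = None
            return hits / len(requests)

        def prefetch_hit_ratio(requests, last_week, top_n):
            """Hit ratio of a static cache prefetched with last week's top-N."""
            prefetched = {v for v, _ in Counter(last_week).most_common(top_n)}
            return sum(r in prefetched for r in requests) / len(requests)

        week1 = ["a", "a", "b", "c", "a", "b", "d", "e"]
        week2 = ["a", "b", "a", "f", "b", "a", "c", "g"]
        print(lru_hit_ratio(week2, capacity=2))
        print(prefetch_hit_ratio(week2, week1, top_n=2))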

  15. Performance Evaluation of Moving Small-Cell Network with Proactive Cache

    Directory of Open Access Journals (Sweden)

    Young Min Kwon

    2016-01-01

    Full Text Available Due to rapid growth in mobile traffic, mobile network operators (MNOs) are considering the deployment of moving small-cells (mSCs). An mSC is a user-centric network which provides voice and data services during mobility. mSCs can receive and forward data traffic via wireless backhaul and sidehaul links. In addition, due to the predictive nature of users' demand, mSCs can proactively cache the predicted contents in off-peak-traffic periods. Due to these characteristics, MNOs consider mSCs as a cost-efficient solution to not only enhance the system capacity but also provide guaranteed quality of service (QoS) requirements to moving user equipment (UE) in peak-traffic periods. In this paper, we conduct extensive system-level simulations to analyze the performance of mSCs with varying cache size and content popularity and their effect on wireless backhaul load. The performance evaluation confirms that the QoS of moving small-cell UE (mSUE) notably improves by using mSCs together with proactive caching. We also show that the effective use of proactive cache significantly reduces the wireless backhaul load and increases the overall network capacity.

  16. I-Structure software cache for distributed applications

    Directory of Open Access Journals (Sweden)

    Alfredo Cristóbal Salas

    2004-01-01

    Full Text Available In this article, we describe the I-Structure software cache for distributed-memory environments (D-ISSC), which takes advantage of data locality while retaining the latency-tolerance capability of I-Structure memory systems. The programming facilities of MPI programs hide synchronization problems from the programmer. Our experimental evaluation using a benchmark suite indicates that PC clusters with I-Structure and its D-ISSC caching mechanism are more robust. The system can speed up both regular and irregular communication-intensive applications.

  17. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  18. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    Science.gov (United States)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate, backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours, and service them to the edge at peak periods. To intelligently prefetch, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allow for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
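
    The full formulation involves Markov-modeled requests and linear function approximation; as a much-reduced toy in the same spirit (Python, not the authors' algorithm), the sketch below learns a per-file value online and evicts the lowest-valued file when the cache is full.

        from collections import defaultdict

        def rl_cache(requests, capacity, alpha=0.2, gamma=0.9):
            """Toy online value learner in the spirit of Q-learning: keep a
            per-file value Q[f], reward +1 on a hit, and on a miss evict the
            cached file with the lowest learned value."""
            Q = defaultdict(float)
            cache, hits = set(), 0
            for f in requests:
                reward = 1.0 if f in cache else 0.0
                hits += int(reward)
                # TD(0)-style update of the file's value estimate
                Q[f] += alpha * (reward + gamma * Q[f] - Q[f])
                if f not in cache:
                    if len(cache) >= capacity:
                        cache.remove(min(cache, key=lambda g: Q[g]))
                    cache.add(f)
            return hits / len(requests)

        reqs = ["a", "b", "a", "a", "c", "b", "a", "c", "a", "b"]
        print(rl_cache(reqs, capacity=2))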

  19. Ventral medial prefrontal cortex (vmPFC) as a target of the dorsolateral prefrontal modulation by transcranial direct current stimulation (tDCS) in drug addiction.

    Science.gov (United States)

    Nakamura-Palacios, Ester Miyuki; Lopes, Isabela Bittencourt Coutinho; Souza, Rodolpho Albuquerque; Klauss, Jaisa; Batista, Edson Kruger; Conti, Catarine Lima; Moscon, Janine Andrade; de Souza, Rodrigo Stênio Moll

    2016-10-01

    Here, we report some electrophysiologic and imaging effects of transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex (dlPFC) in drug addiction, notably in alcohol and crack-cocaine dependence. The low resolution electromagnetic tomography (LORETA) analysis obtained through event-related potentials (ERPs) under drug-related cues, more specifically in its P3 segment (300-500 ms) in both alcoholics and crack-cocaine users, showed that the ventral medial prefrontal cortex (vmPFC) was the brain area with the largest change towards increasing activation under drug-related cues in those subjects that kept abstinence during and after the treatment with bilateral tDCS (2 mA, 35 cm², cathodal left and anodal right) over the dlPFC, applied repetitively (five daily sessions). In an additional study in crack-cocaine, which showed craving decreases after repetitive bilateral tDCS, we examined data originating from diffusion tensor imaging (DTI), and we found increased DTI parameters in the left connection between the vmPFC and nucleus accumbens (NAcc), such as the number of voxels, fractional anisotropy (FA) and apparent diffusion coefficient (ADC), in tDCS-treated crack-cocaine users when compared to the sham-tDCS group. This increase in DTI parameters was significantly correlated with the craving decrease after the repetitive tDCS. The vmPFC relates to the control of drug seeking, possibly by extinguishing this behavior. In our studies, the bilateral dlPFC tDCS reduced relapses and craving for drug use, and increased vmPFC activation under drug cues, which may be of great importance in the control of drug use in drug addiction.

  20. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range c...

  1. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

    Full Text Available Embedded systems have stringent design constraints, which has focused much prior research on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system's temperature not only affects the system's reliability, but can also affect its performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing the temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing the execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto-optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.

  2. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web...

  3. Cache aware mapping of streaming applications on a multiprocessor system-on-chip

    NARCIS (Netherlands)

    Moonen, A.J.M.; Bekooij, M.J.G.; Berg, van den R.M.J.; Meerbergen, van J.; Sciuto, D.; Peng, Z.

    2008-01-01

    Efficient use of the memory hierarchy is critical for achieving high performance in a multiprocessor system-on-chip. An external memory that is shared between processors is a bottleneck in current and future systems. Cache misses and a large cache miss penalty contribute to a low processor...

  4. Overhead-Aware-Best-Fit (OABF) Resource Allocation Algorithm for Minimizing VM Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao [IIT; Garzoglio, Gabriele [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Noh, Seo Young [KISTI, Daejeon

    2014-11-11

    FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how we may use a VM launching overhead reference model to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improve the VM launching time when a large number of VMs are launched simultaneously.
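
    A minimal sketch of the overhead-aware best-fit decision, assuming the tuned reference model is available as a callable (Python; all names and the toy overhead model are hypothetical):

        def overhead_aware_best_fit(vm, hosts, predict_overhead):
            """Among hosts with enough free capacity, pick the one whose
            predicted VM launching overhead is smallest."""
            feasible = [h for h in hosts if h["free_cores"] >= vm["cores"]]
            if not feasible:
                return None                      # defer or queue the request
            return min(feasible, key=lambda h: predict_overhead(vm, h))

        hosts = [{"name": "h1", "free_cores": 4, "launching": 2},
                 {"name": "h2", "free_cores": 8, "launching": 0}]
        # Toy stand-in for the tuned reference model: overhead grows with the
        # number of launches already in flight on the host.
        overhead = lambda vm, h: 30 + 15 * h["launching"]
        print(overhead_aware_best_fit({"cores": 2}, hosts, overhead)["name"])  # h2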

  5. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    [...] are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...
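
    Edit distance is a canonical instance of this local-dependency scheme: entry (i, j) depends only on its left, upper, and upper-left neighbors, so a row-wise sweep keeping two rows suffices, which is also what makes cache-efficient traversals possible. A plain (deliberately not cache-oblivious) Python illustration:

        def edit_distance(a, b):
            """Classic local-dependency DP: entry (i, j) needs only its left,
            upper, and upper-left neighbors, so two rows suffice."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        print(edit_distance("kitten", "sitting"))   # -> 3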

  6. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
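
    For reference, the Classical algorithm that the paper benchmarks against is the textbook O(n^3) Nussinov recurrence; a straightforward, cache-naive Python rendering (ours, for illustration only):

        def nussinov(seq):
            """Textbook O(n^3) Nussinov DP: N[i][j] is the maximum number of
            non-crossing base pairs in seq[i..j] (no minimum loop length)."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                     ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(1, n):
                for i in range(n - span):
                    j = i + span
                    best = max(N[i + 1][j], N[i][j - 1])        # i or j unpaired
                    if (seq[i], seq[j]) in pairs:
                        best = max(best, N[i + 1][j - 1] + 1)   # i pairs with j
                    for k in range(i + 1, j):                   # bifurcation
                        best = max(best, N[i][k] + N[k + 1][j])
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # -> 3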

  7. dCache: Big Data storage for HEP communities and beyond

    International Nuclear Information System (INIS)

    Millar, A P; Bernardt, C; Fuhrmann, P; Mkrtchyan, T; Petersen, A; Schwank, K; Behrmann, G; Litvintsev, D; Rossi, A

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  8. On Use of the Variable Zagreb vM2 Index in QSPR: Boiling Points of Benzenoid Hydrocarbons

    Directory of Open Access Journals (Sweden)

    Albin Jurić

    2004-12-01

    Full Text Available The variable Zagreb vM2 index is introduced and applied to the structure-boiling point modeling of benzenoid hydrocarbons. The linear model obtained (the standard error of estimate for the fit model Sfit = 6.8 °C) is much better than the corresponding model based on the original Zagreb M2 index (Sfit = 16.4 °C). Surprisingly, the model based on the variable vertex-connectivity index (Sfit = 6.8 °C) is comparable to the model based on the vM2 index. A comparative study with models based on the vertex-connectivity index, edge-connectivity index and several distance indices favours models based on the variable Zagreb vM2 index and the variable vertex-connectivity index. However, multivariate regression with two, three and four descriptors gives improved models, the best being the model with four descriptors (the vM2 index is not among them) with Sfit = 5 °C, though the four-descriptor model containing the vM2 index is only slightly inferior (Sfit = 5.3 °C).

  9. Using shadow page cache to improve isolated drivers performance.

    Science.gov (United States)

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a per-driver private access control table. However, this method needs to keep the write permission of the shadow page table read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the driver's performance. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  10. HotpathVM: An Effective JIT for Resource-constrained Devices

    DEFF Research Database (Denmark)

    Gal, Andreas; Franz, Michael; Probst, Christian

    2006-01-01

    We present a just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet surprisingly effective. Our system dynamically identifies traces of frequently executed bytecode instructions (which may span several basic blocks across several methods) and compiles...

  11. Lack of caching of direct-seeded Douglas fir seeds by deer mice

    International Nuclear Information System (INIS)

    Sullivan, T.P.

    1978-01-01

    Seed caching by deer mice was investigated by radiotagging seeds in forest and clear-cut areas in coastal British Columbia. Deer mice tend to cache very few Douglas fir seeds in the fall when the seed is uniformly distributed and is at densities comparable with those used in direct-seeding programs. (author)

  12. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

    3D-e-Chem-VM is an open source, freely available Virtual Machine (http://3d-e-chem.github.io/3D-e-Chem-VM/) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteome-wide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  13. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...

  14. Turbidity and Total Suspended Solids on the Lower Cache River Watershed, AR.

    Science.gov (United States)

    Rosado-Berrios, Carlos A; Bouldin, Jennifer L

    2016-06-01

    The Cache River Watershed (CRW) in Arkansas is part of one of the largest remaining bottomland hardwood forests in the US. Although wetlands are known to improve water quality, the Cache River is listed as impaired due to sedimentation and turbidity. This study measured turbidity and total suspended solids (TSS) in seven sites of the lower CRW; six sites were located on the Bayou DeView tributary of the Cache River. Turbidity and TSS levels ranged from 1.21 to 896 NTU, and 0.17 to 386.33 mg/L respectively and had an increasing trend over the 3-year study. However, a decreasing trend from upstream to downstream in the Bayou DeView tributary was noted. Sediment loading calculated from high precipitation events and mean TSS values indicate that contributions from the Cache River main channel was approximately 6.6 times greater than contributions from Bayou DeView. Land use surrounding this river channel affects water quality as wetlands provide a filter for sediments in the Bayou DeView channel.

  15. On the Performance of the Cache Coding Protocol

    Directory of Open Access Journals (Sweden)

    Behnaz Maboudi

    2018-03-01

    Full Text Available Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding in mesh networks, the cache coding protocol was proposed. This protocol only allows recoding at the relays when the relay has received enough coded packets to decode an entire generation of packets. At that point, the relay node recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making the relays accountable for changes on the data. This paper analyzes the delay performance of cache coding to understand the security-performance trade-off of this scheme. We introduce an analytical model for the case of two relays in an erasure channel relying on an absorbing Markov chain and an approximate model to estimate the performance in terms of the number of transmissions before successfully decoding at the receiver. We confirm our analysis using simulation results. We show that cache coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.
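
    For the absorbing-Markov-chain analysis mentioned above, the expected number of transmissions before absorption follows from the fundamental matrix N = (I - Q)^(-1), whose row sums give the expected steps from each transient state; a small numerical sketch (Python with NumPy, using an invented toy chain rather than the paper's model):

        import numpy as np

        def expected_transmissions(Q):
            """Expected transmissions before absorption (successful decoding),
            computed from the fundamental matrix N = inv(I - Q) of an
            absorbing Markov chain with transient-to-transient matrix Q."""
            n = Q.shape[0]
            N = np.linalg.inv(np.eye(n) - Q)
            return N @ np.ones(n)      # expected steps from each transient state

        # Invented toy chain: from state 0 stay with 0.3, advance with 0.5,
        # absorb with 0.2; from state 1 stay with 0.4, absorb with 0.6.
        Q = np.array([[0.3, 0.5],
                      [0.0, 0.4]])
        print(expected_transmissions(Q))   # ~[2.62, 1.67]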

  16. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache-oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight-balanced B-trees, and can be implemented as just a single array [...], and range queries in worst case O(log_B n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n + O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache-oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory.

  17. Consistencia de ejecución: una propuesta no cache coherente

    OpenAIRE

    García, Rafael B.; Ardenghi, Jorge Raúl

    2005-01-01

    The presence of one or more levels of cache memory in modern processors, whose purpose is to reduce the effective memory access time, becomes especially relevant in a DSM-type multiprocessor environment given the much higher cost of references to memory in remote modules. Clearly, the cache coherence protocol must respond to the memory consistency model adopted. The sequential model SC, generally accepted as the most natural, together with a series of m...

  18. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Full Text Available Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and from industrial practitioners. In fact, the very notion of introducing randomness and probabilities in time-critical systems has caused strenuous debates owing to the apparent clash that this idea has with the strictly deterministic view traditionally held for those systems. A paper recently appeared in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13.) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should be provided also. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances to the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, causing them, when used in conjunction with PTA, to be a serious competitor to conventional designs.

  19. Greatly improved cache update times for conditions data with Frontier/Squid

    International Nuclear Information System (INIS)

    Dykstra, Dave; Lueking, Lee

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
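
    The client side of the If-Modified-Since mechanism described here can be sketched with the Python standard library (illustrative only, not Frontier's actual code):

        import urllib.error
        import urllib.request

        def conditional_get(url, last_modified=None):
            """Fetch `url`, revalidating a cached copy via If-Modified-Since.
            Returns (status, body, Last-Modified); body is None on a 304."""
            req = urllib.request.Request(url)
            if last_modified:
                req.add_header("If-Modified-Since", last_modified)
            try:
                with urllib.request.urlopen(req) as resp:
                    return resp.status, resp.read(), resp.headers.get("Last-Modified")
            except urllib.error.HTTPError as e:
                if e.code == 304:              # cached copy is still fresh
                    return 304, None, last_modified
                raise

        # First fetch fills the cache; the second only revalidates it.
        status, body, stamp = conditional_get("http://example.com/")
        print(conditional_get("http://example.com/", last_modified=stamp)[0])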

  1. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We consider the image interpolation problem: given an image with uniformly-sampled pixels v_{m,n} and point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  2. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm using a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.
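
    A toy model of the proposed idea, under our assumption (not stated in the paper) that the cache is managed LRU-style: the most recently written shadow pages stay writable, so repeated writes to hot pages stop faulting, and a page is downgraded to read-only only when it falls out of the cache.

      # Toy shadow page cache: hot pages stay writable; writes to cold,
      # read-only pages cost a "page fault" during which the write could
      # be validated against the access control table. Illustrative only.
      from collections import OrderedDict

      class ShadowPageCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.writable = OrderedDict()  # page -> None, in LRU order
              self.faults = 0

          def write(self, page):
              if page in self.writable:
                  self.writable.move_to_end(page)    # hot page: no fault
                  return
              self.faults += 1                       # fault: validate write here
              self.writable[page] = None             # then leave the page writable
              if len(self.writable) > self.capacity:
                  self.writable.popitem(last=False)  # downgrade the coldest page

      pages = [1, 1, 2, 1, 3, 1, 2]
      cached, plain = ShadowPageCache(2), ShadowPageCache(0)
      for p in pages:
          cached.write(p)
          plain.write(p)
      print(cached.faults, plain.faults)  # 4 vs 7: fewer faults with the cache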

  4. The development of caching and object permanence in Western scrub-jays (Aphelocoma californica): which emerges first?

    Science.gov (United States)

    Salwiczek, Lucie H; Emery, Nathan J; Schlinger, Barney; Clayton, Nicola S

    2009-08-01

    Recent studies on the food-caching behavior of corvids have revealed complex physical and social skills, yet little is known about the ontogeny of food caching in relation to the development of cognitive capacities. Piagetian object permanence is the understanding that objects continue to exist even when they are no longer visible. Here, the authors focus on Piagetian Stages 3 and 4, because they are hallmarks in the cognitive development of both young children and animals. Our aim is to determine in a food-caching corvid, the Western scrub-jay, whether (1) Piagetian Stage 4 competence and tentative caching (i.e., hiding an item invisibly and retrieving it without delay), emerge concomitantly or consecutively; (2) whether experiencing the reappearance of hidden objects enhances the timing of the appearance of object permanence; and (3) discuss how the development of object permanence is related to behavioral development and sensorimotor intelligence. Our findings suggest that object permanence Stage 4 emerges before tentative caching, and independent of environmental influences, but that once the birds have developed simple object-permanence, then social learning might advance the interval after which tentative caching commences. Copyright 2009 APA, all rights reserved.

  5. Studying VM-1 molybdenum alloy workability at high current density. II

    Energy Technology Data Exchange (ETDEWEB)

    Tatarinova, O M; Amirkhanova, N A; Zaripov, R A

    1976-01-01

    Under galvanostatic conditions, volt-ampere characteristics have been recorded for the VM-1 alloy; also determined are the selective effect of electrolytes and the influence of hydrodynamic conditions on the rate of anodic dissolution in electrolytes containing 15% NaNO3; 15% NaNO3 + 5% NaOH; and 15% NaOH. In the composite electrolyte, the quality of the surface is improved, and higher current densities have been attained as compared with those for pure 15% NaNO3. The dissolution process in the above electrolytes proceeds under diffusion limitations. For the electrochemical treatment of the VM-1 alloy under production conditions, a composite electrolyte containing 15% NaNO3 and 5% NaOH has been suggested and tested.

  6. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents, it is customary to disable client caching, causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents...

  7. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    dCache is a high performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an RFC 1831- and RFC 2203-compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache's NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of the dCache NFSv4.1 implementation, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  8. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: (1) transfer of a raw sample from the tool to the SHEC subsystem and (2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  9. New distributive web-caching technique for VOD services

    Science.gov (United States)

    Kim, Iksoo; Woo, Yoseop; Hwang, Taejune; Choi, Jintak; Kim, Youngjune

    2002-12-01

    At present, some of the most popular services on the Internet are on-demand services, including VOD, EOD and NOD. The main problems for on-demand service are excessive server load and insufficient network resources: service providers require powerful, expensive servers, while clients face long end-to-end delays and network congestion. This paper presents a new distributive web-caching technique for fluent VOD services using distributed proxies in a Head-end Network (HNET). The HNET consists of a Switching-Agent (SA) as a control node, some Head-end Nodes (HEN) as proxies, and clients connected to the HENs, where each HEN composes a LAN. Clients request VOD services from the server through a HEN and the SA. The SA operates as the heart of the HNET; all operations using the proposed distributive caching technique are performed under its control. The technique stores parts of a requested video on the corresponding HENs when clients connected to each HEN request an identical video, so clients access those HENs (proxies) alternately to acquire the video streams. Eventually, this leads to equally loaded proxies (HENs). We adopt a cache replacement strategy that combines LRU and LFU, removes streams cached from other HENs before server streams, and replaces the first block of a video last to reduce end-to-end delay.
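
    As a rough illustration of such a combined replacement policy (the weighting and the preference for evicting peer-suppliable streams below are our assumptions, not the paper's exact rules), an eviction score can mix recency, frequency and stream origin:

      # Sketch of a combined LRU/LFU eviction choice for cached video
      # segments. Scoring weights and the "from_server" preference are
      # assumed for illustration; the paper gives no exact formula.
      import time

      class Segment:
          def __init__(self, name, from_server):
              self.name = name
              self.from_server = from_server  # True if fetched from the origin server
              self.hits = 0
              self.last_access = time.monotonic()

          def touch(self):
              self.hits += 1
              self.last_access = time.monotonic()

      def eviction_victim(segments):
          now = time.monotonic()
          def score(seg):
              recency = now - seg.last_access  # larger = colder (LRU part)
              rarity = 1.0 / (1 + seg.hits)    # larger = less popular (LFU part)
              # Segments another HEN can re-supply are cheap to lose, so they
              # go before server streams, which are costly to refetch.
              origin = 0.5 if seg.from_server else 1.0
              return recency * rarity * origin
          return max(segments, key=score)  # highest score is evicted first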

  10. Caching proxy server: understanding and technological assimilation

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Internet access providers usually include the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. For system administrators it is difficult to choose the configuration of a caching proxy server, since it is necessary to decide the values to be used for several variables. This article presents how the process of understanding and technological assimilation of the caching proxy service, a service with high organizational impact, was approached. The article is also a product of the research project "Análisis de configuraciones de servidores proxy caché", in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  11. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them onto multiple servers, and to cache them as close as possible to their readers while preserving the security requirements of the files, providing load balancing, and reducing the delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.

  12. Investigation into electrochemical behavior of molybdenum VM-1 alloy at high current density

    Energy Technology Data Exchange (ETDEWEB)

    Tatarinova, O M; Amirkhanova, N A; Akhmadiev, A G

    1975-01-01

    The effect of the composition and concentration of the electrolyte on the workability of the molybdenum VM-1 alloy has been studied, and a series of anions has been ranked by activation capacity. The best workability of the alloy is achieved in a 15% NaOH solution and in a composite electrolyte of 15% NaNO3 + 5% NaOH. It is shown that on polarization of the VM-1 alloy, in both alkali and salt solutions, a film of molybdenum oxides of different valences is formed (Mo2O3, Mo4O11, Mo9O26, MoO3), but molybdenum dissolves only in the hexavalent form, its content in solution conforming to the polarizing current densities. Using a temperature-kinetic technique, it has been found that concentration polarization is the limiting stage in the anodic dissolution of molybdenum and the VM-1 alloy in 15% NaNO3 solution and in the composite electrolyte 15% NaNO3 + 5% NaOH.

  13. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as the "Memory Wall" problem, stems from the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  14. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays...

  15. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.
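
    For context, Apriori mines itemsets that co-occur frequently across transactions; a compact sketch of the classic algorithm, applied here to invented cache-access logs (nothing below reflects the paper's actual implementation):

      # Minimal Apriori: find itemsets appearing in at least min_support
      # transactions. Transactions here are hypothetical cache-access logs.
      from itertools import combinations

      def apriori(transactions, min_support):
          transactions = [frozenset(t) for t in transactions]
          items = {i for t in transactions for i in t}
          candidates = [frozenset([i]) for i in items]
          frequent, k = {}, 1
          while candidates:
              counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
              level = {c: n for c, n in counts.items() if n >= min_support}
              frequent.update(level)
              # Join step: build (k+1)-item candidates from frequent k-itemsets.
              keys = list(level)
              candidates = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
              k += 1
          return frequent  # itemset -> support count

      logs = [{"objA", "objB"}, {"objA", "objB", "objC"}, {"objB", "objC"}]
      print(apriori(logs, min_support=2))  # e.g. {objB}: 3, {objA, objB}: 2, ...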

  16. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, M transceivers and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with M in {1,2} and K in {1,2,3} that satisfy M+K <= 4, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary M and K) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
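
    For reference, the NDT used in this line of work is the high-SNR delivery time per bit, normalized by that of an interference-free baseline; a standard form, reconstructed from the cache-aided network literature (so the notation may differ slightly from the paper's), is:

      % Normalized delivery time: worst-case time T(L, P) needed to deliver
      % requested files of L bits each at SNR P, normalized by the time
      % L / log(P) an interference-free point-to-point link would need.
      \Delta(\mu) = \lim_{P \to \infty} \limsup_{L \to \infty} \frac{T(L, P)}{L / \log P}

    Here μ is the fractional cache size and log P approximates the capacity of an interference-free link at high SNR, so Δ = 1 corresponds to the idealized reference system and larger values mean proportionally longer delivery.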

  17. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data where more than one speculative thread is running in parallel, while the first level cache does not hold any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first level cache.
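
    A toy software model of the policy (the data structures below are assumed for illustration; the description above is authoritative): a speculative write is pushed through L1 into the version-tracking L2, and the L1 line is invalidated so that later reads must consult L2.

      # Toy evict-on-write model: speculative writes go through L1 to L2,
      # then the L1 line is invalidated so later reads see L2's
      # version-tracked data. Commit/rollback of versions is omitted.
      class L2VersionedCache:
          def __init__(self):
              self.versions = {}  # addr -> {thread_id: value}

          def write(self, addr, value, thread_id):
              self.versions.setdefault(addr, {})[thread_id] = value

          def read(self, addr, thread_id):
              by_thread = self.versions.get(addr, {})
              # Prefer this thread's speculative version, else the committed one.
              return by_thread.get(thread_id, by_thread.get("committed"))

      class L1Cache:
          def __init__(self, l2):
              self.lines = {}  # only non-speculative data may live here
              self.l2 = l2

          def speculative_write(self, addr, value, thread_id):
              self.l2.write(addr, value, thread_id)  # write through to L2
              self.lines.pop(addr, None)             # evict on write: drop the L1 copy

          def read(self, addr, thread_id):
              if addr in self.lines:
                  return self.lines[addr]            # hit on non-speculative data
              return self.l2.read(addr, thread_id)   # miss: L2 resolves the version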

  18. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache timing attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview of our analysis of the stream ciphers that were selected for phase 3...

  19. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    OpenAIRE

    Wang, Shuo; Zhang, Xing; Zhang, Yan; Wang, Lin; Yang, Juwo; Wang, Wenbo

    2017-01-01

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures which bring network functions and contents to the network edge have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at th...

  20. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic...

  1. Autosomal dominant inheritance of brain cardiolipin fatty acid abnormality in VM/DK mice: association with hypoxic-induced cognitive insensitivity.

    Science.gov (United States)

    Ta, Nathan L; Jia, Xibei; Kiebish, Michael; Seyfried, Thomas N

    2014-01-01

    Cardiolipin is a complex polyglycerol phospholipid found almost exclusively in the inner mitochondrial membrane; it regulates numerous enzyme activities, especially those related to oxidative phosphorylation and coupled respiration. Abnormalities in cardiolipin can impair mitochondrial function and bioenergetics. We recently demonstrated that the ratio of shorter chain saturated and monounsaturated fatty acids (C16:0; C18:0; C18:1) to longer chain polyunsaturated fatty acids (C18:2; C20:4; C22:6) was significantly greater in the brains of adult VM/DK (VM) inbred mice than in the brains of C57BL/6J (B6) mice. The cardiolipin fatty acid abnormalities in VM mice are also associated with alterations in the activity of mitochondrial respiratory complexes. In this study we found that the abnormal brain fatty acid ratio in the VM strain was inherited as an autosomal dominant trait in reciprocal B6 × VM F1 hybrids. To evaluate the potential influence of brain cardiolipin fatty acid composition on cognitive sensitivity, we placed the parental B6 and VM mice and their reciprocal male and female B6VMF1 hybrid mice (3 months old) in a hypoxic chamber (5% O2). Cognitive awareness (consciousness) under hypoxia was significantly lower in the VM parental mice and F1 hybrid mice (11.4 ± 0.4 and 11.0 ± 0.4 min, respectively) than in the parental B6 mice (15.3 ± 1.4 min), indicating an autosomal dominant inheritance like that of the brain cardiolipin abnormalities. These findings suggest that impaired cognitive awareness under hypoxia is associated with abnormalities in neural lipid composition.

  3. Architectural Development and Performance Analysis of a Primary Data Cache with Read Miss Address Prediction Capability

    National Research Council Canada - National Science Library

    Christensen, Kathryn

    1998-01-01

    .... The Predictive Read Cache (PRC) further improves the overall memory hierarchy performance by tracking the data read miss patterns of memory accesses, developing a prediction for the next access and prefetching the data into the faster cache memory...
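
    Read-miss address prediction of this kind is commonly realized as a stride predictor; a generic minimal sketch of the technique (not the thesis's actual PRC design):

      # Minimal stride predictor for read-miss addresses: track the delta
      # between successive misses and prefetch the next address when the
      # stride repeats. Generic technique, not the PRC's actual design.
      class StridePrefetcher:
          def __init__(self):
              self.last_addr = None
              self.last_stride = None

          def on_read_miss(self, addr):
              prediction = None
              if self.last_addr is not None:
                  stride = addr - self.last_addr
                  if stride == self.last_stride and stride != 0:
                      prediction = addr + stride  # confident: prefetch next address
                  self.last_stride = stride
              self.last_addr = addr
              return prediction

      pf = StridePrefetcher()
      for a in [100, 164, 228, 292]:  # misses with a constant stride of 64
          p = pf.on_read_miss(a)
      print(p)  # 356: the predicted next miss address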

  4. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP), which avoids a time-consuming linearization process, is employed to select the most profitable data pages. The Virtual Memory System (VMS) is adopted to remap those data pages that would cause severe cache conflicts within a time slot to the SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance promotion. Compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.

  5. Exploitation of pocket gophers and their food caches by grizzly bears

    Science.gov (United States)

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and the importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perideridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  6. Hybrid caches: design and data management

    OpenAIRE

    Valero Bresó, Alejandro

    2013-01-01

    Cache memories have usually been implemented with Static Random-Access Memory (SRAM) technology, since it is the fastest electronic memory technology. However, this technology suffers from a high amount of leakage current, which is a major design concern because leakage energy consumption increases as the transistor size shrinks. Alternative technologies are being considered to reduce this consumption. Among them, embedded Dynamic RAM (eDRAM) technology provides minimal area and le...

  7. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    In relay-enhanced cellular systems, the throughput of a User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop; it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node B (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of the relays. Each relay allocates RBs to relay UEs based on the size of the relay UE's Transport Block. We also design a relay UE ACK feedback mechanism to update the data in the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.

  8. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    ... Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), with genetic operators designed to target this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  9. Cache-Oblivious Planar Orthogonal Range Searching and Counting

    DEFF Research Database (Denmark)

    Arge, Lars; Brodal, Gerth Stølting; Fagerberg, Rolf

    2005-01-01

    We present the first cache-oblivious data structure for planar orthogonal range counting, and improve on previous results for cache-oblivious planar orthogonal range searching. Our range counting structure uses O(N log2 N) space and answers queries using O(log_B N) memory transfers, where B is the block size of any memory level in a multilevel memory hierarchy. Using bit manipulation techniques, the space can be further reduced to O(N). The structure can also be modified to support more general semigroup range sum queries in O(log_B N) memory transfers, using O(N log2 N) space for three-sided queries and O(N log2^2 N / log2 log2 N) space for four-sided queries. Based on the O(N log N) space range counting structure, we develop a data structure that uses O(N log2 N) space and answers three-sided range queries in O(log_B N + T/B) memory transfers, where T is the number of reported points. Based...

  10. An ESL Approach for Energy Consumption Analysis of Cache Memories in SoC Platforms

    Directory of Open Access Journals (Sweden)

    Abel G. Silva-Filho

    2011-01-01

    The design of complex circuits such as SoCs presents two great challenges to designers. One is speeding up the modeling of system functionality; the second is implementing the system in an architecture that meets performance and power consumption requirements. Thus, developing new high-level specification mechanisms that reduce the design effort through automatic architecture exploration is a necessity. This paper proposes an Electronic System Level (ESL) approach for system modeling and cache energy consumption analysis of SoCs, called PCacheEnergyAnalyzer. It takes as input a high-level UML 2.0 profile model of the system and generates a simulation model of a multicore platform that can be analyzed for cache tuning. PCacheEnergyAnalyzer performs static/dynamic energy consumption analysis of caches on platforms that may have different processors. Architecture exploration is achieved by letting designers choose different processors for platform generation and different mechanisms for cache optimization. PCacheEnergyAnalyzer has been validated with several applications from the MiBench, MediaBench, and PowerStone benchmarks, and results show that it provides analysis with reduced simulation effort.

  11. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters...

  12. Utility of whole slide imaging and virtual microscopy in prostate pathology

    DEFF Research Database (Denmark)

    Camparo, Philippe; Egevad, Lars; Algaba, Ferran

    2012-01-01

    Whole slide imaging (WSI) has been used in conjunction with virtual microscopy (VM) for training or proficiency testing purposes, multicentre research, remote frozen section diagnosis and to seek specialist second opinion in a number of organ systems. The feasibility of using WSI/VM for routine ... to examine images at different magnifications as well as to view histology and immunohistochemistry side-by-side on the screen. Use of WSI/VM would also solve the difficulty in obtaining multiple identical copies of small lesions in prostate biopsies for teaching and proficiency testing. It would also permit ... delay in presentation of images on the screen may be very disturbing for a pathologist used to the rapid viewing of glass slides under a microscope. However, these problems are likely to be overcome by technological advances in the future.

  13. Bare-chested lads, with dyed hair and litres of beer: An analysis of Aftonbladet's portrayal of men's and women's football in the 2006 men's World Cup and the 2007 women's World Cup

    OpenAIRE

    Mårtensson, Henning

    2012-01-01

    In this essay I have examined how men and women are presented in images and text in Aftonbladet's reporting from the men's football World Cup in Germany in 2006 and the women's football World Cup in China in 2007. My aim was to look at how the construction of a national discourse differs between the texts on women's and men's football, whether there is any distinct male and female discourse in the images, and how well my findings fit established gender theories. To answer this aim I used...

  14. An Intelligent Cloud Storage Gateway for Medical Imaging.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Guerra, António; Silva, João F; Matos, Sérgio; Costa, Carlos

    2017-09-01

    Historically, medical imaging repositories have been supported by indoor infrastructures. However, the amount of diagnostic imaging procedures has continuously increased over the last decades, imposing several challenges associated with the storage volume, data redundancy and availability. Cloud platforms are focused on delivering hardware and software services over the Internet, becoming an appealing solution for repository outsourcing. Although this option may bring financial and technological benefits, it also presents new challenges. In medical imaging scenarios, communication latency is a critical issue that still hinders the adoption of this paradigm. This paper proposes an intelligent Cloud storage gateway that optimizes data access times. This is achieved through a new cache architecture that combines static rules and pattern recognition for eviction and prefetching. The evaluation results, obtained from experiments over a real-world dataset, show that cache hit ratios can reach around 80%, leading to reductions of image retrieval times by over 60%. The combined use of eviction and prefetching policies proposed can significantly reduce communication latency, even when using a small cache in comparison to the total size of the repository. Apart from the performance gains, the proposed system is capable of adjusting to specific workflows of different institutions.
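
    A condensed sketch of a cache that combines pattern-learned prefetching with recency-based eviction, in the spirit described above (all names and policies below are our assumptions for illustration; the paper's concrete rules are not given in this abstract):

      # Sketch of a gateway cache mixing pattern-based prefetching with
      # LRU eviction. Names and policies are assumed for illustration.
      from collections import OrderedDict, defaultdict

      class GatewayCache:
          def __init__(self, capacity, fetch_fn):
              self.capacity = capacity
              self.fetch = fetch_fn            # pulls a study from the cloud
              self.store = OrderedDict()       # study_id -> data, in LRU order
              self.follows = defaultdict(set)  # study -> studies opened after it
              self.last = None

          def get(self, study_id):
              if study_id not in self.store:
                  self.store[study_id] = self.fetch(study_id)  # cache miss
              self.store.move_to_end(study_id)
              data = self.store[study_id]
              if self.last is not None:
                  self.follows[self.last].add(study_id)  # learn the access pattern
              self.last = study_id
              for nxt in self.follows[study_id]:         # prefetch likely successors
                  if nxt not in self.store:
                      self.store[nxt] = self.fetch(nxt)
              while len(self.store) > self.capacity:
                  self.store.popitem(last=False)         # evict the least recent
              return data

    A real gateway would additionally bound prefetching by available bandwidth and combine such hints with static rules, for example never evicting the studies of currently admitted patients.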

  15. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (the retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities that were hidden from view in 2, 3 and 4 cache sites for 1, 10 and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items over retention intervals of up to one minute without training.

  16. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache; practical examples will help you to get set up quickly and easily. This book is aimed at system administrators and web developers who need to scale websites without throwing money at a large and costly infrastructure. It is assumed that you have some knowledge of the HTTP protocol, how browsers and servers communicate with each other, and basic Linux systems.

  17. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

    Tannins are very common among plant seeds but their effects on the fate of seeds, for example via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that differed only in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species (Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable vs condensed) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentration had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far to transport them.

  18. CACHE: an extended BASIC program which computes the performance of shell and tube heat exchangers

    International Nuclear Information System (INIS)

    Tallackson, J.R.

    1976-03-01

    An extended BASIC program, CACHE, has been written to calculate steady-state heat exchange rates in the core auxiliary heat exchangers (CAHE) designed to remove afterheat from High-Temperature Gas-Cooled Reactors (HTGR). Computationally, these are unbaffled counterflow shell-and-tube heat exchangers. The computational method is straightforward: the exchanger is subdivided into a user-selected number of lengthwise segments, and heat exchange in each segment is calculated in sequence and summed. The program takes the temperature dependencies of all thermal conductivities, viscosities and heat capacities into account, provided these are expressed algebraically. CACHE is easily adapted to compute steady-state heat exchange rates in any unbaffled counterflow exchanger. As now used, CACHE calculates heat removal by liquid water from high-temperature helium and helium mixed with nitrogen, oxygen and carbon monoxide. A second program, FULTN, is described. FULTN computes the geometrical parameters required as input to CACHE; as reported herein, it computes the internal dimensions of the Fulton Station CAHE. The two programs are chained to operate as one. Complete user information is supplied. The basic equations, variable lists, annotated program lists, and sample outputs with explanatory notes are included.
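
    The segment-and-sum method is easy to reproduce; below is a minimal sketch in Python rather than extended BASIC, with constant fluid properties and made-up numbers (CACHE instead re-evaluates conductivities, viscosities and heat capacities in every segment). Counterflow marching needs the unknown cold-outlet temperature at the hot-inlet end, found here by bisection:

      # Steady-state counterflow heat exchanger by lengthwise segments.
      # Constant properties and illustrative numbers only; CACHE uses
      # temperature-dependent correlations per segment.

      def march(t_hot_in, t_cold_out_guess, ua, c_hot, c_cold, n_seg):
          # In counterflow the cold stream EXITS at the hot-inlet end, so we
          # march from that end and recover the implied cold-inlet temperature.
          t_hot, t_cold = t_hot_in, t_cold_out_guess
          for _ in range(n_seg):
              q = (ua / n_seg) * (t_hot - t_cold)  # heat exchanged in this slice
              t_hot -= q / c_hot                   # hot stream cools downstream
              t_cold -= q / c_cold                 # stepping back toward cold inlet
          return t_hot, t_cold

      def counterflow(t_hot_in, t_cold_in, ua, c_hot, c_cold, n_seg=200):
          lo, hi = t_cold_in, t_hot_in             # bracket the cold-outlet temp
          for _ in range(60):                      # bisect on the guess
              guess = 0.5 * (lo + hi)
              _, t_cold_in_implied = march(t_hot_in, guess, ua, c_hot, c_cold, n_seg)
              if t_cold_in_implied > t_cold_in:
                  hi = guess                       # guess too hot
              else:
                  lo = guess
          t_cold_out = 0.5 * (lo + hi)
          t_hot_out, _ = march(t_hot_in, t_cold_out, ua, c_hot, c_cold, n_seg)
          return c_cold * (t_cold_out - t_cold_in), t_hot_out, t_cold_out

      # Hot gas at 750 C cooled by water at 30 C (capacity rates in W/K):
      duty, th_out, tc_out = counterflow(750.0, 30.0, ua=5e4, c_hot=2e4, c_cold=4e4)
      print(f"duty {duty/1e6:.2f} MW, hot out {th_out:.0f} C, cold out {tc_out:.0f} C")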

  19. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of a cache mechanism for complicated design optimization. While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend in this progress indicates that more complicated applications will become the next target of design optimization beyond the growth of computational resources. Just as the progress of the past two decades required response surface techniques, decomposition techniques, etc., a new framework must be introduced for the future of design optimization methods. This paper proposes what we call a cache mechanism for mediating the coming challenge, and briefly demonstrates its promise through the idea of Voronoi-diagram-based cumulative approximation as an example of its implementation, the development of strict robust design, and the extension of design optimization to product variety.

  20. State-dependent compound inhibition of Nav1.2 sodium channels using the FLIPR Vm dye: on-target and off-target effects of diverse pharmacological agents.

    Science.gov (United States)

    Benjamin, Elfrida R; Pruthi, Farhana; Olanrewaju, Shakira; Ilyin, Victor I; Crumley, Gregg; Kutlina, Elena; Valenzano, Kenneth J; Woodward, Richard M

    2006-02-01

    Voltage-gated sodium channels (NaChs) are relevant targets for pain, epilepsy, and a variety of neurological and cardiac disorders. Traditionally, it has been difficult to develop structure-activity relationships for NaCh inhibitors due to rapid channel kinetics and state-dependent compound interactions. Membrane potential (Vm) dyes in conjunction with a high-throughput fluorescence imaging plate reader (FLIPR) offer a satisfactory first-tier solution. Thus, the authors have developed a FLIPR Vm assay of the rat Nav1.2 NaCh. Channels were opened by addition of veratridine, and Vm dye responses were measured. The IC50 values from various structural classes of compounds were compared to the resting state binding constant (Kr) and inactivated state binding constant (Ki) obtained using patch-clamp electrophysiology (EP). The FLIPR values correlated with Ki but not Kr. FLIPR IC50 values fell within 0.1- to 1.5-fold of EP Ki values, indicating that the assay generally reports use-dependent inhibition rather than resting state block. The Library of Pharmacologically Active Compounds (LOPAC, Sigma) was screened. Confirmed hits arose from diverse classes such as dopamine receptor antagonists, serotonin transport inhibitors, and kinase inhibitors. These data suggest that NaCh inhibition is inherent in a diverse set of biologically active molecules and may warrant counterscreening NaChs to avoid unwanted secondary pharmacology.

  1. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

    In this study, we developed hybrid control algorithms for smart base stations (SBSs), devising communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload computation from mobile user equipment and to cache data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  2. Implementation of a cache for a MIPS processor on an FPGA

    OpenAIRE

    Riera Villanueva, Marc

    2013-01-01

    First, the MIPS architecture, the memory hierarchy and the functioning of the cache are explained briefly. Then, the design and implementation of a memory hierarchy for a MIPS processor implemented in VHDL on an FPGA are described.

  3. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    In a real-time system, the use of a scratchpad memory can mitigate the difficulties related to analyzing data caches, whose behavior is inherently hard to predict. We propose to use a scratchpad memory for stack allocated data. While statically allocating stack frames for individual functions...

  4. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    Science.gov (United States)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives, as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster; while caching is not a native feature of CephFS (it exists only in the Ceph object store), we show how one can implement a caching mechanism by profiting from an implementation at a lower level. As our illustration, we present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms for applicable uses such as distributed storage systems and seeking an overall IO performance gain.

  5. Radiation-induced conduction under high electric field (1 x 10^6 to 1 x 10^8 V/m) in polyethylene-terephthalate

    International Nuclear Information System (INIS)

    Maeda, H.; Kurashige, M.; Ito, D.; Nakakita, T.

    1978-01-01

    Radiation-induced conduction in polyethylene-terephthalate (PET) has been measured under high electric field (1.0 x 10^6 to 1.6 x 10^8 V/m). In a 6-μm-thick PET film, saturation of the radiation-induced current occurs at field strengths above 1.2 x 10^8 V/m. This has been demonstrated by the thickness and dose-rate dependence of the induced current. Radiation-induced conductivity increases monotonically with field strength, then shows a saturation tendency; this may be explained by geminate recombination. Above 1 x 10^8 V/m, a slowly increasing radiation-induced current appears, which may be caused by electron injection from the cathode, enhanced by the accumulation of hetero space charges near it.

  6. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this presentation gives the results of testing of storage T2Cs, where the “thin” nature is expressed by the site having either no local data storage or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites, but the network topology and capacity in the USA are significantly different from those in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.

  7. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  8. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo...

  9. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, Irene Klaerke; Ribe, Lars Riisgaard; Bekke, Susanne Lise; Tietze, Anna; Oestergaard, Leif; Mouridsen, Kim [Aarhus University Hospital, Center of Functionally Integrative Neuroscience, Aarhus C (Denmark); Jones, P.S.; Alawneh, Josef [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Puig, Josep; Pedraza, Salva [Dr. Josep Trueta Girona University Hospitals, Department of Radiology, Girona Biomedical Research Institute, Girona (Spain); Gillard, Jonathan H. [University of Cambridge, Department of Radiology, Cambridge (United Kingdom); Warburton, Elisabeth A. [Cambrigde University Hospitals, Addenbrooke, Stroke Unit, Cambridge (United Kingdom); Baron, Jean-Claude [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Centre Hospitalier Sainte Anne, INSERM U894, Paris (France)

    2015-07-15

    Lesion detection in acute stroke by computed-tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and low contrast-to-noise ratio (CNR). We examined the BT-frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve normo- and hypoperfused tissue classification. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering was performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan-duration at which estimated lesion volumes came within 10 % of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5 %) and 17/40 patients (42.5 %), respectively. Down-sampling to 128 x 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time: 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improve tissue classification. (orig.)

  10. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models

    International Nuclear Information System (INIS)

    Mikkelsen, Irene Klaerke; Ribe, Lars Riisgaard; Bekke, Susanne Lise; Tietze, Anna; Oestergaard, Leif; Mouridsen, Kim; Jones, P.S.; Alawneh, Josef; Puig, Josep; Pedraza, Salva; Gillard, Jonathan H.; Warburton, Elisabeth A.; Baron, Jean-Claude

    2015-01-01

    Lesion detection in acute stroke by computed-tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and by a low contrast-to-noise ratio (CNR). We examined the BT frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve the classification of normo- and hypoperfused tissue. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering were performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10% of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5%) and 17/40 patients (42.5%), respectively. Down-sampling to 128 x 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced the minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time: 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. (orig.)

  11. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm degenerates into a special case of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  12. Study on data acquisition system based on reconfigurable cache technology

    Science.gov (United States)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems; it represents the waveform-processing capability of the system per unit time. The higher the waveform capture rate, the greater the chance of capturing elusive events and the more reliable the test results. This paper first analyzes the impact of several factors on the waveform capture rate of the system; a novel technology based on a reconfigurable cache is then proposed to optimize the system architecture. Simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in the system's cost; the proposed technology thus has broad application prospects.

  13. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, bringing the two together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data-structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e., with no linear system to solve at each step). A data structure of type 'structure of arrays' is kept for the global data storage, providing flexibility and efficiency for common operations on kinematic fields (displacement, velocity and acceleration). By contrast, for elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time-consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but handling, specifically for this work, the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache-memory access failures, weighing the gains obtained within the elementary operations against the potential overhead generated by the data-structure switch. Obtained results are very satisfactory, especially
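
    A minimal sketch of the gather-into-temporary-AoS idea described above, in Python with NumPy (not EUROPLEXUS code; the field names and placeholder arithmetic are assumptions):

    ```python
    import numpy as np

    n = 1_000_000
    # Global storage as a structure of arrays (SoA): one array per kinematic field.
    soa = {"disp": np.zeros(n), "vel": np.zeros(n), "acc": np.zeros(n)}

    def process_group(cell_ids: np.ndarray) -> np.ndarray:
        # Gather the cell group into a temporary array of structures (AoS):
        # one row per cell with its fields adjacent, so the hot loop below
        # walks contiguous memory and fills cache lines efficiently.
        aos = np.stack([soa["disp"][cell_ids],
                        soa["vel"][cell_ids],
                        soa["acc"][cell_ids]], axis=1)
        # Elementary operation over the compact block (placeholder arithmetic,
        # standing in for internal-force or flux computations).
        return aos[:, 0] + 0.5 * aos[:, 1] + 0.25 * aos[:, 2]

    result = process_group(np.arange(1024))   # process one cell group
    ```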

  14. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    Web proxy caching reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Because the cache is limited in size and expensive compared with other storage, a cache replacement algorithm determines which page to evict when the cache is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU) and randomized policies may discard important pages just before they are used, and cannot be well optimized because intelligently evicting a page requires additional decision information. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence classifier prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded with NB, and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over NB with all features, with BFS, and with IWSS-embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over J48 with all features, with IWSS-embedded NB, and with BFS, respectively. IWSS-embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO; it reduces the computation time of NB by 0.1383 and of J48 by 2.998.
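
    As a concrete reference for the conventional policies named above, a minimal sketch of LRU replacement for a fixed-capacity proxy cache (the class and its interface are illustrative, not from the paper):

    ```python
    from collections import OrderedDict

    class LRUCache:
        """Fixed-capacity page cache evicting the least recently used entry."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()

        def get(self, url: str):
            if url not in self.store:
                return None                    # miss: caller fetches from the origin
            self.store.move_to_end(url)        # mark as most recently used
            return self.store[url]

        def put(self, url: str, page: bytes):
            if url in self.store:
                self.store.move_to_end(url)
            self.store[url] = page
            if len(self.store) > self.capacity:
                self.store.popitem(last=False) # evict the least recently used page
    ```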

  15. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Full Text Available Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  16. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  17. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Science.gov (United States)

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications where multi-attribute sensory data are queried from the network continuously and periodically. Often, certain sensory data do not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used to answer concurrent queries and may be reused to answer forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data less likely to be of interest to forthcoming queries are cached in the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing the two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
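
    A minimal sketch of the popularity computation and two-tier placement as we read the abstract (the window length, sink capacity and data layout are assumptions):

    ```python
    from collections import Counter, deque

    WINDOW = 10                                # recent time slots considered (assumed)
    recent_queries = deque(maxlen=WINDOW)      # each slot: list of queried attributes

    def popularity() -> Counter:
        """Count how often each attribute was queried over the recent window."""
        counts = Counter()
        for slot in recent_queries:
            counts.update(slot)
        return counts

    def place(readings: dict, sink_capacity: int):
        """Cache the most popular readings at the sink, the rest at cell heads."""
        ranked = [a for a, _ in popularity().most_common() if a in readings]
        sink = {a: readings[a] for a in ranked[:sink_capacity]}
        heads = {a: v for a, v in readings.items() if a not in sink}
        return sink, heads
    ```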

  18. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\frac{N}{B}\log_{M/B}\frac{N}{B}+T/B)$ memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.

  19. Universal Voltage Conveyor and its Novel Dual-Output Fully-Cascadable VM APF Application

    Directory of Open Access Journals (Sweden)

    Norbert Herencsar

    2017-03-01

    Full Text Available This letter presents a novel realization of a voltage-mode (VM) first-order all-pass filter (APF) with attractive features. The proposed circuit employs a single readily available six-terminal active device called the universal voltage conveyor (UVC) and only grounded passive components, which predicts its easy monolithic integration and desired circuit simplicity. The auxiliary voltage input (W) and output (ZP, ZN) terminals of the device fully ensure easy cascadability of the VM APF, since the input and output terminal impedances are theoretically infinitely high and zero, respectively. Moreover, thanks to the mutually inverse outputs of the UVC, the proposed filter simultaneously provides both inverting and non-inverting outputs from the same configuration. All of these features make the UVC a unique active device among those currently available in the literature. The behavior of the filter was experimentally measured using the readily available UVC-N1C 0520 chip, which was produced in cooperation with ON Semiconductor Czech Republic, Ltd.

  20. CernVM Co-Pilot: an Extensible Framework for Building Scalable Cloud Computing Infrastructures

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of distributed computing resources. Such resources include commercial computing clouds (e.g. Amazon EC2), scientific computing clouds (e.g. CERN lxcloud), as well as the machines of users participating in volunteer computing projects (e.g. BOINC). The framework consists of components that communicate using the Extensible Messaging and Presence protocol (XMPP), allowing for new components to be developed in virtually any programming language and interfaced to existing Grid and batch computing infrastructures exploited by the High Energy Physics community. Co-Pilot has been used to execute jobs for both the ALICE and ATLAS experiments at CERN. CernVM Co-Pilot is also one of the enabling technologies behind the LHC@home 2.0 volunteer computing project, which is the first such project that exploits virtual machine technology. The use of virtual machines eliminates the necessity of modifying existing applications and adapt...

  1. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point work within each CG iteration is spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning.
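
    For orientation, a generic compressed-sparse-row (CSR) SPMV kernel in Python (textbook form, not the paper's implementation); the indirect reads of x in the inner loop are what ordering and partitioning strategies try to make cache-friendly:

    ```python
    import numpy as np

    def spmv_csr(val, col_idx, row_ptr, x):
        """y = A @ x for a sparse matrix A stored as CSR (val, col_idx, row_ptr)."""
        y = np.zeros(len(row_ptr) - 1)
        for i in range(len(y)):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += val[k] * x[col_idx[k]]   # irregular, ordering-dependent reads of x
        return y
    ```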

  2. MonetDB/X100 - A DBMS in the CPU cache

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); N.J. Nes (Niels); S. Héman (Sándor)

    2005-01-01

    X100 is a new execution engine for the MonetDB system that improves execution speed and overcomes its main-memory limitation. It introduces the concept of in-cache vectorized processing, which strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and

  3. Non-Toxic Metabolic Management of Metastatic Cancer in VM Mice: Novel Combination of Ketogenic Diet, Ketone Supplementation, and Hyperbaric Oxygen Therapy.

    Directory of Open Access Journals (Sweden)

    A M Poff

    Full Text Available The Warburg effect and tumor hypoxia underlie a unique cancer metabolic phenotype characterized by glucose dependency and aerobic fermentation. We previously showed that two non-toxic metabolic therapies - the ketogenic diet with concurrent hyperbaric oxygen (KD+HBOT) and dietary ketone supplementation - could increase survival time in the VM-M3 mouse model of metastatic cancer. We hypothesized that combining these therapies could provide an even greater therapeutic benefit in this model. Mice receiving the combination therapy demonstrated a marked reduction in tumor growth rate and metastatic spread, and lived twice as long as control animals. To further understand the effects of these metabolic therapies, we characterized the effects of high glucose (control), low glucose (LG), ketone supplementation (βHB), hyperbaric oxygen (HBOT), or combination therapy (LG+βHB+HBOT) on VM-M3 cells. Individually and combined, these metabolic therapies significantly decreased VM-M3 cell proliferation and viability. HBOT, alone or in combination with LG and βHB, increased ROS production in VM-M3 cells. This study strongly supports further investigation into this metabolic therapy as a potential non-toxic treatment for late-stage metastatic cancers.

  4. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  5. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and herewith a decrease of the performance. To enhance the performance often caching mechanisms are adopted. However, the implementations of these

  6. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  7. NMDA antagonist, but not nNOS inhibitor, requires AMPA receptors in the ventromedial prefrontal cortex (vmPFC) to induce antidepressant-like effects

    DEFF Research Database (Denmark)

    Pereira, V. S.; Wegener, Gregers; Joca, S. R.

    2013-01-01

    of the glutamatergic and nitrergic systems of the vmPFC on the behavioral consequences induced by forced swimming (FS), an animal model of depression. Male Wistar rats (230-260g) with guide cannulas aimed at the prelimbic (PL) region of vmPFC were submitted to a 15min session of FS and, 24h later, they were submitted...

  8. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.

  9. Physical metallurgy: Scientific school of the Academician V.M. Schastlivtsev

    Science.gov (United States)

    Tabatchikova, T. I.

    2016-04-01

    This paper honors Academician Vadim Mikhailovich Schastlivtsev, a prominent scientist in the field of metal physics and materials science. The article analyzes the topical issues of the physical metallurgy of the early 21st century and the contribution of V.M. Schastlivtsev and his school to the science of phase and structural transformations in steels. In 2015, Vadim Mikhailovich celebrates his 80th birthday, and this paper is timed to coincide with that date. A list of his main publications is included.

  10. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  11. An Economic Model for Self-tuned Cloud Caching

    OpenAIRE

    Dash, Debabrata; Kantere, Verena; Ailamaki, Anastasia

    2009-01-01

    Cloud computing, the new trend for service infrastructures requires user multi-tenancy as well as minimal capital expenditure. In a cloud that services large amounts of data that are massively collected and queried, such as scientific data, users typically pay for query services. The cloud supports caching of data in order to provide quality query services. User payments cover query execution costs and maintenance of cloud infrastructure, and incur cloud profit. The challenge resides in provi...

  12. Cache Performance Optimization for SoC Vedio Applications

    OpenAIRE

    Lei Li; Wei Zhang; HuiYao An; Xing Zhang; HuaiQi Zhu

    2014-01-01

    Chip Multiprocessors (CMPs) are adopted by industry to deal with the speed limit of the single-processor. But memory access has become the bottleneck of the performance, especially in multimedia applications. In this paper, a set of management policies is proposed to improve the cache performance for a SoC platform of video application. By analyzing the behavior of Vedio Engine, the memory-friendly writeback and efficient prefetch policies are adopted. The experiment platform is simulated by ...

  13. The effect of future time perspective on delay discounting is mediated by the gray matter volume of vmPFC.

    Science.gov (United States)

    Guo, Yiqun; Chen, Zhiyi; Feng, Tingyong

    2017-07-28

    Although several previous studies have shown that individuals' attitude towards time could affect their intertemporal preference, little is known about the neural basis of the relation between time perspective (TP) and delay discounting. In the present study, we quantified the gray matter (GM) cortical volume using voxel-based morphometry (VBM) methods to investigate the effect of TP on delay discounting (DD) across two independent samples. For group 1 (102 healthy college students; 46 male; 20.40 ± 1.87 years), behavioral results showed that only Future TP was a significant predictor of DD, and higher scores on Future TP were related to lower discounting rates. Whole-brain analysis revealed that steeper discounting correlated with greater GM volume in the ventromedial prefrontal cortex (vmPFC) and ventral part of posterior cingulate cortex (vPCC). Also, GM volume of a cluster in the vmPFC was correlated with Future TP. Interestingly, there was an overlapping region in vmPFC that was correlated with both DD and Future TP. Region-of-interest analysis further indicated that the overlapping region of vmPFC played a partially mediating role in the relation between Future TP and DD in the other independent dataset (Group 2, 36 healthy college students; 14 male; 20.18±1.80 years). Taken together, our results provide a new perspective from neural basis for explaining the relation between DD and future TP. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  15. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. Several wireless communication methods are currently available for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, one-segment digital terrestrial broadcasting service was launched in Japan in 2006, and high-performance digital broadcasting for mobile hosts has recently become available. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (the available area of location-dependent data) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
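
    A minimal sketch of the scope/mobility prefetch-and-replace test as we read it (the rectangular scope, waypoint route and 600-second horizon are our assumptions):

    ```python
    def will_visit(scope, route, horizon):
        """scope: (xmin, ymin, xmax, ymax); route: list of (t, x, y) waypoints."""
        xmin, ymin, xmax, ymax = scope
        return any(xmin <= x <= xmax and ymin <= y <= ymax
                   for t, x, y in route if t <= horizon)

    def on_broadcast(item, cache, route, horizon=600):
        # Prefetch: keep a broadcast item only if its scope lies on our path.
        if will_visit(item["scope"], route, horizon):
            cache[item["id"]] = item
        # Replacement: drop cached items whose scope we will no longer visit.
        for key in [k for k, v in cache.items()
                    if not will_visit(v["scope"], route, horizon)]:
            del cache[key]
    ```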

  16. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  17. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy...

  18. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to $[\lg e+O(\lg\lg B/\lg B)]\log_B N+O(1)$. This factor approaches $\lg e \approx 1.443$ as $B$ increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in $\log_B N+O(1)$ block transfers, this result establishes a separation between the (2-level) DAM model and cache...

  19. vmPFC activation during a stressor predicts positive emotions during stress recovery

    Science.gov (United States)

    Yang, Xi; Garcia, Katelyn M; Jung, Youngkyoo; Whitlow, Christopher T; McRae, Kateri; Waugh, Christian E

    2018-01-01

    Despite accruing evidence showing that positive emotions facilitate stress recovery, the neural basis for this effect remains unclear. To identify the underlying mechanism, we compared stress recovery for people reflecting on a stressor while in a positive emotional context with that for people in a neutral context. While blood–oxygen-level dependent data were being collected, participants (N = 43) performed a stressful anagram task, which was followed by a recovery period during which they reflected on the stressor while watching a positive or neutral video. Participants also reported positive and negative emotions throughout the task as well as retrospective thoughts about the task. Although there was no effect of experimental context on emotional recovery, we found that ventromedial prefrontal cortex (vmPFC) activation during the stressor predicted more positive emotions during recovery, which in turn predicted less negative emotions during recovery. In addition, the relationship between vmPFC activation and positive emotions during recovery was mediated by decentering, the meta-cognitive detachment of oneself from one's feelings. In sum, successful recovery from a stressor seems to be due to activation of positive emotion-related regions during the stressor itself as well as to their downstream effects on certain cognitive forms of emotion regulation. PMID:29462404

  20. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    Full Text Available In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using the Web APIs of the Google Drive, Dropbox and Facebook Internet services.

  1. A Cross-Layer Framework for Designing and Optimizing Deeply-Scaled FinFET-Based Cache Memories

    Directory of Open Access Journals (Sweden)

    Alireza Shafaei

    2015-08-01

    Full Text Available This paper presents a cross-layer framework for designing and optimizing energy-efficient cache memories made of deeply-scaled FinFET devices. The proposed design framework spans the device, circuit and architecture levels and considers both super- and near-threshold modes of operation. Initially, at the device level, seven FinFET devices on a 7-nm process technology are designed, in which only one geometry-related parameter (e.g., fin width, gate length, gate underlap) is changed per device. Next, at the circuit level, standard 6T and 8T SRAM cells made of these 7-nm FinFET devices are characterized and compared in terms of static noise margin, access latency, leakage power consumption, etc. Finally, cache memories with all different combinations of devices and SRAM cells are evaluated at the architecture level using a modified version of the CACTI tool with FinFET support and other considerations for deeply-scaled technologies. Using this design framework, it is observed that an L1 cache memory made of longer-channel FinFET devices operating in the near-threshold regime achieves the minimum energy operation point.

  2. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. The second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
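
    A minimal sketch of the latency trade-off being compared (the db.search interface and cache layout are hypothetical): a federated query waits for the slowest member database, whereas the cached variant is a single local lookup.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def federated_search(term, databases):
        # Fan the query out to every member database; total latency is
        # roughly that of the slowest responder.
        with ThreadPoolExecutor(max_workers=len(databases)) as pool:
            results = pool.map(lambda db: db.search(term), databases)
        return [hit for result in results for hit in result]

    def cached_search(term, cache_index):
        # One lookup against a central cache harvested from the member databases.
        return cache_index.get(term.lower(), [])
    ```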

  3. CSU Final Report on the Math/CS Institute CACHE: Communication-Avoiding and Communication-Hiding at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State University]

    2014-06-10

    The CACHE project entails researching and developing new versions of numerical algorithms that result in data reuse that can be scheduled in a communication avoiding way. Since memory accesses take more time than any computation and require the most power, the focus on turning data reuse into data locality is critical to improving performance and reducing power usage in scientific simulations. This final report summarizes the accomplishments at Colorado State University as part of the CACHE project.

  4. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and has shown significant performance benefits with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services, and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  5. Dynamic virtual AliEn Grid sites on Nimbus with CernVM

    International Nuclear Information System (INIS)

    Harutyunyan, A; Buncic, P; Freeman, T; Keahey, K

    2010-01-01

    We describe the work on enabling one-click deployment of Grid sites of the AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of the computing resources of the cloud with the resource pool of the AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker, developed at Argonne National Laboratory and the University of Chicago, and CernVM, a baseline virtual software appliance for LHC experiments developed at CERN. Two approaches to dynamic virtual AliEn Grid site deployment are presented.

  6. A Unified Buffering Management with Set Divisible Cache for PCM Main Memory

    Institute of Scientific and Technical Information of China (English)

    Mei-Ying Bian; Su-Kyung Yoon; Jeong-Geun Kim; Sangjae Nam; Shin-Dug Kim

    2016-01-01

    This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set divisible last-level cache (LLC). To achieve high performance similar to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) is comprised of dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), and a set divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using our proposed SABU, which can significantly reduce the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency by up to 0.43 times compared with conventional DRAM main memory. Meanwhile, the average memory energy consumption can be reduced by 19.7%.

  7. Stereoscopic Visualization of Diffusion Tensor Imaging Data: A Comparative Survey of Visualization Techniques

    International Nuclear Information System (INIS)

    Raslan, O.; Debnam, J.M.; Ketonen, L.; Kumar, A.J.; Schellingerhout, D.; Wang, J.

    2013-01-01

    Diffusion tensor imaging (DTI) data has traditionally been displayed as a gray scale functional anisotropy map (GSFM) or color coded orientation map (CCOM). These methods use black and white or color with intensity values to map the complex multidimensional DTI data to a two-dimensional image. Alternative visualization techniques, such as Vmax maps, utilize an enhanced graphical representation of the principal eigenvector by means of a headless arrow on a regular non-stereoscopic (VM) or stereoscopic (VMS) display. A survey of clinical utility in patients with intracranial neoplasms was carried out by 8 neuroradiologists using traditional and non-traditional methods of DTI display. Pairwise comparison studies of 5 intracranial neoplasms were performed with a structured questionnaire comparing GSFM, CCOM, VM, and VMS. Six of the 8 neuroradiologists favored Vmax maps over traditional methods of display (GSFM and CCOM). When comparing the stereoscopic (VMS) and non-stereoscopic (VM) modes, 4 favored VMS, 2 favored VM, and 2 had no preference. In conclusion, processing and visualizing DTI data stereoscopically is technically feasible. An initial survey of users indicated that Vmax-based display methodology, with or without stereoscopic visualization, seems to be preferred over traditional methods to display DTI data.

  8. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. Several wireless communication methods are currently available for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, one-segment digital terrestrial broadcasting service was launched in Japan in 2006, and high-performance digital broadcasting for mobile hosts has recently become available. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (the available area of location-dependent data) and “mobility specification” (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  9. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
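
    A minimal sketch contrasting the two proxy behaviours described, with hypothetical fetch callbacks standing in for the actual XRootd transport:

    ```python
    BLOCK = 1 << 20   # 1 MiB blocks for the partial-file variant (assumed size)

    class WholeFileProxy:
        """Starts fetching the whole file as soon as it is opened."""
        def __init__(self, fetch_remote):
            self.fetch_remote, self.disk = fetch_remote, {}

        def open(self, path):
            if path not in self.disk:
                self.disk[path] = self.fetch_remote(path)   # fetch everything now
            return self.disk[path]

    class PartialFileProxy:
        """Downloads only the blocks that reads actually touch."""
        def __init__(self, fetch_block):
            self.fetch_block, self.blocks = fetch_block, {}

        def read(self, path, offset, size):
            data = b""
            first = offset // BLOCK
            for b in range(first, (offset + size - 1) // BLOCK + 1):
                if (path, b) not in self.blocks:            # miss: fetch on demand
                    self.blocks[(path, b)] = self.fetch_block(path, b * BLOCK, BLOCK)
                data += self.blocks[(path, b)]
            start = offset - first * BLOCK
            return data[start:start + size]
    ```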

  10. Development of a Temperature Controller for a Vuilleumier (VM) Cycle Power Cylinder

    Science.gov (United States)

    1975-10-01

    the system in the event of a shorted sensor; both of these actions turn the power section of the controller "off," and it cannot be repowered until...400-Hz power to a low-level DC with the attendant necessity of using a 400-Hz power transformer. Thus use of DC will allow a less complicated... (Report AFFDL-TR-75-99)

  11. A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.

    Science.gov (United States)

    Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos

    2016-01-01

    Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on the splitting and dynamic management of Digital Imaging and Communications in Medicine (DICOM) objects. The proposed solution was successfully deployed in a regional PACS archive. The results obtained proved that it is better than conventional approaches, as it reduces remote access latency and also the required cache storage space.

  12. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.

    Directory of Open Access Journals (Sweden)

    Weidong Gu

    Full Text Available Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. The above approach structurally improves the stability of the overlay multicast tree. We further utilized CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can obviously enhance the stability of the data distribution.

  13. A New Caching Technique to Support Conjunctive Queries in P2P DHT

    Science.gov (United States)

    Kobatake, Koji; Tagashira, Shigeaki; Fujita, Satoshi

    P2P DHT (Peer-to-Peer Distributed Hash Table) is a typical technique for realizing efficient management of shared resources distributed over a network, and keyword search over such networks, in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in P2P DHT. The basic idea of the proposed technique is to share global information on past trials by locally caching search results for conjunctive queries and registering that fact in the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is evaluated experimentally by simulation. The results indicate that by using the proposed method, the amount of returned data is reduced by 60% compared with a conventional P2P DHT that does not support conjunctive queries.
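
    A minimal sketch of the result-caching idea as we read it (the dht.get/dht.put interface is hypothetical): the result set of a conjunctive query is registered under a key derived from the sorted keyword set, so later peers can find the past trial directly.

    ```python
    import hashlib

    def conj_key(keywords):
        """Canonical DHT key for a conjunctive query, order-insensitive."""
        canonical = ",".join(sorted(k.lower() for k in keywords))
        return hashlib.sha1(canonical.encode()).hexdigest()

    def conjunctive_lookup(dht, keywords):
        key = conj_key(keywords)
        cached = dht.get(key)                 # a past trial shared via the DHT
        if cached is not None:
            return cached
        # Fall back: intersect per-keyword posting lists (assumed to be stored
        # under the raw keywords), then register the result for future queries.
        result = set.intersection(*(set(dht.get(k) or []) for k in keywords))
        dht.put(key, result)
        return result
    ```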

  14. GABA levels in the ventromedial prefrontal cortex during the viewing of appetitive and disgusting food images.

    Science.gov (United States)

    Padulo, Caterina; Delli Pizzi, Stefano; Bonanni, Laura; Edden, Richard A E; Ferretti, Antonio; Marzoli, Daniele; Franciotti, Raffaella; Manippa, Valerio; Onofrj, Marco; Sepede, Gianna; Tartaro, Armando; Tommasi, Luca; Puglisi-Allegra, Stefano; Brancucci, Alfredo

    2016-10-01

    Characterizing how the brain appraises the psychological dimensions of reward is one of the central topics of neuroscience. It has become clear that dopamine neurons are implicated in the transmission of both rewarding information and aversive and alerting events through two different neuronal populations involved in encoding the motivational value and the motivational salience of stimuli, respectively. Nonetheless, there is less agreement on the role of the ventromedial prefrontal cortex (vmPFC) and the related neurotransmitter release during the processing of biologically relevant stimuli. To address this issue, we employed magnetic resonance spectroscopy (MRS), a non-invasive methodology that allows detection of some metabolites in the human brain in vivo, in order to assess the role of the vmPFC in encoding stimulus value rather than stimulus salience. Specifically, we measured gamma-aminobutyric acid (GABA) and, for control purposes, Glx levels in healthy subjects during the observation of appetitive and disgusting food images. We observed a decrease of GABA and no changes in Glx concentration in the vmPFC in both conditions. Furthermore, a comparatively smaller GABA reduction during the observation of appetitive food images than during the observation of disgusting food images was positively correlated with the scores obtained on the body image concerns sub-scale of the Body Uneasiness Test (BUT). These results are consistent with the idea that the vmPFC plays a crucial role in processing both rewarding and aversive stimuli, possibly by encoding stimulus salience through glutamatergic and/or noradrenergic projections to deeper mesencephalic and limbic areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Volume-monitored chest CT: a simplified method for obtaining motion-free images near full inspiratory and end expiratory lung volumes

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Kathryn S. [The Ohio State University College of Medicine, Columbus, OH (United States); Long, Frederick R. [Nationwide Children's Hospital, The Children's Radiological Institute, Columbus, OH (United States); Flucke, Robert L. [Nationwide Children's Hospital, Department of Pulmonary Medicine, Columbus, OH (United States); Castile, Robert G. [The Research Institute at Nationwide Children's Hospital, Center for Perinatal Research, Columbus, OH (United States)]

    2010-10-15

    Lung inflation and respiratory motion during chest CT affect diagnostic accuracy and reproducibility. To describe a simple volume-monitored (VM) method for performing reproducible, motion-free full inspiratory and end expiratory chest CT examinations in children. Fifty-two children with cystic fibrosis (mean age 8.8 ± 2.2 years) underwent pulmonary function tests and inspiratory and expiratory VM-CT scans (1.25-mm slices, 80-120 kVp, 16-40 mAs) according to an IRB-approved protocol. The VM-CT technique utilizes instruction from a respiratory therapist, a portable spirometer and real-time documentation of lung volume on a computer. CT image quality was evaluated for achievement of targeted lung-volume levels and for respiratory motion. Children achieved 95% of vital capacity during full inspiratory imaging. For end expiratory scans, 92% were at or below the child's end expiratory level. Two expiratory exams were judged to be at suboptimal volumes. Two inspiratory (4%) and three expiratory (6%) exams showed respiratory motion. Overall, 94% of scans were performed at optimal volumes without respiratory motion. The VM-CT technique is a simple, feasible method in children as young as 4 years to achieve reproducible high-quality full inspiratory and end expiratory lung CT images. (orig.)

  16. Big Data Caching for Networking: Moving from Cloud to Edge

    OpenAIRE

    Zeydan, Engin; Baştuğ, Ejder; Bennis, Mehdi; Kader, Manhal Abdel; Karatepe, Alper; Er, Ahmet Salih; Debbah, Mérouane

    2016-01-01

    In order to cope with the relentless data tsunami in 5G wireless networks, current approaches such as acquiring new spectrum, deploying more base stations (BSs) and increasing nodes in mobile packet core networks are becoming ineffective in terms of scalability, cost and flexibility. In this regard, context-aware 5G networks with edge/cloud computing and exploitation of big data analytics can yield significant gains for mobile operators. In this article, proactive content caching in...

  17. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber; Alameer, Alaa; Chaaban, Anas; Sezgin, Aydin; Paulraj, Arogyaswami

    2017-01-01

    the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file

  18. Constraining models of postglacial rebound using space geodesy: a detailed assessment of model ICE-5G (VM2) and its relatives

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. Richard

    2010-05-01

    Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which well fits many hundred Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10²¹ Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10²¹ Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes rotational feedback, Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr⁻¹. The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr⁻¹ in ICE-4G and 3 mm yr⁻¹ in ICE-5G, in

  19. Impact of cache memory on accelerating the execution of a face-detection algorithm in embedded systems

    Directory of Open Access Journals (Sweden)

    Alejandro Cabrera Aldaya

    2012-06-01

    Full Text Available This paper analyzes the impact of cache memory on accelerating the execution of the Viola-Jones face-detection algorithm on a processing system based on the Microblaze processor embedded in an FPGA. The algorithm is presented, a software implementation is described, and its most relevant functions and the locality characteristics of its instructions and data are analyzed. The impact of the instruction and data caches is analyzed, both in capacity (between 2 and 16 kB) and in line size (4 and 8 words). The results, obtained using a Spartan3A Starter Kit development board based on a Spartan3A XC3S700A FPGA, with the Microblaze processor running at 62.5 MHz and 64 MB of external DDR2 memory at 125 MHz, show a greater impact for the instruction cache than for the data cache, with optimal values of 8 kB for the instruction cache and between 4 and 16 kB for the data cache. With these caches, a speed-up of 17x is achieved relative to executing the algorithm from external memory. The cache line size has little influence on the algorithm's speed-up.

  20. Experience on QA in the CernVM File System

    CERN Multimedia

    CERN. Geneva; MEUSEL, Rene

    2015-01-01

The CernVM-File System (CVMFS) delivers experiment software installations to thousands of globally distributed nodes in the WLCG and beyond. In recent years it became a mission-critical component for offline data processing of the LHC experiments and many other collaborations. From a software engineering perspective, CVMFS is a medium-sized C++ system-level project. Following the growth of the project, we introduced a number of measures to improve the code quality, testability, and maintainability. In particular, we found code reviews through GitHub pull requests and automated unit and integration testing very useful. We are also transitioning to test-driven development for new features and bug fixes. These processes are supported by a number of tools, such as Google Test, Jenkins, Docker, and others. We would like to share our experience on problems we encountered and on which processes and tools worked well for us.

  1. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

Today independent publishers are offering digital libraries with fulltext archives. In an attempt to provide a single user-interface to a large set of archives, the studied Article-Database-Service offers a consolidated interface to a geographically distributed set of archives. While this approach … as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can …

  2. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in a cloud-based video surveillance system, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
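
    The location-correlated cache strategy lends itself to a compact illustration. The sketch below is a toy prefetch policy in that spirit; the camera adjacency map, the cache layout and the fetch function are invented for the example and are not the paper's algorithm.

    ```python
    # Toy location-aware prefetch: when a user plays back footage from one
    # camera, pre-load the same time window from physically adjacent cameras,
    # guessing the user will follow a subject across views. The adjacency
    # map and fetch function are invented for illustration.
    neighbors = {"cam1": ["cam2"], "cam2": ["cam1", "cam3"], "cam3": ["cam2"]}
    cache = {}

    def fetch_segment(camera, t0, t1):
        return f"<video {camera} {t0}-{t1}>"        # stand-in for a cloud storage read

    def play(camera, t0, t1):
        key = (camera, t0, t1)
        clip = cache.pop(key, None) or fetch_segment(camera, t0, t1)
        # Prefetch the same window from neighboring cameras for likely next reads.
        for n in neighbors.get(camera, []):
            cache.setdefault((n, t0, t1), fetch_segment(n, t0, t1))
        return clip

    print(play("cam2", 100, 160))   # fetches cam2, prefetches cam1 and cam3
    print(play("cam3", 100, 160))   # served from the prefetch cache
    ```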

  3. dCache, towards Federated Identities & Anonymized Delegation

    Science.gov (United States)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect in regards to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a more fine-granular authorization mechanism that supports anonymized delegation.

  4. Something different - caching applied to calculation of impedance matrix elements

    CSIR Research Space (South Africa)

    Lysko, AA

    2012-09-01

    Full Text Available of the multipliers, the approximating functions are used any required parameters, such as input impedance or gain pattern etc. The method is relatively straightforward but, especially for small to medium matrices, requires spending time on filling... of the computing the impedance matrix for the method of moments, or a similar method, such as boundary element method (BEM) [22], with the help of the flowchart shown in Figure 1. Input Parameters (a) Search the cached data for a match (b) A match found...

  5. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

Full Text Available In multitasking real-time systems, the choice of scheduling algorithm is an important factor in ensuring that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which then affect the schedulability of the task. In this paper we compare FP and EDF scheduling algorithms in the presence of CRPD using the state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF, making the choice of the task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.
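
    The interplay of CRPD and fixed-priority schedulability that the paper analyses can be illustrated with a crude sketch: fold a per-pre-emption cache reload penalty into the classic response-time recurrence. The task set and the gamma penalties below are hypothetical, and real CRPD analysis derives the penalties from evicted useful cache blocks rather than assuming constants.

    ```python
    # Minimal sketch: fixed-priority response-time analysis with a crude
    # cache-related pre-emption delay (CRPD) term. Tasks are (C, T) pairs,
    # sorted highest priority first; gamma[j] is a hypothetical upper bound
    # on the cache reload cost each pre-emption by task j inflicts.
    from math import ceil

    def response_time(tasks, gamma, i, limit=10_000):
        """Iterate R_i = C_i + sum_j ceil(R_i / T_j) * (C_j + gamma_j)
        over all higher-priority tasks j < i; returns None if unschedulable."""
        C_i, T_i = tasks[i]
        R = C_i
        while R <= T_i and limit > 0:
            R_next = C_i + sum(ceil(R / tasks[j][1]) * (tasks[j][0] + gamma[j])
                               for j in range(i))
            if R_next == R:
                return R          # converged: worst-case response time
            R, limit = R_next, limit - 1
        return None               # deadline (taken equal to the period) missed

    # Hypothetical task set: (C = WCET, T = period), plus per-task CRPD bounds.
    tasks = [(1, 5), (2, 10), (3, 20)]
    gamma = [0.2, 0.4, 0.0]
    for i in range(len(tasks)):
        print(f"task {i}: R = {response_time(tasks, gamma, i)}")
    ```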

  6. A critical survey of live virtual machine migration techniques

    Directory of Open Access Journals (Sweden)

    Anita Choudhary

    2017-11-01

Full Text Available Virtualization techniques effectively handle the growing demand for computing, storage, and communication resources in large-scale Cloud Data Centers (CDC). They help to achieve different resource management objectives like load balancing, online system maintenance, proactive fault tolerance, power management, and resource sharing through Virtual Machine (VM) migration. VM migration is a resource-intensive procedure as VMs continuously demand appropriate CPU cycles, cache memory, memory capacity, and communication bandwidth. Therefore, this process degrades the performance of running applications and adversely affects the efficiency of the data centers, particularly when Service Level Agreements (SLA) and critical business objectives are to be met. Live VM migration is frequently used because it allows the availability of the application service while migration is performed. In this paper, we make an exhaustive survey of the literature on live VM migration and analyze the various proposed mechanisms. We first classify the types of live VM migration (single, multiple and hybrid). Next, we categorize VM migration techniques based on duplication mechanisms (replication, de-duplication, redundancy, and compression) and awareness of context (dependency, soft page, dirty page, and page fault) and evaluate the various live VM migration techniques. We discuss various performance metrics like application service downtime, total migration time and amount of data transferred. CPU, memory and storage data are transferred during the process of VM migration, and we identify the category of data that needs to be transferred in each case. We present a brief discussion on security threats in live VM migration and categorize them in three different classes (control plane, data plane, and migration module). We also explain the security requirements and existing solutions to mitigate possible attacks. Specific gaps are identified and the research challenges in improving

  7. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution …

  8. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  9. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

Following the smashing success of the XRootd-based USCMS data-federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching-proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file-system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...
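
    The first implementation described above amounts to starting a whole-file background fetch on open and serving reads from the growing local copy. A minimal sketch of that idea, using plain HTTP in place of the XRootd protocol and invented class and method names:

    ```python
    # Toy whole-file caching proxy in the spirit of mode one: on open, start
    # fetching the entire remote file into a local cache file; reads block
    # until the needed prefix has arrived.
    import os, threading, time, urllib.request

    class CachingProxyFile:
        def __init__(self, url, cache_path):
            self.cache_path = cache_path
            self.done = False
            open(cache_path, "wb").close()          # create the cache file up front
            threading.Thread(target=self._fetch, args=(url,), daemon=True).start()

        def _fetch(self, url):
            with urllib.request.urlopen(url) as src, open(self.cache_path, "r+b") as dst:
                while chunk := src.read(1 << 20):   # stream in 1 MiB chunks
                    dst.write(chunk)
                    dst.flush()
            self.done = True

        def read(self, offset, size):
            # Block until the requested range is cached (or the fetch finished).
            while not self.done and os.path.getsize(self.cache_path) < offset + size:
                time.sleep(0.05)
            with open(self.cache_path, "rb") as f:
                f.seek(offset)
                return f.read(size)
    ```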

  10. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    Science.gov (United States)

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraint of estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was

  11. Population genetic structure and its implications for adaptive variation in memory and the hippocampus on a continental scale in food-caching black-capped chickadees.

    Science.gov (United States)

    Pravosudov, V V; Roth, T C; Forister, M L; Ladage, L D; Burg, T M; Braun, M J; Davidson, B S

    2012-09-01

    Food-caching birds rely on stored food to survive the winter, and spatial memory has been shown to be critical in successful cache recovery. Both spatial memory and the hippocampus, an area of the brain involved in spatial memory, exhibit significant geographic variation linked to climate-based environmental harshness and the potential reliance on food caches for survival. Such geographic variation has been suggested to have a heritable basis associated with differential selection. Here, we ask whether population genetic differentiation and potential isolation among multiple populations of food-caching black-capped chickadees is associated with differences in memory and hippocampal morphology by exploring population genetic structure within and among groups of populations that are divergent to different degrees in hippocampal morphology. Using mitochondrial DNA and 583 AFLP loci, we found that population divergence in hippocampal morphology is not significantly associated with neutral genetic divergence or geographic distance, but instead is significantly associated with differences in winter climate. These results are consistent with variation in a history of natural selection on memory and hippocampal morphology that creates and maintains differences in these traits regardless of population genetic structure and likely associated gene flow. Published 2012. This article is a US Government work and is in the public domain in the USA.

  12. Color image segmentation using perceptual spaces through applets ...

    African Journals Online (AJOL)

    Color image segmentation using perceptual spaces through applets for determining and preventing diseases in chili peppers. JL González-Pérez, MC Espino-Gudiño, J Gudiño-Bazaldúa, JL Rojas-Rentería, V Rodríguez-Hernández, VM Castaño ...

  13. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    Science.gov (United States)

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize it, account for its impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ± 12.84; 70% female) recruited from a HF clinic completed the CACHS in 2014, and the results were evaluated using classical test theory and item response theory. Items would be deleted for low (.95) endorsement, low (<.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels (<.5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance eight items; monitoring seven items; and management five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05) and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.

  14. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis allowing for more robust system characterization, including indices such as the 100 year flood return period. These results are significant for water quality management as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport affecting the downstream water bodies of the Sacramento River and Sacramento–San Joaquin Delta which provide drinking water to over 25 million people.

  15. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  16. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
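
    The eviction idea, using source-assigned expiration timestamps ahead of arrival order, can be sketched as below; the data layout, class name and capacity fallback are invented for the example and are not C-SPARQL's or the paper's actual interfaces.

    ```python
    # Toy sketch of an order-aware cache that evicts on expiration time
    # rather than arrival time alone. Each item carries both timestamps;
    # expired items are dropped first, then the oldest arrivals.
    import heapq, time

    class ExpiringStreamCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.by_expiry = []          # min-heap of (expires_at, key)
            self.items = {}              # key -> (arrived_at, value)

        def insert(self, key, value, expires_at):
            heapq.heappush(self.by_expiry, (expires_at, key))
            self.items[key] = (time.time(), value)
            self._evict()

        def _evict(self):
            now = time.time()
            # First drop anything whose source-assigned expiration has passed.
            # (Stale heap entries for already-removed keys are popped harmlessly.)
            while self.by_expiry and self.by_expiry[0][0] <= now:
                _, key = heapq.heappop(self.by_expiry)
                self.items.pop(key, None)
            # Then fall back to arrival order if still over capacity.
            while len(self.items) > self.capacity:
                oldest = min(self.items, key=lambda k: self.items[k][0])
                self.items.pop(oldest)

    cache = ExpiringStreamCache(capacity=2)
    cache.insert("s1", ("sensor", "reads", 21.5), expires_at=time.time() + 60)
    ```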

  17. Black Males and Television: New Images Versus Old Stereotypes.

    Science.gov (United States)

    Douglas, Robert L.

    1987-01-01

    This paper focuses on historic portrayal of black males in service and support roles in the media and their relation to social reality. Both television and films use glamorous sophisticated trappings seemingly to enhance the image of black males, but the personalities of the characters they play remain stereotypic. (VM)

  18. MSCT versus CBCT: evaluation of high-resolution acquisition modes for dento-maxillary and skull-base imaging

    Energy Technology Data Exchange (ETDEWEB)

Dillenseger, Jean-Philippe; Goetz, Christian [Hopitaux Universitaires de Strasbourg, Imagerie Preclinique-UF6237, Pole d'imagerie, Strasbourg (France); Universite de Strasbourg, Icube, equipe MMB, CNRS, Strasbourg (France); Universite de Strasbourg, Federation de Medecine Translationnelle de Strasbourg, Faculte de Medecine, Strasbourg (France); Matern, Jean-Francois [Hopitaux Universitaires de Strasbourg, Imagerie Preclinique-UF6237, Pole d'imagerie, Strasbourg (France); Universite de Strasbourg, Federation de Medecine Translationnelle de Strasbourg, Faculte de Medecine, Strasbourg (France); Gros, Catherine-Isabelle; Bornert, Fabien [Universite de Strasbourg, Federation de Medecine Translationnelle de Strasbourg, Faculte de Medecine, Strasbourg (France); Universite de Strasbourg, Faculte de Chirurgie Dentaire, Strasbourg (France); Le Minor, Jean-Marie [Universite de Strasbourg, Icube, equipe MMB, CNRS, Strasbourg (France); Universite de Strasbourg, Federation de Medecine Translationnelle de Strasbourg, Faculte de Medecine, Strasbourg (France); Universite de Strasbourg, Institut d'Anatomie Normale, Strasbourg (France); Constantinesco, Andre [Hopitaux Universitaires de Strasbourg, Imagerie Preclinique-UF6237, Pole d'imagerie, Strasbourg (France); Choquet, Philippe [Hopitaux Universitaires de Strasbourg, Imagerie Preclinique-UF6237, Pole d'imagerie, Strasbourg (France); Universite de Strasbourg, Icube, equipe MMB, CNRS, Strasbourg (France); Universite de Strasbourg, Federation de Medecine Translationnelle de Strasbourg, Faculte de Medecine, Strasbourg (France); Hopital de Hautepierre, Imagerie Preclinique, Biophysique et Medecine Nucleaire, Strasbourg Cedex (France)

    2014-09-24

    Our aim was to conduct a quantitative and qualitative evaluation of high-resolution skull-bone imaging for dentistry and otolaryngology using different architectures of recent X-ray computed tomography systems. Three multi-slice computed tomography (MSCT) systems and one Cone-beam computed tomography (CBCT) system were used in this study. All apparatuses were tested with installed acquisition modes and proprietary reconstruction software enabling high-resolution bone imaging. Quantitative analyses were performed with small fields of view with the preclinical vmCT phantom, which permits to measure spatial resolution, geometrical accuracy, linearity and homogeneity. Ten operators performed visual qualitative analyses on the vmCT phantom images, and on dry human skull images. Quantitative analysis showed no significant differences between protocols in terms of linearity and geometric accuracy. All MSCT systems present a better homogeneity than the CBCT. Both quantitative and visual analyses demonstrate that CBCT acquisitions are not better than the collimated helical MSCT mode. Our results demonstrate that current high-resolution MSCT protocols could exceed the performance of a previous generation CBCT system for spatial resolution and image homogeneity. (orig.)

  19. MSCT versus CBCT: evaluation of high-resolution acquisition modes for dento-maxillary and skull-base imaging

    International Nuclear Information System (INIS)

    Dillenseger, Jean-Philippe; Goetz, Christian; Matern, Jean-Francois; Gros, Catherine-Isabelle; Bornert, Fabien; Le Minor, Jean-Marie; Constantinesco, Andre; Choquet, Philippe

    2015-01-01

    Our aim was to conduct a quantitative and qualitative evaluation of high-resolution skull-bone imaging for dentistry and otolaryngology using different architectures of recent X-ray computed tomography systems. Three multi-slice computed tomography (MSCT) systems and one Cone-beam computed tomography (CBCT) system were used in this study. All apparatuses were tested with installed acquisition modes and proprietary reconstruction software enabling high-resolution bone imaging. Quantitative analyses were performed with small fields of view with the preclinical vmCT phantom, which permits to measure spatial resolution, geometrical accuracy, linearity and homogeneity. Ten operators performed visual qualitative analyses on the vmCT phantom images, and on dry human skull images. Quantitative analysis showed no significant differences between protocols in terms of linearity and geometric accuracy. All MSCT systems present a better homogeneity than the CBCT. Both quantitative and visual analyses demonstrate that CBCT acquisitions are not better than the collimated helical MSCT mode. Our results demonstrate that current high-resolution MSCT protocols could exceed the performance of a previous generation CBCT system for spatial resolution and image homogeneity. (orig.)

  20. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

… framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained using this framework. The obtained algorithms were efficient but not optimal. In this paper, we improve the framework to obtain algorithms with the optimal I/O complexity of O(sortp(N) + K/PB) for a number of problems on axis-aligned objects; P denotes the number of cores/processors, B denotes the number of elements that fit in a cache line, N and K denote the sizes of the input and output, respectively, and sortp(N) denotes the I/O complexity of sorting N items using P processors in the PEM model …

  1. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

In this paper we present an implicit dictionary with the working set property, i.e. a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and …

  2. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    Science.gov (United States)

    1979-02-01

classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which … additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water … project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  3. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

This thesis presents a uniquely designed, high-performance I/O stack that minimizes end-to-end interference across multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  4. Neuroanatomy of the vmPFC and dlPFC predicts individual differences in cognitive regulation during dietary self-control across regulation strategies.

    Science.gov (United States)

    Schmidt, Liane; Tusche, Anita; Manoharan, Nicolas; Hutcherson, Cendri; Hare, Todd; Plassmann, Hilke

    2018-06-04

    Making healthy food choices is challenging for many people. Individuals differ greatly in their ability to follow health goals in the face of temptation, but it is unclear what underlies such differences. Using voxel-based morphometry (VBM), we investigated in healthy humans (i.e., men and women) links between structural variation in gray matter volume and individuals' level of success in shifting toward healthier food choices. We combined MRI and choice data into a joint dataset by pooling across three independent studies that employed a task prompting participants to explicitly focus on the healthiness of food items before making their food choices. Within this dataset, we found that individual differences in gray matter volume in the ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC) predicted regulatory success. We extended and confirmed these initial findings by predicting regulatory success out of sample and across tasks in a second dataset requiring participants to apply a different regulation strategy that entailed distancing from cravings for unhealthy, appetitive foods. Our findings suggest that neuroanatomical markers in the vmPFC and dlPFC generalized to different forms of dietary regulation strategies across participant groups. They provide novel evidence that structural differences in neuroanatomy of two key regions for valuation and its control, the vmPFC and dlPFC, predict an individual's ability to exert control in dietary choices. SIGNIFICANCE STATEMENT Dieting involves regulating food choices in order to eat healthier foods and fewer unhealthy foods. People differ dramatically in their ability to achieve or maintain this regulation, but it is unclear why. Here, we show that individuals with more gray matter volume in the dorsolateral and ventromedial prefrontal cortex are better at exercising dietary self-control. This relationship was observed across four different studies examining two different forms of dietary

  5. Temperature and Discharge on a Highly Altered Stream in Utah's Cache Valley

    OpenAIRE

    Pappas, Andy

    2013-01-01

    To study the River Continuum Concept (RCC) and the Serial Discontinuity Hypothesis (SDH), I looked at temperature and discharge changes along 52 km of the Little Bear River in Cache Valley, Utah. The Little Bear River is a fourth order stream with one major reservoir, a number of irrigation diversions, and one major tributary, the East Fork of the Little Bear River. Discharge data was collected at six sites on 29 September 2012 and temperature data was collected hourly at eleven sites from 1 ...

  6. Moving to a total VM environment

    International Nuclear Information System (INIS)

    Johnston, T.Y.

    1981-01-01

The Stanford Linear Accelerator Center is a single purpose laboratory operated by Stanford University for the Department of Energy. Its mission is to do research in High Energy (particle) physics. This research involves the use of large and complex electronic detectors. Each of these detectors is a multi-million dollar device. A part of each detector is a computer for process control and data logging. Most detectors at SLAC now use VAX 11/780s for this purpose. Most detectors record digital data via this process control computer. Consequently, physics today is not bounded by the cost of analog to digital conversion as it was in the past, and the physicist is able to run larger experiments than were feasible a decade ago. Today a medium sized experiment will produce several hundred full reels of 6250 BPI tape whereas a large experiment is a couple of thousand reels. The raw data must first be transformed into physics events using data transformation programs. The physicists then use subsets of the data to understand what went on. The subset may be anywhere from a few megabytes to 5 or 6 gigabytes of data (30 or 40 full reels of tape). This searching would be best solved interactively (if computers and I/O devices were fast enough). Instead what we find are very dynamic batch programs that are generally changed every run. The result is that on any day there are probably around 50 to 100 physicists interacting with a half dozen different experiments who are causing us to mount around 750 to 1000 tapes a day. This has been the style of computing for the last decade. Our going to VM is part of our effort to change this style of computing and to make physics computing more effective

  7. Using dCache in Archiving Systems oriented to Earth Observation

    Science.gov (United States)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

The object of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and assessment of different archiving technologies mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular dCache, which aims to provide a file system tree view of the data repository, exchanging this data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot spot determination and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct access storage space. Data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to solve the requirements of big computer centers and universities with big amounts of data, putting their efforts together and founding EMI (European Middleware Initiative). At the moment, dCache is mature enough to be implemented, being used by several research centers of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capacity, over a simulated environment, to get in line with the ESA requirements for a geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as the way to provide maximum quality for storage and dissemination services at minimum cost.

  8. Behavior characterization of the shared last-level cache in a chip multiprocessor

    OpenAIRE

    Benedicte Illescas, Pedro

    2014-01-01

This project consists in analyzing different aspects of the memory hierarchy and understanding their influence on overall system performance. The aspects analyzed are cache replacement algorithms, memory mapping schemes and memory page policies.

  9. Nasopharyngeal Cancers: Which Method Should be Used to Measure these Irregularly Shaped Tumors on Cross-Sectional Imaging?

    International Nuclear Information System (INIS)

    King, Ann D.; Zee, Benny; Yuen, Edmund H.Y.; Leung Singfai; Yeung, David K.W.; Ma, Brigette B.; Wong, Jeffrey K.T.; Kam, Michael K.M.; Ahuja, Anil T.; Chan, Anthony T.C.

    2007-01-01

Purpose: To determine whether the standard techniques of measuring tumor size and change in size after treatment could be applied to the measurement of nasopharyngeal cancers, which are often irregular in shape. Methods and Materials: The standard measurements of bidimensional (BDM) (World Health Organization criteria) and unidimensional (UDM) (Response Evaluation Criteria in Solid Tumors [RECIST] criteria), together with the maximum depth of the tumor perpendicular to the pharyngeal wall (DM), were acquired from axial magnetic resonance images of primary nasopharyngeal carcinoma in 44 patients at diagnosis and in 29 of these patients after treatment. Tumor volume measurements (VM), acquired from the summation of areas from the axial magnetic resonance images, were used as the reference standard. Results: There was a significant association between VM and BDM with respect to tumor size at diagnosis (p = 0.002), absolute change in tumor size after treatment (p < 0.001), and percentage change in tumor size after treatment (p = 0.044), but not between VM and UDM. There was also a significant association between VM and DM with respect to percentage change in tumor size after treatment (p < 0.0001) but not absolute change (p = 0.222). Conclusion: When using simple measurements to assess irregularly shaped nasopharyngeal cancers, the BDM should be used to measure size at diagnosis and the BDM and percentage change in size with treatment. Unidimensional measurement does not reflect size or change in size, and therefore the RECIST criteria may not be applicable to all tumor shapes. The use of DM requires further evaluation.

  10. Optimizing VM allocation and data placement for data-intensive applications in cloud using ACO metaheuristic algorithm

    Directory of Open Access Journals (Sweden)

    T.P. Shabeera

    2017-04-01

Full Text Available Nowadays data-intensive applications for processing big data are being hosted in the cloud. Since the cloud environment provides virtualized resources for computation, and data-intensive applications require communication between the computing nodes, the placement of Virtual Machines (VMs) and the location of data affect the overall computation time. The majority of the research work reported in the current literature considers the selection of physical nodes for placing data and VMs as independent problems. This paper proposes an approach which considers VM placement and data placement hand in hand. The primary objective is to reduce cross-network traffic and bandwidth usage by placing the required number of VMs and the data in Physical Machines (PMs) which are physically close. The VM and data placement problem (referred to as the MinDistVMDataPlacement problem) is defined in this paper and has been proved to be NP-hard. This paper presents and evaluates a metaheuristic algorithm based on Ant Colony Optimization (ACO), which selects a set of adjacent PMs for placing data and VMs. Data is distributed in the physical storage devices of the selected PMs. According to the processing capacity of each PM, a set of VMs is placed on these PMs to process the data stored in them. We use simulation to evaluate our algorithm. The results show that the proposed algorithm selects PMs in close proximity, and the jobs executed in the VMs allocated by the proposed scheme outperform other allocation schemes.
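
    For flavor, the following sketch shows the generic ACO selection loop that such an allocator could build on: ants choose PMs with probability weighted by pheromone and a distance heuristic, and pheromone is reinforced along the best selection. The cost model and all parameters are illustrative, not the paper's MinDistVMDataPlacement formulation.

    ```python
    # Generic ACO skeleton for picking k physically close PMs, in the spirit
    # of pheromone-guided placement. dist is a symmetric PM-to-PM distance
    # matrix; all parameters (alpha, beta, rho, n_ants) are illustrative.
    import random

    def aco_select_pms(dist, k, n_ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.1):
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]          # pheromone trails
        best, best_cost = None, float("inf")
        for _ in range(iters):
            for _ in range(n_ants):
                chosen = [random.randrange(n)]
                while len(chosen) < k:
                    cur = chosen[-1]
                    cand = [j for j in range(n) if j not in chosen]
                    # Weight: pheromone^alpha * (1/distance)^beta.
                    w = [tau[cur][j] ** alpha * (1.0 / (dist[cur][j] + 1e-9)) ** beta
                         for j in cand]
                    chosen.append(random.choices(cand, weights=w)[0])
                cost = sum(dist[a][b] for a in chosen for b in chosen if a < b)
                if cost < best_cost:
                    best, best_cost = chosen, cost
            # Evaporate everywhere, then reinforce the best selection found.
            tau = [[(1 - rho) * t for t in row] for row in tau]
            for a in best:
                for b in best:
                    if a != b:
                        tau[a][b] += 1.0 / (1.0 + best_cost)
        return best, best_cost

    # Tiny example: 5 PMs on a line; pick 3 mutually close ones.
    dist = [[abs(i - j) for j in range(5)] for i in range(5)]
    print(aco_select_pms(dist, k=3))
    ```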

  11. Design of low noise imaging system

    Science.gov (United States)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

In order to meet the needs of engineering applications for a low noise imaging system under the global shutter mode, a complete imaging system is designed based on the SCMOS (Scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on the analysis of key indexes and technologies of the imaging system, the paper makes the chip selection and adopts SCMOS + FPGA + DDRII + Camera Link as the processing architecture. It then introduces the entire system workflow and the design of the power supply and distribution unit. As for the software system, which consists of the SCMOS control module, image acquisition module, data cache control module and transmission control module, the paper designs it in the Verilog language and drives it to work properly on a Xilinx FPGA. The imaging experimental results show that the imaging system exhibits a 2560 × 2160 pixel resolution and has a maximum frame frequency of 50 fps. The imaging quality of the system satisfies the requirement of the index.

  12. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    Energy Technology Data Exchange (ETDEWEB)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M

    2004-07-05

Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of ¹⁸O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  13. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    International Nuclear Information System (INIS)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of ¹⁸O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  14. The relationship of VOI threshold, volume and B/S on DISA images

    International Nuclear Information System (INIS)

    Song Liejing; Wang Mingming; Si Hongwei; Li Fei

    2011-01-01

Objective: To explore the relationship between VOI threshold, volume and B/S on DISA phantom images. Methods: Ten hollow spheres were placed in a cylinder phantom. According to B/S ratios of 1:7, 1:5 and 1:4, ⁹⁹ᵐTcO₄⁻ and ¹⁸F-FDG were filled into the container and spheres simultaneously and separately. Images were acquired by DISA and SIDA protocols. The volume of interest (VOI) for each sphere was analyzed by the threshold method, and an expression was fitted individually to validate the relationship. Results: The equation for the estimation of the optimal threshold was Tm = d + c × Bm/(e + f × Vm) + b/Vm. For the majority of the data, the calculated threshold was in the 1% interval that the optimal threshold was really in; those that were not were in the adjacent lower or upper intervals. Conclusions: For both DISA and SIDA images, based on the relationship between VOI threshold, volume, B/S and real volume, this method could accurately calculate the optimal threshold with an error of less than 1% for spheres whose volumes ranged from 3.3 to 30.8 ml. (authors)
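
    Read literally, the fitted relationship can be evaluated as follows; the coefficient values below are placeholders, since the abstract reports the functional form but the fitted constants b through f depend on the acquisition protocol.

    ```python
    # Evaluate the fitted optimal-threshold relationship
    #   T_m = d + c * B_m / (e + f * V_m) + b / V_m
    # for a sphere of volume V_m (ml) and background-to-sphere ratio B_m.
    # Coefficient values here are placeholders, not the paper's fits.
    def optimal_threshold(Vm, Bm, b=5.0, c=30.0, d=20.0, e=1.0, f=0.5):
        return d + c * Bm / (e + f * Vm) + b / Vm

    for Vm in (3.3, 10.0, 30.8):
        print(f"V = {Vm:5.1f} ml -> threshold ~ {optimal_threshold(Vm, Bm=1/7):.1f}%")
    ```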

  15. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  16. Caching-Aided Collaborative D2D Operation for Predictive Data Dissemination in Industrial IoT

    OpenAIRE

    Orsino, Antonino; Kovalchukov, Roman; Samuylov, Andrey; Moltchanov, Dmitri; Andreev, Sergey; Koucheryavy, Yevgeni; Valkama, Mikko

    2018-01-01

    Industrial automation deployments constitute challenging environments where moving IoT machines may produce high-definition video and other heavy sensor data during surveying and inspection operations. Transporting massive contents to the edge network infrastructure and then eventually to the remote human operator requires reliable and high-rate radio links supported by intelligent data caching and delivery mechanisms. In this work, we address the challenges of contents dissemination in chara...

  17. Agricultural Influences on Cache Valley, Utah Air Quality During a Wintertime Inversion Episode

    Science.gov (United States)

    Silva, P. J.

    2017-12-01

    Several of northern Utah's intermountain valleys are classified as non-attainment for fine particulate matter. Past data indicate that ammonium nitrate is the major contributor to fine particles and that the gas phase ammonia concentrations are among the highest in the United States. During the 2017 Utah Winter Fine Particulate Study, USDA brought a suite of online and real-time measurement methods to sample particulate matter and potential gaseous precursors from agricultural emissions in the Cache Valley. Instruments were co-located at the State of Utah monitoring site in Smithfield, Utah from January 21st through February 12th, 2017. A Scanning mobility particle sizer (SMPS) and aerodynamic particle sizer (APS) acquired size distributions of particles from 10 nm - 10 μm in 5-min intervals. A URG ambient ion monitor (AIM) gave hourly concentrations for gas and particulate ions and a Chromatotec Trsmedor gas chromatograph obtained 10 minute measurements of gaseous sulfur species. High ammonia concentrations were detected at the Smithfield site with concentrations above 100 ppb at times, indicating a significant influence from agriculture at the sampling site. Ammonia is not the only agricultural emission elevated in Cache Valley during winter, as reduced sulfur gas concentrations of up to 20 ppb were also detected. Dimethylsulfide was the major sulfur-containing gaseous species. Analysis indicates that particle growth and particle nucleation events were both observed by the SMPS. Relationships between gas and particulate concentrations and correlations between the two will be discussed.

  18. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

WMS GetMap requests by setting additional TIME parameter values in the request. The values for the parameter represent an interval defined by its lower and upper bounds. As the WMS time standard only supports one time variable, only the start times of the images are considered. If no time values are submitted with the request, the full time range of all images is assumed as the default.

    Dynamic single image WMS: To compare images from different acquisition times at sites of multiple coverage, we have to load every image as a single WMS layer. Due to the vast amount of single images we need a way to set up the layers dynamically - the map server does not know the images to be served beforehand. We use the MapScript interface to dynamically access MapServer's objects and configure the file name and path of the requested image in the map configuration. The layers are created on-the-fly, each representing only one single image. On the frontend side, the vendor-specific WMS request parameter (PRODUCTID) has to be appended to the regular set of WMS parameters. The request is then passed on to the MapScript instance.

    Web Map Tile Cache: In order to speed up access for the WMS requests, a MapCache instance has been integrated in the pipeline. As it is not aware of the available PDS product IDs which will be queried, the PRODUCTID parameter is configured as an additional dimension of the cache. The WMS request is received by the Apache webserver configured with the MapCache module. If the tile is available in the tile cache, it is immediately committed to the client. If not available, the tile request is forwarded to Apache and the MapScript module. The Python script intercepts the WMS request and extracts the product ID from the parameter chain. It loads the layer object from the map file and appends the file name and path of the inquired image. After some possible further image processing inside the script (stretching, color matching), the request is submitted to the Map
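
    As a concrete illustration of the request shape discussed above, the snippet below assembles a WMS GetMap URL carrying both the standard TIME interval and the vendor-specific PRODUCTID parameter; the endpoint, layer name, bounding box and product ID are hypothetical.

    ```python
    # Build a WMS 1.1.1 GetMap request with a TIME interval plus the
    # vendor-specific PRODUCTID parameter described above. The endpoint,
    # layer and product ID below are made up for illustration.
    from urllib.parse import urlencode

    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "mars_single_images",       # hypothetical single-image layer
        "SRS": "EPSG:4326", "BBOX": "137.0,-6.0,138.0,-5.0",
        "WIDTH": "512", "HEIGHT": "512", "FORMAT": "image/png",
        "TIME": "2008-01-01/2010-12-31",      # lower/upper bound of start times
        "PRODUCTID": "P22_009816_1745",       # hypothetical PDS product ID
    }
    url = "https://example.org/imars/wms?" + urlencode(params)
    print(url)
    ```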

  19. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
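
    The C-SCAN ordering at the heart of the proposed scheduler can be sketched in a few lines; a real drive scheduler would additionally model the CD-ROM's constant-linear-velocity seek profile, which this toy version ignores.

    ```python
    # Minimal C-SCAN (circular SCAN) ordering: service requests at or ahead
    # of the head position in increasing order, then wrap to the lowest
    # address and continue. Ignores CLV seek-time modeling of real drives.
    def c_scan_order(pending_blocks, head):
        ahead  = sorted(b for b in pending_blocks if b >= head)
        behind = sorted(b for b in pending_blocks if b < head)
        return ahead + behind     # wrap around after reaching the outer edge

    print(c_scan_order([95, 180, 34, 119, 11, 123, 62, 64], head=50))
    # -> [62, 64, 95, 119, 123, 180, 11, 34]
    ```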

  20. Reduced prefrontal connectivity in psychopathy.

    Science.gov (United States)

    Motzkin, Julian C; Newman, Joseph P; Kiehl, Kent A; Koenigs, Michael

    2011-11-30

    Linking psychopathy to a specific brain abnormality could have significant clinical, legal, and scientific implications. Theories on the neurobiological basis of the disorder typically propose dysfunction in a circuit involving ventromedial prefrontal cortex (vmPFC). However, to date there is limited brain imaging data to directly test whether psychopathy may indeed be associated with any structural or functional abnormality within this brain area. In this study, we employ two complementary imaging techniques to assess the structural and functional connectivity of vmPFC in psychopathic and non-psychopathic criminals. Using diffusion tensor imaging, we show that psychopathy is associated with reduced structural integrity in the right uncinate fasciculus, the primary white matter connection between vmPFC and anterior temporal lobe. Using functional magnetic resonance imaging, we show that psychopathy is associated with reduced functional connectivity between vmPFC and amygdala as well as between vmPFC and medial parietal cortex. Together, these data converge to implicate diminished vmPFC connectivity as a characteristic neurobiological feature of psychopathy.

  1. Postglacial Rebound and Current Ice Loss Estimates from Space Geodesy: The New ICE-6G (VM5a) Global Model

    Science.gov (United States)

    Peltier, W. R.; Argus, D.; Drummond, R.; Moore, A. W.

    2012-12-01

    We compare, on a global basis, estimates of site velocity against predictions of the newly constructed postglacial rebound model ICE-6G (VM5a). This model is fit to observations of North American postglacial rebound, thereby demonstrating that the ice sheet at last glacial maximum must have been, relative to ICE-5G, thinner in southern Manitoba, thinner near Yellowknife (Northwest Territories), thicker in eastern and southern Quebec, and thicker along the British Columbia-Alberta border. The GPS-based estimates of site velocity that we employ are more accurate than were previously available because they are based on GPS estimates of position as a function of time determined by incorporating satellite phase center variations [Desai et al. 2011]. These GPS estimates are constraining postglacial rebound in North America and Europe more tightly than ever before. In particular, given the high density of GPS sites in North America, and the fact that the velocity of the mass center (CM) of Earth is also more tightly constrained, the new model much more strongly constrains both the lateral extent of the proglacial forebulge and the rate at which this peripheral bulge (that was emplaced peripheral to the late Pleistocene Laurentide ice sheet) is presently collapsing. This fact proves to be important to the more accurate inference of the current rate of ice loss from both Greenland and Alaska based upon the time-dependent gravity observations being provided by the GRACE satellite system. In West Antarctica we have also been able to significantly revise the previously prevalent ICE-5G deglaciation history so as to enable its predictions to be optimally consistent with GPS site velocities determined by connecting campaign WAGN measurements to those provided by observations from the permanent ANET sites. Ellsworth Land (south of the Antarctic peninsula) is observed to be rising at 6 ± 3 mm/yr according to our latest analyses; the Ellsworth mountains themselves are observed to be

  2. Virtual Machine Logbook - Enabling virtualization for ATLAS

    International Nuclear Information System (INIS)

    Yao Yushu; Calafiura, Paolo; Leggett, Charles; Poffet, Julien; Cavalli, Andrea; Frederic, Bapst

    2010-01-01

    ATLAS software has been developed mostly on the CERN linux cluster lxplus or on similar facilities at the experiment Tier 1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project we are developing a suite of tools and CernVM plug-in extensions to promote the use of virtualization for ATLAS analysis and software development. The Virtual Machine Logbook (VML), in particular, is an application to organize work of physicists on multiple projects, logging their progress, and speeding up ''context switches'' from one project to another. An important feature of VML is the ability to share with a single 'click' the status of a given project with other colleagues. VML builds upon the save and restore capabilities of mainstream virtualization software like VMware, and provides a technology-independent client interface to them. A lot of emphasis in the design and implementation has gone into optimizing the save and restore process to make it practical to store many VML entries on a typical laptop disk or to share a VML entry over the network. At the same time, taking advantage of CernVM's plugin capabilities, we are extending the CernVM platform to help increase the usability of ATLAS software. For example, we added the ability to start the ATLAS event display on any computer running CernVM simply by clicking a button in a web browser. We want to integrate VML seamlessly with CernVM's unique file system design to distribute ATLAS software efficiently to every physicist's computer. The CernVM File System (CVMFS) downloads files on demand via HTTP and caches them locally for future use. This reduces download sizes by an order of magnitude, making it practical for a developer to work with multiple software releases on a virtual machine.
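
    As a toy illustration of the download-on-demand and local-cache behaviour described (not the real CVMFS client; the URL and cache directory are placeholders):

        import os
        import urllib.request

        # Toy sketch: fetch a file over HTTP the first time it is needed
        # and serve it from the local cache on every later access.
        def cached_fetch(base_url, path, cache_dir="/tmp/toy-cache"):
            local = os.path.join(cache_dir, path.lstrip("/"))
            if not os.path.exists(local):                    # cache miss
                os.makedirs(os.path.dirname(local), exist_ok=True)
                urllib.request.urlretrieve(base_url + path, local)
            return local                                     # cache hit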

  3. The Antarctica component of postglacial rebound model ICE-6G_C (VM5a) based on GPS positioning, exposure age dating of ice thicknesses, and relative sea level histories

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. R.; Drummond, R.; Moore, Angelyn W.

    2014-07-01

    A new model of the deglaciation history of Antarctica over the past 25 kyr has been developed, which we refer to herein as ICE-6G_C (VM5a). This revision of its predecessor ICE-5G (VM2) has been constrained to fit all available geological and geodetic observations, consisting of: (1) the present day uplift rates at 42 sites estimated from GPS measurements, (2) ice thickness change at 62 locations estimated from exposure-age dating, (3) Holocene relative sea level histories from 12 locations estimated on the basis of radiocarbon dating and (4) age of the onset of marine sedimentation at nine locations along the Antarctic shelf also estimated on the basis of 14C dating. Our new model fits the totality of these data well. An additional nine GPS-determined site velocities are also estimated for locations known to be influenced by modern ice loss from the Pine Island Bay and Northern Antarctic Peninsula regions. At the 42 locations not influenced by modern ice loss, the quality of the fit of postglacial rebound model ICE-6G_C (VM5A) is characterized by a weighted root mean square residual of 0.9 mm yr-1. The Southern Antarctic Peninsula is inferred to be rising at 2 mm yr-1, requiring there to be less Holocene ice loss there than in the prior model ICE-5G (VM2). The East Antarctica coast is rising at approximately 1 mm yr-1, requiring ice loss from this region to have been small since Last Glacial Maximum. The Ellsworth Mountains, at the base of the Antarctic Peninsula, are inferred to be rising at 5-8 mm yr-1, indicating large ice loss from this area during deglaciation that is poorly sampled by geological data. Horizontal deformation of the Antarctic Plate is minor with two exceptions. First, O'Higgins, at the tip of the Antarctic Peninsula, is moving southeast at a significant 2 mm yr-1 relative to the Antarctic Plate. Secondly, the margins of the Ronne and Ross Ice Shelves are moving horizontally away from the shelf centres at an approximate rate of 0.8 mm yr-1, in

  4. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    Science.gov (United States)

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of such huge amounts of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome those problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.

  5. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  6. Simulation modeling of cloud computing for smart grid using CloudSim

    Directory of Open Access Journals (Sweden)

    Sandeep Mehmi

    2017-05-01

    Full Text Available In this paper a smart grid cloud has been simulated using CloudSim. Various parameters like the number of virtual machines (VMs), VM image size, VM RAM, VM bandwidth, and cloudlet length, and their effect on cost and cloudlet completion time in time-shared and space-shared resource allocation policies, have been studied. As the number of cloudlets increased from 68 to 178, a greater number of cloudlets completed their execution, with higher cloudlet completion times in the time-shared allocation policy as compared to the space-shared allocation policy. A similar trend has been observed when VM bandwidth is increased from 1 Gbps to 10 Gbps and VM RAM is increased from 512 MB to 5120 MB. The cost of processing increased linearly with respect to the increase in the number of VMs, VM image size and cloudlet length.
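
    CloudSim itself is written in Java; the toy Python model below merely illustrates why equal-length cloudlets finish later under a time-shared policy than under a space-shared one on a single-core VM (all numbers illustrative):

        # Space-shared: cloudlets run one after another at full speed.
        def space_shared(lengths, mips):
            times, t = [], 0.0
            for length in lengths:
                t += length / mips
                times.append(t)
            return times

        # Time-shared: all cloudlets share the core, so each equal-length
        # cloudlet finishes only at the very end (exact for equal lengths).
        def time_shared(lengths, mips):
            n = len(lengths)
            return [n * length / mips for length in lengths]

        lengths = [1000.0] * 4                  # four cloudlets, 1000 MI each
        print(space_shared(lengths, 250.0))     # [4.0, 8.0, 12.0, 16.0]
        print(time_shared(lengths, 250.0))      # [16.0, 16.0, 16.0, 16.0]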

  7. Antitumor activity of the two epipodophyllotoxin derivatives VP-16 and VM-26 in preclinical systems: a comparison of in vitro and in vivo drug evaluation

    DEFF Research Database (Denmark)

    Jensen, P B; Roed, H; Skovsgaard, T

    1990-01-01

    doses on an optimal schedule in vivo and it has not been clarified as to whether a therapeutic difference exists between them. A prolonged schedule is optimal for both drugs; accordingly we determined the toxicity in mice using a 5-day schedule. The dose killing 10% of the mice (LD10) was 9.4 mg...... the increase in life span and the number of cures. The drugs were also compared in nude mice inoculated with human small-cell lung cancer lines OC-TOL and CPH-SCCL-123; however, they were more toxic to the nude mice and only a limited therapeutic effect was observed. In conclusion, the complete cross......-resistance between the two drugs suggests that they have an identical antineoplastic spectrum. VM-26 was more potent than VP-16 in vitro; however, this was not correlated to a therapeutic advantage for VM-26 over VP-16 in vivo....

  8. Secure and Practical Defense Against Code-Injection Attacks using Software Dynamic Translation

    Science.gov (United States)

    2006-06-16

    [Figure residue: diagram of the Strata VM fragment cache, showing application text translated into cached code fragments linked by trampolines, with context switches into the translator's fetch/decode loop.] ... and client configurations was motivated by our desire to measure the processor overhead imposed by the Strata VM. Providing the server twice as much
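
    From the title and the figure, the mechanism at work is a software-dynamic-translation fragment cache; the toy sketch below (not the Strata implementation) shows the basic translate-once, execute-from-cache loop:

        # Toy fragment cache: basic blocks are "translated" once, cached,
        # and chained; a cache miss is the context switch back into the
        # translator's fetch/decode loop.
        fragment_cache = {}

        def translate(program, pc):
            block = []
            while pc < len(program):          # fetch/decode until a branch
                instr = program[pc]
                block.append(instr)
                pc += 1
                if instr[0] == "jmp":
                    break
            return block

        def run_fragment(block, pc):
            for instr in block:
                pc += 1
                if instr[0] == "jmp":
                    return instr[1]           # branch target: next fragment
            return pc

        def execute(program, pc=0):
            while pc < len(program):
                if pc not in fragment_cache:  # miss: translate this block
                    fragment_cache[pc] = translate(program, pc)
                pc = run_fragment(fragment_cache[pc], pc)

        execute([("nop",), ("jmp", 3), ("nop",), ("nop",)])
        print(sorted(fragment_cache))         # two cached fragments: [0, 3]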

  9. Sequential cranial ultrasound and cerebellar diffusion weighted imaging contribute to the early prognosis of neurodevelopmental outcome in preterm infants.

    Directory of Open Access Journals (Sweden)

    Margaretha J Brouwer

    Full Text Available OBJECTIVE: To evaluate the contribution of sequential cranial ultrasound (cUS) and term-equivalent age magnetic resonance imaging (TEA-MRI) including diffusion weighted imaging (DWI) to the early prognosis of neurodevelopmental outcome in a cohort of very preterm infants (gestational age [GA] <31 weeks). STUDY DESIGN: In total, 93 preterm infants (median [range] GA in weeks: 28.3 [25.0-30.9]) were enrolled in this prospective cohort study and underwent early and term cUS as well as TEA-MRI including DWI. Early cUS abnormalities were classified as normal, mild, moderate or severe. Term cUS was evaluated for ex-vacuo ventriculomegaly (VM) and enlargement of the extracerebral cerebrospinal fluid (eCSF) space. Abnormalities on T1- and T2-weighted TEA-MRI were scored according to Kidokoro et al. Using DWI at TEA, apparent diffusion coefficients (ADCs) were measured in four white matter regions bilaterally and both cerebellar hemispheres. Neurodevelopmental outcome was assessed at two years' corrected age (CA) using the Bayley Scales of Infant and Toddler Development, third edition. Linear regression analysis was conducted to explore the correlation between the different neuroimaging modalities and outcome. RESULTS: Moderate/severe abnormalities on early cUS, ex-vacuo VM and enlargement of the eCSF space on term cUS and increased cerebellar ADC values on term DWI were independently associated with worse motor outcome (p<.05). Ex-vacuo VM on term cUS was also related to worse cognitive performance at two years' CA (p<.01). CONCLUSION: These data support the clinical value of sequential cUS and recommend repeating cUS at TEA. In particular, assessment of moderate/severe early cUS abnormalities and ex-vacuo VM on term cUS provides important prognostic information. Cerebellar ADC values may further aid in the prognostication of gross motor function.

  10. Acceptance of the 2014 V.M. Goldschmidt Award of the Geochemical Society by Timothy L. Grove

    Science.gov (United States)

    Grove, Timothy L.

    2015-06-01

    I am deeply honored to be the recipient of the 2014 V.M. Goldschmidt Award. Many of the past recipients of this award have been scientific heroes to me, and it is hard to express how it feels to be included in this distinguished group. My feelings run the full spectrum; from exhilaration and deep personal satisfaction for the recognition of the work that I have done, to humility and anxiety that maybe I am really not good enough to deserve this award. This is called impostor syndrome. You younger scientists should know that many of us, even those who appear very successful, still experience it - don't let it hold you back.

  11. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  12. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  13. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  14. Improved cache performance in Monte Carlo transport calculations using energy banding

    Science.gov (United States)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
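
    A minimal sketch of the banding idea under simplifying assumptions (a 1-D energy grid with piecewise-constant cross sections; not the authors' code): particles are processed band by band so that each band's slice of the lookup table stays cache-resident.

        import bisect

        # Process particles one energy band at a time so that only a small,
        # cache-sized slice of the cross-section table is touched per pass.
        def banded_lookups(energies, grid, values, band_edges):
            out = {}
            for lo, hi in zip(band_edges, band_edges[1:]):
                band = [e for e in energies if lo <= e < hi]
                for e in band:                           # reuse the hot slice
                    i = bisect.bisect_right(grid, e) - 1
                    out[e] = values[i]
            return out

        grid = [0.0, 1.0, 2.0, 4.0, 8.0]      # illustrative energy grid
        values = [5.0, 3.0, 2.0, 1.5, 1.0]    # illustrative cross sections
        print(banded_lookups([0.5, 3.0, 6.0], grid, values, [0.0, 2.0, 10.0]))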

  15. Security in the Cache and Forward Architecture for the Next Generation Internet

    Science.gov (United States)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

    The future Internet architecture will be composed predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities, since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping-stone as we build the future Internet.

  16. Know Thy Neighbor: Crypto Library Detection in Cloud

    Directory of Open Access Journals (Sweden)

    Irazoqui Gorka

    2015-04-01

    Full Text Available Software updates and security patches have become a standard method to fix known and recently discovered security vulnerabilities in deployed software. In server applications, outdated cryptographic libraries allow adversaries to exploit weaknesses and launch attacks with significant security consequences. The proposed technique exploits leakages at the hardware level to, first, determine whether or not a specific cryptographic library is running inside a co-located virtual machine (VM) and, second, discover the IP of the co-located target. To this end, we use a Flush+Reload cache side-channel technique to measure the time it takes to call (load) a cryptographic library function. Shorter loading times are indicative of the library already residing in memory and being shared by the VM manager through deduplication. We demonstrate the viability of the proposed technique by detecting and distinguishing various cryptographic libraries, including MatrixSSL, PolarSSL, GnuTLS, OpenSSL and CyaSSL, along with the IP of the VM running these libraries. In addition, we show how to differentiate between various versions of libraries to better select an attack target as well as the applicable exploit. Our experiments show a complete attack setup scenario with single-trial success rates of up to 90% under light load and up to 50% under heavy load for libraries running in KVM.
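
    The attack proper times individual cache lines with Flush+Reload (clflush plus a cycle-accurate timer, typically in C); the Python sketch below only illustrates the coarser load-time signal the abstract describes, with a purely illustrative threshold:

        import ctypes
        import ctypes.util
        import time

        # Time how long it takes to map a shared library; a short time
        # suggests its pages are already resident (e.g., shared through
        # memory deduplication with a co-located VM).
        def load_time_ns(library):
            start = time.perf_counter_ns()
            ctypes.CDLL(library)
            return time.perf_counter_ns() - start

        name = ctypes.util.find_library("crypto") or "libcrypto.so"
        t = load_time_ns(name)
        print(name, t, "ns:", "likely resident" if t < 200_000 else "likely cold")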

  17. A study on the effectiveness of lockup-free caches for a Reduced Instruction Set Computer (RISC) processor

    OpenAIRE

    Tharpe, Leonard.

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects on RISC performance of a lockup-free cache interface. RISC architectures achieve high performance by having a small, but sufficient, instruction set with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 CPI. The goal of RISC is to attain a CPI of 1.0. The major hind...

  18. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    Science.gov (United States)

    Harutyunyan, A.; Blomer, J.; Buncic, P.; Charalampidis, I.; Grey, F.; Karneyeu, A.; Larsen, D.; Lombraña González, D.; Lisec, J.; Segal, B.; Skands, P.

    2012-12-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence Protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.

  19. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Blomer, J; Buncic, P; Charalampidis, I; Grey, F; Karneyeu, A; Larsen, D; Lombraña González, D; Lisec, J; Segal, B; Skands, P

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence Protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.

  20. Volumetric modulated arc therapy and breath-hold in image-guided locoregional left-sided breast irradiation

    International Nuclear Information System (INIS)

    Osman, Sarah O.S.; Hol, Sandra; Poortmans, Philip M.; Essers, Marion

    2014-01-01

    Purpose: To investigate the effects of using volumetric modulated arc therapy (VMAT) and/or voluntary moderate deep inspiration breath-hold (vmDIBH) in the radiation therapy (RT) of left-sided breast cancer including the regional lymph nodes. Materials and methods: For 13 patients, four treatment combinations were compared; 3D-conformal RT (i.e., forward IMRT) in free-breathing 3D-CRT(FB), 3D-CRT(vmDIBH), 2 partial arcs VMAT(FB), and VMAT(vmDIBH). Prescribed dose was 42.56 Gy in 16 fractions. For 10 additional patients, 3D-CRT and VMAT in vmDIBH only were also compared. Results: Dose conformity, PTV coverage, ipsilateral and total lung doses were significantly better for VMAT plans compared to 3D-CRT. Mean heart dose (D_mean,heart) reduction in 3D-CRT(vmDIBH) was between 0.9 and 8.6 Gy, depending on initial D_mean,heart (in 3D-CRT(FB) plans). VMAT(vmDIBH) reduced the D_mean,heart further when D_mean,heart was still >3.2 Gy in 3D-CRT(vmDIBH). Mean contralateral breast dose was higher for VMAT plans (2.7 Gy) compared to 3D-CRT plans (0.7 Gy). Conclusions: VMAT and 3D-CRT(vmDIBH) significantly reduced heart dose for patients treated with locoregional RT of left-sided breast cancer. When D_mean,heart exceeded 3.2 Gy in 3D-CRT(vmDIBH) plans, VMAT(vmDIBH) resulted in a cumulative heart dose reduction. VMAT also provided better target coverage and reduced ipsilateral lung dose, at the expense of a small increase in the dose to the contralateral breast.

  1. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaicked into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees and shrubs (excluding Tamarix)) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture was used as an additional feature along with the color for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km² (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global positioning system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.
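
    The specific texture measures evaluated are not listed in this abstract; as one hedged illustration, a grey-level co-occurrence matrix (GLCM) contrast feature for a single classification window could be computed as below, assuming scikit-image (spelled greycomatrix/greycoprops in older releases) and arbitrary window, distance and angle choices:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # One texture feature for one classification window; the window
        # content here is random stand-in data.
        window = (np.random.rand(33, 33) * 255).astype(np.uint8)
        glcm = graycomatrix(window, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        print("GLCM contrast:", graycoprops(glcm, "contrast")[0, 0])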

  2. The People of Bear Hunter Speak: Oral Histories of the Cache Valley Shoshones Regarding the Bear River Massacre

    OpenAIRE

    Crawford, Aaron L.

    2007-01-01

    The Cache Valley Shoshone are the survivors of the Bear River Massacre, where a battle between a group of U.S. volunteer troops from California and a Shoshone village degenerated into the worst Indian massacre in U.S. history, resulting in the deaths of over 200 Shoshones. The massacre occurred due to increasing tensions over land use between the Shoshones and the Mormon settlers. Following the massacre, the Shoshones attempted settling in several different locations in Box Elder County, eventu...

  3. Clinical validation of semi-automated software for volumetric and dynamic contrast enhancement analysis of soft tissue venous malformations on magnetic resonance imaging examination

    Energy Technology Data Exchange (ETDEWEB)

    Caty, Veronique [Hopital Maisonneuve-Rosemont, Universite de Montreal, Department of Radiology, Montreal, QC (Canada); Kauffmann, Claude; Giroux, Marie-France; Oliva, Vincent; Therasse, Eric [Centre Hospitalier de l' Universite de Montreal (CHUM), Universite de Montreal and Research Centre, CHUM (CRCHUM), Department of Radiology, Montreal, QC (Canada); Dubois, Josee [Centre Hospitalier Universitaire Sainte-Justine et Universite de Montreal, Department of Radiology, Montreal, QC (Canada); Mansour, Asmaa [Institut de Cardiologie de Montreal, Heart Institute Coordinating Centre, Montreal, QC (Canada); Piche, Nicolas [Object Research System, Montreal, QC (Canada); Soulez, Gilles [Centre Hospitalier de l' Universite de Montreal (CHUM), Universite de Montreal and Research Centre, CHUM (CRCHUM), Department of Radiology, Montreal, QC (Canada); CHUM - Hopital Notre-Dame, Department of Radiology, Montreal, Quebec (Canada)

    2014-02-15

    To evaluate venous malformation (VM) volume and contrast-enhancement analysis on magnetic resonance imaging (MRI) compared with diameter evaluation. Baseline MRI was undertaken in 44 patients, 20 of whom were followed by MRI after sclerotherapy. All patients underwent short-tau inversion recovery (STIR) acquisitions and dynamic contrast assessment. VM diameters in three orthogonal directions were measured to obtain the largest and mean diameters. Volumetric reconstruction of VM was generated from two orthogonal STIR sequences and fused with acquisitions after contrast medium injection. Reproducibility (intraclass correlation coefficients [ICCs]) of diameter and volume measurements was estimated. VM size variations in diameter and volume after sclerotherapy and contrast enhancement before sclerotherapy were compared in patients with clinical success or failure. Inter-observer ICCs were similar for diameter and volume measurements at baseline and follow-up (range 0.87-0.99). Higher percentages of size reduction after sclerotherapy were observed with volume (32.6 ± 30.7 %) than with diameter measurements (14.4 ± 21.4 %; P = 0.037). Contrast enhancement values were estimated at 65.3 ± 27.5 % and 84 ± 13 % in patients with clinical failure and success respectively (P = 0.056). Venous malformation volume was as reproducible as diameter measurement and more sensitive in detecting therapeutic responses. Patients with better clinical outcome tend to have stronger malformation enhancement. (orig.)

  4. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For a more in-depth, scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available online.

  5. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  6. Performance Engineering for a Medical Imaging Application on the Intel Xeon Phi Accelerator

    OpenAIRE

    Hofmann, Johannes; Treibig, Jan; Hager, Georg; Wellein, Gerhard

    2013-01-01

    We examine the Xeon Phi, which is based on Intel's Many Integrated Cores architecture, for its suitability to run the FDK algorithm--the most commonly used algorithm to perform the 3D image reconstruction in cone-beam computed tomography. We study the challenges of efficiently parallelizing the application and means to enable sensible data sharing between threads despite the lack of a shared last level cache. Apart from parallelization, SIMD vectorization is critical for good performance on t...

  7. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows us to freely select the degree to which data locality is provided. It may be used to work in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during run 2. The system has proven to be easily usable, while providing substantial performance improvements. Since confirming the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.
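
    As a rough sketch of the coordination idea (not the authors' prototype), a coordinator can record which worker nodes hold which files and steer each job towards the node caching most of its inputs; all names are illustrative:

        # Minimal cache-aware placement: jobs go to the node that already
        # caches the largest share of their input files.
        class CacheCoordinator:
            def __init__(self, nodes):
                self.nodes = nodes
                self.cache = {}          # file name -> set of caching nodes

            def record(self, node, filename):
                self.cache.setdefault(filename, set()).add(node)

            def place(self, job_files):
                return max(self.nodes, key=lambda n: sum(
                    n in self.cache.get(f, ()) for f in job_files))

        coord = CacheCoordinator(["node1", "node2", "node3"])
        coord.record("node2", "run2_jets.root")
        print(coord.place(["run2_jets.root"]))   # -> node2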

  8. CernVM Co-Pilot: a Framework for Orchestrating Virtual Machines Running Applications of LHC Experiments on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Sánchez, C Aguado; Blomer, J; Buncic, P

    2011-01-01

    CernVM Co-Pilot is a framework for the delivery and execution of the workload on remote computing resources. It consists of components which are developed to ease the integration of geographically distributed resources (such as commercial or academic computing clouds, or the machines of users participating in volunteer computing projects) into existing computing grid infrastructures. The Co-Pilot framework can also be used to build an ad-hoc computing infrastructure on top of distributed resources. In this paper we present the architecture of the Co-Pilot framework, describe how it is used to execute the jobs of the ALICE and ATLAS experiments, as well as to run the Monte-Carlo simulation application of CERN Theoretical Physics Group.

  9. Comment on "An Assessment of the ICE-6G_C (VM5a) Glacial Isostatic Adjustment Model" by Purcell et al.

    Science.gov (United States)

    Richard Peltier, W.; Argus, Donald F.; Drummond, Rosemarie

    2018-02-01

    The most recently published model of the glacial isostatic adjustment process in the ICE-NG (VMX) sequence from the University of Toronto, denoted ICE-6G_C (VM5a), was originally developed to degree and order 256 in spherical harmonics and has been shown to provide accurate fits to a voluminous database of GPS observations from North America, Eurasia, and Antarctica, to time dependent gravity data being provided by the GRACE satellites, and to radiocarbon-dated relative sea level histories through the Holocene epoch. The authors of the Purcell et al. (2016, https://doi.org/10.1002/2015JB012742) paper have suggested this model to be flawed. We have produced a further version of our model, denoted ICE-6G_D (VM5a), by employing the same BEDMAP2 bathymetry for the Southern Ocean as employed in their analysis which has somewhat reduced the differences between our results. However, significant physically important differences remain, including the magnitude of present-day vertical crustal motion in the embayments and in the spectrum of Stokes coefficients for present-day geoid height time dependence which continues to "flatten" at high spherical harmonic degree. We explore the reasons for these differences and trace them to the use by Purcell et al. of a loading history for the embayments that differs significantly from that tabulated for both the original and modified versions of our model.

  10. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Energy Technology Data Exchange (ETDEWEB)

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.

  11. Temporal locality optimizations for stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Energy Technology Data Exchange (ETDEWEB)

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.
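
    As a hedged illustration of the cache-blocking transformation the paper automates (sketched here by hand; tile size is a tunable, and the point is the loop structure rather than Python performance):

        import numpy as np

        # One Jacobi-style sweep of a 2-D five-point stencil, tile by tile,
        # so that each tile's working set stays resident in cache.
        def blocked_stencil(a, tile=64):
            out = np.zeros_like(a)
            n, m = a.shape
            for ii in range(1, n - 1, tile):
                for jj in range(1, m - 1, tile):
                    i1 = min(ii + tile, n - 1)
                    j1 = min(jj + tile, m - 1)
                    out[ii:i1, jj:j1] = 0.25 * (
                        a[ii-1:i1-1, jj:j1] + a[ii+1:i1+1, jj:j1]
                        + a[ii:i1, jj-1:j1-1] + a[ii:i1, jj+1:j1+1])
            return out

        print(blocked_stencil(np.arange(36.0).reshape(6, 6), tile=2))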

  12. Evaluation of cache memory sharing on the performance of multi-core architectures

    OpenAIRE

    Marco Antonio Zanata Alves

    2009-01-01

    In the current context of multi-core innovation, in which new integration technologies are providing a growing number of transistors per chip, the study of techniques for increasing data throughput is of utmost importance for current and future multi-core and many-core processors. Given the continuous demand for computational performance, cache memories have been widely adopted in the various kinds of computer architecture designs. The processors currently available on the market...

  13. An integrated GIS/remote sensing data base in North Cache soil conservation district, Utah: A pilot project for the Utah Department of Agriculture's RIMS (Resource Inventory and Monitoring System)

    Science.gov (United States)

    Wheeler, D. J.; Ridd, M. K.; Merola, J. A.

    1984-01-01

    A basic geographic information system (GIS) for the North Cache Soil Conservation District (SCD) was sought for selected resource problems. Since the resource management issues in the North Cache SCD are very complex, it is not feasible in the initial phase to generate all the physical, socioeconomic, and political baseline data needed for resolving all management issues. A selection of critical variables becomes essential. Thus, there are four specific objectives: (1) assess resource management needs and determine which resource factors are most fundamental for building a beginning data base; (2) evaluate the variety of data gathering and analysis techniques for the resource factors selected; (3) incorporate the resulting data into a useful and efficient digital data base; and (4) demonstrate the application of the data base to selected real-world resource management issues.

  14. Clock generation and distribution for the 130-nm Itanium$^{R}$ 2 processor with 6-MB on-die L3 cache

    CERN Document Server

    Tam, S; Limaye, R D

    2004-01-01

    The clock generation and distribution system for the 130-nm Itanium 2 processor operates at 1.5 GHz with a skew of 24 ps. The Itanium 2 processor features 6 MB of on-die L3 cache and has a die size of 374 mm². Fuse-based clock de-skew enables post-silicon clock optimization to gain higher frequency. This paper describes the clock generation, global clock distribution, local clocking, and the clock skew optimization feature.

  15. MIPAS temperature from the stratosphere to the lower thermosphere: Comparison of vM21 with ACE-FTS, MLS, OSIRIS, SABER, SOFIE and lidar measurements

    Directory of Open Access Journals (Sweden)

    M. García-Comas

    2014-11-01

    Full Text Available We present vM21 MIPAS temperatures from the lower stratosphere to the lower thermosphere, which cover all optimized resolution measurements performed by MIPAS in the middle-atmosphere, upper-atmosphere and noctilucent-cloud modes during its lifetime, i.e., from January 2005 to April 2012. The main upgrades with respect to the previous version of MIPAS temperatures (vM11) are the update of the spectroscopic database, the use of a different climatology of atomic oxygen and carbon dioxide, and the improvement in important technical aspects of the retrieval setup (temperature gradient along the line of sight and offset regularizations, apodization accuracy). Additionally, an updated version of ESA-calibrated L1b spectra (5.02/5.06) is used. The vM21 temperatures correct the main systematic errors of the previous version because they provide on average a 1–2 K warmer stratopause and middle mesosphere, and a 6–10 K colder mesopause (except in high-latitude summers) and lower thermosphere. These lead to a remarkable improvement in MIPAS comparisons with ACE-FTS, MLS, OSIRIS, SABER, SOFIE and the two Rayleigh lidars at Mauna Loa and Table Mountain, which, with a few specific exceptions, typically exhibit differences smaller than 1 K below 50 km and smaller than 2 K at 50–80 km in spring, autumn and winter at all latitudes, and in summer at low to midlatitudes. Differences in the high-latitude summers are typically smaller than 1 K below 50 km, smaller than 2 K at 50–65 km and 5 K at 65–80 km. Differences between MIPAS and the other instruments in the mid-mesosphere are generally negative. The MIPAS mesopause is within 4 K of the other instruments' measurements, except in the high-latitude summers, when it is within 5–10 K, being warmer there than SABER, MLS and OSIRIS and colder than ACE-FTS and SOFIE. The agreement in the lower thermosphere is typically better than 5 K, except for high latitudes during spring and summer, when MIPAS usually exhibits larger

  16. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
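
    The paper's modified model is not reproduced here; as a minimal sketch of the underlying approach, the classic stretched-exponential form f(t) = exp(-(t/tau)**beta) can be fitted to measured service or inter-access times (synthetic stand-in data below):

        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, tau, beta):
            return np.exp(-(t / tau) ** beta)

        rng = np.random.default_rng(0)
        t = np.linspace(0.1, 10.0, 50)
        observed = stretched_exp(t, 2.0, 0.6) + rng.normal(0, 0.01, t.size)

        (tau, beta), _ = curve_fit(stretched_exp, t, observed, p0=(1.0, 1.0))
        print("tau=%.2f beta=%.2f" % (tau, beta))   # close to 2.0 and 0.6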

  17. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  18. Multiple organ gigantism caused by mutation in VmPPD gene in blackgram (Vigna mungo).

    Science.gov (United States)

    Naito, Ken; Takahashi, Yu; Chaitieng, Bubpa; Hirano, Kumi; Kaga, Akito; Takagi, Kyoko; Ogiso-Tanaka, Eri; Thavarasook, Charaspon; Ishimoto, Masao; Tomooka, Norihiko

    2017-03-01

    Seed size is one of the most important traits in leguminous crops. We obtained a recessive mutant of blackgram that had greatly enlarged leaves, stems and seeds. The mutant produced 100% bigger leaves, 50% more biomass and 70% larger seeds, though it produced 40% fewer seeds. We designated the mutant as multiple-organ-gigantism (mog) and found the mog phenotype was due to an increase in cell number but not in cell size. We also found the mog mutant showed a rippled leaf (rl) phenotype, which was probably caused by a pleiotropic effect of the mutation. We performed map-based cloning and successfully identified an 8 bp deletion in the coding sequence of the VmPPD gene, an orthologue of Arabidopsis PEAPOD (PPD) that regulates the arrest of cell divisions in meristematic cells. We found no other mutations in the neighboring genes between the mutant and the wild type. We also knocked down GmPPD genes and reproduced both the mog and rl phenotypes in soybean. Controlling PPD genes to produce the mog phenotype is highly valuable for breeding, since larger seed size could directly increase the commercial value of grain legumes.

  19. Contrasting patterns of survival and dispersal in multiple habitats reveal an ecological trap in a food-caching bird.

    Science.gov (United States)

    Norris, D Ryan; Flockhart, D T Tyler; Strickland, Dan

    2013-11-01

    A comprehensive understanding of how natural and anthropogenic variation in habitat influences populations requires long-term information on how such variation affects survival and dispersal throughout the annual cycle. Gray jays Perisoreus canadensis are widespread boreal resident passerines that use cached food to survive over the winter and to begin breeding during the late winter. Using multistate capture-recapture analysis, we examined apparent survival and dispersal in relation to habitat quality in a gray jay population over 34 years (1977-2010). Prior evidence suggests that natural variation in habitat quality is driven by the proportion of conifers on territories because of their superior ability to preserve cached food. Although neither adults (>1 year) nor juveniles (<1 year) survived better on high-conifer territories, both age classes were less likely to leave high-conifer territories and, when they did move, were more likely to disperse to high-conifer territories. In contrast, survival rates were lower on territories that were adjacent to a major highway compared to territories that did not border the highway, but there was no evidence for directional dispersal towards or away from highway territories. Our results support the notion that natural variation in habitat quality is driven by the proportion of coniferous trees on territories and provide the first evidence that high-mortality highway habitats can act as an equal-preference ecological trap for birds. Reproductive success, as shown in a previous study, but not survival, is sensitive to natural variation in habitat quality, suggesting that gray jays, despite living in harsh winter conditions, likely favor the allocation of limited resources towards self-maintenance over reproduction.

  20. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example, which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is alimented (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). From this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  1. Potential Mechanisms Driving Population Variation in Spatial Memory and the Hippocampus in Food-caching Chickadees.

    Science.gov (United States)

    Croston, Rebecca; Branch, Carrie L; Kozlovsky, Dovid Y; Roth, Timothy C; LaDage, Lara D; Freas, Cody A; Pravosudov, Vladimir V

    2015-09-01

    Harsh environments and severe winters have been hypothesized to favor improvement of the cognitive abilities necessary for successful foraging. Geographic variation in winter climate, then, is likely associated with differences in selection pressures on cognitive ability, which could lead to evolutionary changes in cognition and its neural mechanisms, assuming that variation in these traits is heritable. Here, we focus on two species of food-caching chickadees (genus Poecile), which rely on stored food for survival over winter and require the use of spatial memory to recover their stores. These species also exhibit extensive climate-related population level variation in spatial memory and the hippocampus, including volume, the total number and size of neurons, and adults' rates of neurogenesis. Such variation could be driven by several mechanisms within the context of natural selection, including independent, population-specific selection (local adaptation), environment experience-based plasticity, developmental differences, and/or epigenetic differences. Extensive data on cognition, brain morphology, and behavior in multiple populations of these two species of chickadees along longitudinal, latitudinal, and elevational gradients in winter climate are most consistent with the hypothesis that natural selection drives the evolution of local adaptations associated with spatial memory differences among populations. Conversely, there is little support for the hypotheses that environment-induced plasticity or developmental differences are the main causes of population differences across climatic gradients. Available data on epigenetic modifications of memory ability are also inconsistent with the observed patterns of population variation, with birds living in more stressful and harsher environments having better spatial memory associated with a larger hippocampus and a larger number of hippocampal neurons. Overall, the existing data are most consistent with the

  2. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Full Text Available Field-programmable gate arrays (FPGAs) and other reconfigurable computing (RC) devices have been widely shown to have numerous advantages, including order-of-magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures). In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
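
    A toy software analogue of the traversal cache (the real design streams the serialized traversal to an FPGA): a pointer-based traversal is flattened once, and repeated traversals then stream the flat copy instead of chasing pointers:

        # Serialize an irregular, pointer-based traversal once; repeated
        # traversals then stream the flat list (the "traversal cache").
        class Node:
            def __init__(self, value, left=None, right=None):
                self.value, self.left, self.right = value, left, right

        def serialize(node, out):
            if node is not None:              # pointer-chasing happens once
                out.append(node.value)
                serialize(node.left, out)
                serialize(node.right, out)
            return out

        tree = Node(1, Node(2), Node(3, Node(4)))
        traversal_cache = serialize(tree, [])
        print(sum(traversal_cache))           # later passes reuse the cache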

  3. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

    Full Text Available We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis) appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus), which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were near (within 50 m) or far (200 m) from the railway. Relative to far ones, near transects had 2.4 times more squirrel sightings, but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  4. Multi-provider architecture for cloud outsourcing of medical imaging repositories.

    Science.gov (United States)

    Godinho, Tiago Marques; Bastião Silva, Luís A; Costa, Carlos; Oliveira, José Luís

    2014-01-01

    Over the last few years, the extended usage of medical imaging procedures has drawn the medical community's attention to the optimization of imaging workflows. More recently, the federation of multiple institutions into a seamless distribution network has brought hope of higher-quality healthcare services along with more efficient resource management. As a result, medical institutions are constantly looking for the best infrastructure to deploy their imaging archives. In this scenario, public cloud infrastructures arise as major candidates, as they offer elastic storage space and optimal data availability without large maintenance costs or IT personnel requirements, in a pay-as-you-go model. However, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. This document proposes a multi-provider architecture for integration of outsourced archives with in-house PACS resources, taking advantage of external providers to store medical imaging studies, without disregarding security. It enables the retrieval of images from multiple archives simultaneously, improving performance and data availability and avoiding the vendor lock-in problem. Moreover, it enables load balancing and caching techniques.
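
    The retrieval strategy described, querying several archives at once and keeping a local cache, can be sketched in a few lines. This is a minimal illustration under assumed names (ARCHIVES, fetch, the WADO-style URLs are stand-ins), not the architecture's actual interface.

      # Query all configured archives concurrently, keep whichever
      # answers first, and cache the study locally for later requests.
      from concurrent.futures import ThreadPoolExecutor, as_completed

      ARCHIVES = ["https://cloud-a.example/wado", "https://cloud-b.example/wado"]
      _cache = {}

      def fetch(archive, study_uid):
          # Placeholder for a real DICOM/WADO retrieval call.
          return f"{study_uid} from {archive}"

      def retrieve(study_uid):
          if study_uid in _cache:                    # cache hit: no network
              return _cache[study_uid]
          with ThreadPoolExecutor(len(ARCHIVES)) as pool:
              futures = [pool.submit(fetch, a, study_uid) for a in ARCHIVES]
              result = next(as_completed(futures)).result()  # first answer wins
          _cache[study_uid] = result
          return result

      print(retrieve("1.2.840.113619.2.55"))   # fetched once, then cached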

  5. A Yeast Purification System for Human Translation Initiation Factors eIF2 and eIF2B epsilon and Their Use in the Diagnosis of CACH/VWM Disease

    NARCIS (Netherlands)

    de Almeida, R.A.; Fogli, A.; Gaillard, M.; Scheper, G.C.; Boesflug-Tanguy, O.; Pavitt, G.D.

    2013-01-01

    Recessive inherited mutations in any of five subunits of the general protein synthesis factor eIF2B are responsible for a white matter neurodegenerative disease with a large clinical spectrum. The classical form is called Childhood Ataxia with CNS hypomyelination (CACH) or Vanishing White Matter (VWM) disease.

  6. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    Science.gov (United States)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project aims to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full-scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58,000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CernVM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
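
    The Decision Engine described above reduces to a cost minimization over candidate (zone, instance type) pairs. The toy sketch below illustrates one plausible form of that choice; the candidates, prices, interruption probabilities, and penalty are all invented for illustration and are neither AWS figures nor the project's actual model.

      # Pick the (zone, instance type) with the lowest expected cost,
      # where an interruption adds a penalty for lost work.
      candidates = [
          # (zone, instance, $/hour, probability of interruption per hour)
          ("us-east-1a", "m4.xlarge", 0.060, 0.05),
          ("us-east-1b", "m4.xlarge", 0.055, 0.15),
          ("us-west-2a", "c4.2xlarge", 0.110, 0.02),
      ]

      LOST_WORK_PENALTY = 0.50  # assumed cost of re-running an interrupted job

      def expected_cost(price, p_interrupt):
          return price + p_interrupt * LOST_WORK_PENALTY

      zone, itype, price, p = min(candidates,
                                  key=lambda c: expected_cost(c[2], c[3]))
      print(f"run on {itype} in {zone}: expected ${expected_cost(price, p):.3f}/h")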

  7. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    Energy Technology Data Exchange (ETDEWEB)

    Timm, S. [Fermilab; Cooper, G. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Grassano, D. [Fermilab; Tiradani, A. [Fermilab; Krishnamurthy, R. [IIT, Chicago; Vinayagam, S. [IIT, Chicago; Raicu, I. [IIT, Chicago; Wu, H. [IIT, Chicago; Ren, S. [IIT, Chicago; Noh, S. Y. [KISTI, Daejeon

    2017-11-22

    The Fermilab HEPCloud Facility Project aims to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full-scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58,000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CernVM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.

  8. Inhibition of xyloglucanase from an alkalothermophilic Thermomonospora sp. by a peptidic aspartic protease inhibitor from Penicillium sp. VM24.

    Science.gov (United States)

    Menon, Vishnu; Rao, Mala

    2012-11-01

    A bifunctional inhibitor from Penicillium sp. VM24 causing inactivation of xyloglucanase from Thermomonospora sp. and an aspartic protease from Aspergillus saitoi was identified. Steady-state kinetics studies of xyloglucanase and the inhibitor revealed an irreversible, non-competitive, two-step inhibition mechanism with IC50 and Ki values of 780 and 500 nM, respectively. The interaction of o-phthalaldehyde (OPTA)-labeled xyloglucanase with the inhibitor revealed that the inhibitor binds to the active site of the enzyme. Far- and near-UV spectrophotometric analysis suggests that the conformational changes induced in xyloglucanase by the inhibitor may be due to irreversible denaturation of the enzyme. The bifunctional inhibitor may have potential as a biocontrol agent for the protection of plants against phytopathogenic fungi.

  9. PROOF as a Service on the Cloud: a Virtual Analysis Facility based on the CernVM ecosystem

    CERN Document Server

    Berzano, Dario; Buncic, Predrag; Charalampidis, Ioannis; Ganis, Gerardo; Lestaris, Georgios; Meusel, René

    2014-01-01

    PROOF, the Parallel ROOT Facility, is a ROOT-based framework which enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be far from straightforward. Recently, great effort has been put into provisioning generic PROOF analysis facilities with zero configuration, with the added advantage of positively affecting both stability and scalability, making the deployment operations feasible even for the end user. Since a growing amount of large-scale computing resources are nowadays made available by Cloud providers in virtualized form, we have developed the Virtual PROOF-based Analysis Facility: a cluster appliance combining the solid CernVM ecosystem and PoD (PROOF on Demand), ready to be deployed on the Cloud and leveraging some peculiar Cloud features such as elasticity. We will show how this approach is effective both for sy...

  10. A New Resources Provisioning Method Based on QoS Differentiation and VM Resizing in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Full Text Available In order to improve host energy efficiency in IaaS, we propose an adaptive host resource provisioning method, CoST, which is based on QoS differentiation and VM resizing. The control model can adaptively adjust control parameters according to real-time application performance, in order to cope with changes in load. CoST takes advantage of the fact that different types of applications have different degrees of sensitivity to performance and cost. It places two different types of VMs on the same host and dynamically adjusts their sizes based on load forecasting and QoS feedback. It not only guarantees the performance defined in the SLA, but also keeps the host running in an energy-efficient state. A real Google cluster trace and host power data are used to evaluate the proposed method. Experimental results show that CoST can provide the performance-sensitive application with steady QoS and simultaneously speed up the overall processing of the performance-tolerant application by 20-66%. The host energy efficiency is significantly improved by 7-23%.
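
    The dynamic-resizing feedback loop can be pictured with a small sketch: one performance-sensitive VM and one performance-tolerant VM share a host, and cores shift between them according to QoS feedback. The thresholds, step size, and core counts below are assumptions for illustration, not the paper's controller.

      # Minimal sketch of the CoST idea: grow the sensitive VM while it
      # misses its SLA, shrink it (freeing cores for batch work) otherwise.
      HOST_CORES = 16
      QOS_TARGET = 0.95   # e.g. fraction of requests meeting the SLA latency
      STEP = 1            # cores moved per control period

      sensitive, tolerant = 8, 8  # initial VM sizes in cores

      def resize(measured_qos):
          """One control period of the (assumed) feedback rule."""
          global sensitive, tolerant
          if measured_qos < QOS_TARGET and tolerant > STEP:
              sensitive += STEP
              tolerant -= STEP
          elif measured_qos > QOS_TARGET + 0.02 and sensitive > STEP:
              sensitive -= STEP
              tolerant += STEP

      for qos in [0.90, 0.92, 0.98, 0.99]:   # simulated QoS feedback
          resize(qos)
          print(f"sensitive={sensitive} cores, tolerant={tolerant} cores")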

  11. Advanced neuroblastoma: improved response rate using a multiagent regimen (OPEC) including sequential cisplatin and VM-26.

    Science.gov (United States)

    Shafford, E A; Rogers, D W; Pritchard, J

    1984-07-01

    Forty-two children, all over one year of age, were given vincristine, cyclophosphamide, and sequentially timed cisplatin and VM-26 (OPEC) or OPEC and doxorubicin (OPEC-D) as initial treatment for newly diagnosed stage III or IV neuroblastoma. Good partial response was achieved in 31 patients (74%) overall and in 28 (78%) of 36 patients whose treatment adhered to the chemotherapy protocol, compared with a 65% response rate achieved in a previous series of children treated with pulsed cyclophosphamide and vincristine with or without doxorubicin. Only six patients, including two of the six children whose treatment did not adhere to protocol, failed to respond, but there were five early deaths from treatment-related complications. Tumor response to OPEC, which was the less toxic of the two regimens, was at least as good as tumor response to OPEC-D. Cisplatin-induced morbidity was clinically significant in only one patient and was avoided in others by careful monitoring of glomerular filtration rate and hearing. Other centers should test the efficacy of OPEC or equivalent regimens in the treatment of advanced neuroblastoma.

  12. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs further evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be used in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that our scheme outperforms the original ones in terms of total packet number and average transmission latency. We expect that SCC will contribute an efficient solution to the related studies.
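
    The collaborative step, a node answering from its own cache, then from cluster neighbours, and only then from the origin, is the part that lends itself to a sketch. The following Python fragment is a simplified illustration of that lookup order, with invented names (FogNode, origin); it is not the SCC algorithm itself.

      # Name-based lookup: local cache, then neighbours, then origin.
      class FogNode:
          def __init__(self, name):
              self.name = name
              self.cache = {}        # content name -> data
              self.neighbours = []

          def get(self, content_name, origin):
              if content_name in self.cache:
                  return self.cache[content_name], f"hit at {self.name}"
              for nb in self.neighbours:                 # collaborative step
                  if content_name in nb.cache:
                      data = nb.cache[content_name]
                      self.cache[content_name] = data    # cache on the path
                      return data, f"hit at neighbour {nb.name}"
              data = origin[content_name]                # fall back to origin
              self.cache[content_name] = data
              return data, "fetched from origin"

      origin = {"/sensor/42/temp": "21.3C"}
      a, b = FogNode("A"), FogNode("B")
      a.neighbours, b.neighbours = [b], [a]
      print(a.get("/sensor/42/temp", origin))   # fetched from origin
      print(b.get("/sensor/42/temp", origin))   # hit at neighbour A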

  13. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs further evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be used in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that our scheme outperforms the original ones in terms of total packet number and average transmission latency. We expect that SCC will contribute an efficient solution to the related studies.

  14. A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries

    Science.gov (United States)

    Santos, Ricardo Jorge; Bernardino, Jorge

    On-line analytical processing against data warehouse databases is a common way of obtaining decision-making information in almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most frequently transacted items, etc. This means that many similar OLAP queries are repeated periodically, and often simultaneously, by several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data which was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because only the most recent data needs to be processed. Our tool also minimizes the execution time and resource consumption of similar queries simultaneously executed by different users, putting the later ones on hold until the first finishes and then returning the results to all of them. The stored query results are held until they are considered outdated, and are then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
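
    For additive aggregates, the "execute only against new data and join with the stored result" idea reduces to keeping a high-water mark per cached query. The sketch below is a hedged, minimal illustration of that mechanism (the table, query name, and date-based mark are invented), not the tool's actual implementation.

      # Incremental query cache: store each query's last result together
      # with a high-water mark, re-aggregate only rows that arrived since,
      # and merge. Works as-is for additive aggregates such as SUM.
      rows = [("2024-01-01", 100), ("2024-01-02", 250)]   # (load_date, amount)
      query_cache = {}   # query id -> (cached_sum, last_seen_date)

      def total_sales():
          cached_sum, mark = query_cache.get("total_sales", (0, ""))
          new_rows = [amt for d, amt in rows if d > mark]   # only fresh data
          result = cached_sum + sum(new_rows)
          latest = max((d for d, _ in rows), default=mark)
          query_cache["total_sales"] = (result, latest)
          return result

      print(total_sales())        # 350, full scan on first run
      rows.append(("2024-01-03", 50))
      print(total_sales())        # 400, only the new row is aggregated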

  15. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs further evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be used in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that our scheme outperforms the original ones in terms of total packet number and average transmission latency. We expect that SCC will contribute an efficient solution to the related studies. PMID:29104219

  16. An open architecture for medical image workstation

    Science.gov (United States)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, while at the same time overcoming the performance constraints in transferring and processing the large-scale and ever-increasing image data in a healthcare enterprise, we designed and implemented a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieval, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also developed a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, along with the fact that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.

  17. Simulation of an image network in a medical image information system

    International Nuclear Information System (INIS)

    Massar, A.D.A.; De Valk, J.P.J.; Reijns, G.L.; Bakker, A.R.

    1985-01-01

    The desirability of an integrated (digital) communication system for medical images is widely accepted. In the USA and in Europe several experimental projects are in progress to realize (a part of) such a system; among these is the IMAGIS project in the Netherlands. From the conclusions of the preliminary studies performed, some requirements can be formulated that such a system should meet in order to be accepted by its users. For example, the storage resolution of the images should match the maximum resolution of the presently acquired digital images; this determines the amount of data and therefore the storage requirements. Further, the desired images should be there when needed; this time constraint determines the speed requirements to be imposed on the system. As compared to current standards, very large storage capacities and very fast communication media are needed to meet these requirements. By employing caching techniques and suitable data compression schemes for the storage, and by carefully choosing the network protocols, raw capacity demands can be alleviated. A communication network is needed to make the imaging system available over a larger area. As the network is very likely to become a major bottleneck for system performance, the effects of variations in its various attributes have to be carefully studied and analysed. After interesting (although preliminary) results had been obtained using a simulation model for a layered storage structure, it was decided to apply simulation to this problem as well. Effects of network topology, access protocols and buffering strategies will be tested. Changes in performance resulting from changes in various network parameters will be studied. Results of this study in its present state are presented.

  18. Measurement and control systems for an imaging electromagnetic flow metre.

    Science.gov (United States)

    Zhao, Y Y; Lucas, G; Leeungculsatien, T

    2014-03-01

    Electromagnetic flow metres based on the principle of Faraday's law of induction have been used successfully in many industries. The conventional electromagnetic flow metre can measure the mean liquid velocity in axisymmetric single-phase flows. However, in order to achieve velocity profile measurements in single-phase flows with non-uniform velocity profiles, a novel imaging electromagnetic flow metre (IEF) has been developed, which is described in this paper. The novel electromagnetic flow metre, which is based on 'weight value' theory to reconstruct velocity profiles, is interfaced with a 'Microrobotics VM1' microcontroller as a stand-alone unit. The work undertaken in the paper demonstrates that an imaging electromagnetic flow metre for liquid velocity profile measurement is an instrument that is highly suited for control via a microcontroller.

  19. Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks.

    Science.gov (United States)

    Yuan, Peiyan; Wu, Honghai; Zhao, Xiaoyan; Dong, Zhengnan

    2017-07-18

    The node buffer size has a large influence on the performance of Mobile Opportunistic Networks (MONs). This is mainly because each node should temporarily cache packets to deal with the intermittently connected links. In this paper, we study fundamental bounds on the node buffer size below which the network system cannot achieve the expected performance, such as transmission delay and packet delivery ratio. Given the condition that each link has the same probability p of being active in the next time slot when the link is inactive, and q of being inactive when the link is active, there exists a critical value p_c from a percolation perspective. If p > p_c, the network is in the supercritical case, where we found that there is an achievable upper bound on the buffer size of nodes, independent of the inactive probability q. When p < p_c, the network is in the subcritical case, and there exists a closed-form solution for buffer occupation, which is independent of the size of the network.

  20. Activity in dlPFC and its effective connectivity to vmPFC are associated with temporal discounting

    Directory of Open Access Journals (Sweden)

    Todd A Hare

    2014-03-01

    Full Text Available There is widespread interest in identifying computational and neurobiological mechanisms that influence the ability to choose long-term benefits over more proximal and readily available rewards in domains such as dietary and economic choice. We present the results of a human fMRI study that examines how neural activity relates to observed individual differences in the discounting of future rewards during an intertemporal monetary choice task. We found that a region of left dlPFC BA-46 was more active in trials where subjects chose delayed rewards, after controlling for the subjective value of those rewards. We also found that the connectivity from dlPFC BA-46 to a region of vmPFC widely associated with the computation of stimulus values increased at the time of choice, especially during trials in which subjects chose delayed rewards. Finally, we found that estimates of effective connectivity between these two regions played a critical role in predicting out-of-sample, between-subject differences in discount rates. Together with previous findings in dietary choice, these results suggest that a common set of computational and neurobiological mechanisms facilitate choices in favor of long-term reward in both settings.

  1. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    Full Text Available The web is the most widely used communication medium today, owing to its flexibility and the almost endless range of tools available for browsing it. As a result, around a million pages are added to it every day, making it the largest library, with textual and multimedia resources, ever seen, albeit a library distributed across all the servers that hold that information. As a reference source, efficient data retrieval is important. Web caching serves this purpose: a technique by which some web data are stored temporarily on local servers, so that they do not have to be requested from the remote server every time a user asks for them. However, the amount of memory available on local servers to store that information is limited: one must decide which web objects are stored and which are not. This gives rise to several replacement policies, which are explored in this article. Using an experiment with real web requests, we compare the performance of these techniques.
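
    Among the replacement policies such studies typically compare, LRU (least recently used) is the canonical baseline. The following is a minimal Python sketch of LRU eviction for a web cache; the capacity and URLs are made-up examples, not the article's experiment.

      # LRU web cache: on overflow, evict the least recently used entry.
      from collections import OrderedDict

      class LRUCache:
          def __init__(self, capacity=3):
              self.capacity = capacity
              self.store = OrderedDict()   # oldest entry first

          def get(self, url):
              if url not in self.store:
                  return None
              self.store.move_to_end(url)  # mark as most recently used
              return self.store[url]

          def put(self, url, page):
              self.store[url] = page
              self.store.move_to_end(url)
              if len(self.store) > self.capacity:
                  self.store.popitem(last=False)  # evict least recently used

      cache = LRUCache()
      for u in ["/a", "/b", "/c", "/a", "/d"]:
          cache.put(u, f"<html>{u}</html>")
      print(list(cache.store))   # ['/c', '/a', '/d'] -- '/b' was evicted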

  2. The Small GTPase Rac1 Contributes to Extinction of Aversive Memories of Drug Withdrawal by Facilitating GABAA Receptor Endocytosis in the vmPFC.

    Science.gov (United States)

    Wang, Weisheng; Ju, Yun-Yue; Zhou, Qi-Xin; Tang, Jian-Xin; Li, Meng; Zhang, Lei; Kang, Shuo; Chen, Zhong-Guo; Wang, Yu-Jun; Ji, Hui; Ding, Yu-Qiang; Xu, Lin; Liu, Jing-Gen

    2017-07-26

    Extinction of aversive memories has been a major concern in neuropsychiatric disorders, such as anxiety disorders and drug addiction. However, the mechanisms underlying extinction of aversive memories are not fully understood. Here, we report that extinction of conditioned place aversion (CPA) to naloxone-precipitated opiate withdrawal in male rats activates the Rho GTPase Rac1 in the ventromedial prefrontal cortex (vmPFC) in a BDNF-dependent manner, which determines GABAA receptor (GABAAR) endocytosis via triggering synaptic translocation of activity-regulated cytoskeleton-associated protein (Arc) through facilitating actin polymerization. Active Rac1 is essential and sufficient for GABAAR endocytosis and CPA extinction. Knockdown of Rac1 expression within the vmPFC of rats using Rac1-shRNA suppressed GABAAR endocytosis and CPA extinction, whereas expression of a constitutively active form of Rac1 accelerated GABAAR endocytosis and CPA extinction. The crucial role of GABAAR endocytosis in LTP induction and CPA extinction is evinced by the findings that blockade of GABAAR endocytosis by a dynamin function-blocking peptide (Myr-P4) abolishes LTP induction and CPA extinction. Thus, the present study provides the first evidence that Rac1-dependent GABAAR endocytosis plays a crucial role in extinction of aversive memories and reveals the sequence of molecular events that contribute to learning-experience modulation of synaptic GABAAR endocytosis. SIGNIFICANCE STATEMENT This study reveals that Rac1-dependent GABAAR endocytosis plays a crucial role in extinction of aversive memories associated with drug withdrawal and identifies Arc as a downstream effector of Rac1 regulation of synaptic plasticity as well as learning and memory, thereby suggesting therapeutic targets to promote extinction of unwanted memories.

  3. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failures. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  4. Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs

    Directory of Open Access Journals (Sweden)

    M. Komar

    2015-01-01

    Full Text Available In this paper, the principal architecture of a general-purpose CPU and its main components are discussed, the evolution of CPUs is considered, and the drawbacks that prevent future CPU development are mentioned. Further, solutions proposed so far are addressed and a new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables reliable interaction between cores in multicore CPUs using the terahertz band, 0.1-10 THz. The presented architecture addresses the scalability problem of existing processors and may potentially allow them to scale to tens of cores. As an in-depth analysis of the applicability of the suggested architecture requires accurate prediction of traffic in current and next generations of processors, we consider a set of approaches for traffic estimation in modern CPUs, discussing their benefits and drawbacks. The authors identify traffic measurement using existing software tools as the most promising approach for traffic estimation, and they use the Intel Performance Counter Monitor for this purpose. Three types of CPU loads are considered, including two artificial tests and the background system load. For each load type, the amount of data transmitted through the L2-L3 interface is reported for various input parameters, including its dependence on the number of active cores and the operational frequency.

  5. Performance evaluation of the General Electric eXplore CT 120 micro-CT using the vmCT phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bahri, M.A., E-mail: M.Bahri@ulg.ac.be [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Warnock, G.; Plenevaux, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Choquet, P.; Constantinesco, A. [Biophysique et Medecine Nucleaire, Hopitaux universitaires de Strasbourg, Strasbourg (France); Salmon, E.; Luxen, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); Seret, A. [ULg-Liege University, Cyclotron Research Centre, Liege, Bat. 30, Allee du 6 aout, 8 (Belgium); ULg-Liege University, Experimental Medical Imaging, Liege (Belgium)

    2011-08-21

    The eXplore CT 120 is the latest-generation micro-CT from General Electric. It is equipped with a high-power tube and a flat-panel detector. It allows high-resolution and high-contrast fast CT scanning of small animals. The aim of this study was to compare the performance of the eXplore CT 120 with that of the eXplore Ultra, its predecessor, for which the methodology using the vmCT phantom has already been described. The phantom was imaged using typical rat (fast scan or F) or mouse (in vivo bone scan or H) scanning protocols. With the slanted-edge method, a 10% modulation transfer function (MTF) was observed at 4.4 (F) and 3.9-4.4 (H) mm⁻¹, corresponding to 114 µm resolution. A somewhat larger MTF was obtained by the coil method, with the MTF for the thinnest coil (3.3 mm⁻¹) equal to 0.32 (F) and 0.34 (H). The geometric accuracy was better than 0.3%. There was a highly linear (R² > 0.999) relationship between measured and expected CT numbers for both the CT number accuracy and linearity sections of the phantom. A cupping effect was clearly seen on the uniform slices and the uniformity-to-noise ratio ranged from 0.52 (F) to 0.89 (H). The air CT number depended on the amount of polycarbonate surrounding the area where it was measured; a difference as high as approximately 200 HU was observed. This hindered the calibration of this scanner in HU, likely due to the absence of corrections for beam hardening and scatter in the reconstruction software. However, in view of the high linearity of the system, the implementation of these corrections would allow a good-quality calibration of the scanner in HU. In conclusion, the eXplore CT 120 achieved a better spatial resolution than the eXplore Ultra (based on previously reported specifications), and future software developments will include beam hardening and scatter corrections that will make the new-generation CT scanner even more promising.

  6. Speckle Imaging of Binary Stars with Large-Format CCDs

    Science.gov (United States)

    Horch, E.; Ninkov, Z.; Slawson, R. W.; van Altena, W. F.; Meyer, R. D.; Girard, T. M.

    1997-12-01

    In the past, bare (unintensified) CCDs have not been widely used in speckle imaging for two main reasons: 1) the readout rate of most scientific-grade CCDs is too slow to be able to observe at the high frame rates necessary to capture speckle patterns efficiently, and 2) the read noise of CCDs limits the detectability of fainter objects, where it becomes difficult to distinguish between speckles and noise peaks in the image. These facts have led to the current supremacy of intensified imaging systems (such as intensified CCDs) in this field, which can typically be read out at video rates or faster. We have developed a new approach that uses a large-format CCD not only to detect the incident photons but also to record many speckle patterns before the chip is read out. This approach effectively uses the large area of the CCD as a physical "memory cache" of previous speckle data frames. The method is described, and binary star observations from the University of Toronto Southern Observatory 60-cm telescope and the Wisconsin-Indiana-Yale-NOAO (WIYN) 3.5-m telescope are presented. Plans for future observing and instrumentation improvements are also outlined.

  7. Investigating the role of the ventromedial prefrontal cortex in the assessment of brands.

    Science.gov (United States)

    Santos, José Paulo; Seixas, Daniela; Brandão, Sofia; Moutinho, Luiz

    2011-01-01

    The ventromedial prefrontal cortex (vmPFC) is believed to be important in everyday preference judgments, processing emotions during decision-making. However, there is still controversy in the literature regarding the participation of the vmPFC. To further elucidate the contribution of the vmPFC to brand preference, we designed a functional magnetic resonance imaging (fMRI) study in which 18 subjects assessed positive, indifferent, and fictitious brands. Both the period during and after the decision process were analyzed, hoping to unravel temporally the role of the vmPFC, using modeled and model-free fMRI analysis. Considering together the period before and after decision-making, there was activation of the vmPFC when comparing positive with indifferent or fictitious brands. However, when the decision-making period was separated from the moment after the response, and especially for positive brands, the vmPFC was more active after the choice than during the decision process itself, challenging some of the existing literature. The results of the present study support the notion that the vmPFC may be unimportant in the decision stage of brand preference, questioning theories that postulate that the vmPFC is at the origin of such choices. Further studies are needed to investigate in detail why the vmPFC seems to be involved in brand preference only after the decision process.

  8. Diagnostic performance of calcification-suppressed coronary CT angiography using rapid kilovolt-switching dual-energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Yunaga, Hiroto; Ohta, Yasutoshi; Kitao, Shinichiro; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago City, Tottori (Japan); Kaetsu, Yasuhiro [Kakogawa Higashi Hospital, Department of Cardiology, Kakogawa (Japan); Watanabe, Tomomi; Furuse, Yoshiyuki; Yamamoto, Kazuhiro [Tottori University, Division of Cardiology, Department of Molecular Medicine and Therapeutics, Faculty of Medicine, Yonago (Japan)

    2017-07-15

    Multi-detector-row computed tomography angiography (MDCTA) plays an important role in the assessment of patients with suspected coronary artery disease. However, MDCTA tends to overestimate stenosis in calcified coronary artery lesions. The aim of our study was to evaluate the diagnostic performance of calcification-suppressed material density (MD) images produced by using a single-detector single-source dual-energy computed tomography (ssDECT). We enrolled 67 patients with suspected or known coronary artery disease who underwent ssDECT with rapid kilovolt-switching (80 and 140 kVp). Coronary artery stenosis was evaluated on the basis of MD images and virtual monochromatic (VM) images. The diagnostic performance of the two methods for detecting coronary artery disease was compared with that of invasive coronary angiography as a reference standard. We evaluated 239 calcified segments. In all the segments, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy for detecting significant stenosis were respectively 88%, 88%, 75%, 95% and 88% for the MD images, and 91%, 71%, 56%, 95% and 77% for the VM images. PPV was significantly higher on the MD images than on the VM images (P < 0.0001). Calcification-suppressed MD images improved PPV and diagnostic performance for calcified coronary artery lesions. (orig.)

  9. Germinación, microinjertación y cultivo de callos in vitro de Vasconcellea stipulata V.M. Badillo y Vasconcellea pubescens A.DC

    OpenAIRE

    Espinosa Soto, Isabel

    2016-01-01

    In several rural communities of the Andean and Central American regions, the leaves, latex and fruit of plants of the family Caricaceae are used for ethnomedical purposes. The family's most representative member is the papaya (Carica papaya L.), which is the most important commercial source of the proteolytic enzyme papain. The present work studies the species Vasconcellea stipulata V.M. Badillo and Vasconcellea pubescens A.DC, which belong to the genus Vasconcellea, members of the family Car...

  10. The ICE-6G_C (VM5a) Global Model of the GIA Process: Antarctica at High Spatial Resolution

    Science.gov (United States)

    Peltier, W. R.; Drummond, R.; Argus, D. F.

    2016-12-01

    The ICE-6G_C (VM5a) global model of the glacial isostatic adjustment process (Argus et al., 2014, GJI 198, 537-563; Peltier et al., 2015, JGR 119, doi:10.1002/2014JB011176) is the latest model in the ICE-nG (VMx) sequence. The model continues to be unique in that it is the only model whose properties are made freely available at each iterative step in its development. This latest version, which embodies detailed descriptions of the Laurentide, Fennoscandian/Barents Sea, Greenland and Antarctic ice sheets through the most recent glacial cycle, is a refinement based primarily upon the incorporation of the constraints provided by GPS measurements of the vertical and horizontal motion of the crust, as well as GRACE observations of the time-dependent gravity field. The model has been shown to provide exceptionally accurate predictions of these space-geodetic observations of the response to the most recent Late Quaternary glacial cycle. Particular attention has been paid to the Antarctic component, as it is well known, on the basis of analyses of the offshore sedimentary stratigraphy and the geomorphological characteristics of the continental shelf, that the Last Glacial Maximum state of the southern continent was one in which grounded ice extended out to the shelf break in most locations, including significant fractions of the Ross Sea and Weddell Sea embayments. In the latter regions especially, it is expected that grounded ice would have existed below sea level. In ICE-6G_C (VM5a) a grounding-line tracking algorithm was employed (Stuhne and Peltier, 2015, JGR 120, 1841-1865) in order to describe the unloading of the solid surface by ice that was initially grounded below sea level, an apparently unique characteristic of this model. In the initially published version, in which the Sea Level Equation (SLE) was inverted on a basis of spherical harmonics truncated at degree and order 256, this led to "ringing" in the embayments when the Stokes coefficients of the model

  11. La face cachée de l’ancestralité. Masques et affinité chez les Matis d’Amazonie brésilienne

    OpenAIRE

    Erikson, Philippe

    2009-01-01

    The hidden face of ancestrality: masks and affinity among the Matis of Brazilian Amazonia. This article shows how Matis masks reveal western Amazonian conceptions of temporality and of the succession of generations. After a discussion of the ceremonial aspect of the masquerades and of the ontological characteristics attributed to the mariwin spirits, the text argues that the latter, although associated with the group's dead and with endogenous values, represent aff...

  12. HD Photo: a new image coding technology for digital photography

    Science.gov (United States)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  13. Influence of knee joint position and sex on vastus medialis regional architecture.

    Science.gov (United States)

    Gallina, Alessio; Render, Jacqueline N; Santos, Jacquelyne; Shah, Hershal; Taylor, Dayna; Tomlin, Travis; Garland, S Jayne

    2018-06-01

    Ultrasound imaging was used to investigate vastus medialis (VM) architecture in 10 males and 10 females at different knee angles. The increase in muscle thickness occurs predominantly when the knee angle is changed from 0° (full extension) to 45°. Sex differences in the VM architecture can be observed in the distal region of the muscle.

  14. Virtual Mirror gaming in libraries

    NARCIS (Netherlands)

    Speelman, M.; Kröse, B.; Nijholt, A.; Poppe, R.

    2008-01-01

    This paper presents a study on a natural-interface game in the context of a library. We developed a camera-based Virtual Mirror (VM) game, in which the player can see himself on the screen as if looking at a mirror image. We present an overview of the different aspects of VM games and technologies

  15. Integration and acceleration of virtual microscopy as the key to successful implementation into the routine diagnostic process.

    Science.gov (United States)

    Wienert, Stephan; Beil, Michael; Saeger, Kai; Hufnagl, Peter; Schrader, Thomas

    2009-01-09

    Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation, but is far from routine use in surgical pathology due to its technical requirements and some limitations. A technical problem is the limited bandwidth of a usual network and the resulting delays in transmission rate and presentation time on the screen. In this study, the process of secondary diagnostics was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of the access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching to reduce the presentation and transfer time, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point linearity, changes per request, and number of consecutive requests were calculated to design, develop and evaluate different caching and prefetching strategies. The analysis of the observation paths showed that exact agreement between two image requests is a very rare event, but partial overlap of two requested image areas is found more frequently. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average, a diagnostic path consists of 16 image requests and takes 189 seconds between the first and last image request. The mean linearity was 0.41 and the mean 3-point linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels). If the image parts are stored after JPEG compression
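
    The prefetching idea, warming the cache with tiles adjacent to each request because pathologists pan across a slide in mostly linear paths, can be sketched briefly. The code below is an illustrative toy under assumed names (render_tile standing in for the WSI server call, a 3x3 neighbourhood as the prefetch window), not the study's implementation.

      # Tiles are cached by (level, x, y); a request also warms the
      # neighbours the next panning step is likely to touch.
      tile_cache = {}

      def render_tile(level, x, y):
          return f"tile({level},{x},{y})"          # placeholder for JPEG data

      def get_tile(level, x, y, prefetch=True):
          key = (level, x, y)
          if key not in tile_cache:                 # miss: fetch and store
              tile_cache[key] = render_tile(level, x, y)
          if prefetch:                              # warm the 8 neighbours
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      nkey = (level, x + dx, y + dy)
                      if nkey not in tile_cache:
                          tile_cache[nkey] = render_tile(level, *nkey[1:])
          return tile_cache[key]

      get_tile(2, 10, 10)
      print(len(tile_cache))   # 9: requested tile plus prefetched neighbours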

  16. Integration and acceleration of virtual microscopy as the key to successful implementation into the routine diagnostic process

    Directory of Open Access Journals (Sweden)

    Hufnagl Peter

    2009-01-01

    Full Text Available Abstract Background: Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation, but is far from routine use in surgical pathology due to its technical requirements and some limitations. A technical problem is the limited bandwidth of a usual network and the resulting delays in transmission rate and presentation time on the screen. Methods: In this study, the process of secondary diagnostics was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of the access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching to reduce the presentation and transfer time, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point linearity, changes per request, and number of consecutive requests were calculated to design, develop and evaluate different caching and prefetching strategies. Results: The analysis of the observation paths showed that exact agreement between two image requests is a very rare event, but partial overlap of two requested image areas is found more frequently. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average, a diagnostic path consists of 16 image requests and takes 189 seconds between the first and last image request. The mean linearity was 0.41 and the mean 3-point linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels

  17. MRI demonstrates the extension of juxta-articular venous malformation of the knee and correlates with joint changes

    Energy Technology Data Exchange (ETDEWEB)

    Jans, L. [University of Melbourne, Royal Children's Hospital, Department of Medical Imaging, Melbourne, Victoria (Australia); Ghent University Hospital, Department of Radiology and Medical Imaging, Gent (Belgium); Ditchfield, M.; Jaremko, J.L.; Stephens, N. [University of Melbourne, Royal Children's Hospital, Department of Medical Imaging, Melbourne, Victoria (Australia); Verstraete, K. [Ghent University Hospital, Department of Radiology and Medical Imaging, Gent (Belgium)

    2010-07-15

    Juxta-articular venous malformations (VMs) are uncommon, but may cause early arthropathy of the knee in children and adolescents. We sought to describe the prevalence, extent and initial magnetic resonance imaging (MRI) features of knee arthropathy in children with VM adjacent to the knee joint. Thirty-five patients with VM adjacent to the knee who had MRI performed between 2000 and 2009 were identified through a keyword search of the radiology information system. VM extended to the joint in 17 of the 35 patients (5.4-21.5 years, mean 11.8 years). Most of these 17 patients had joint changes (15/17, 88%), most commonly haemosiderin deposition (14/17, 82%). Other findings included the presence of subchondral bone lesions (eight, 47%), cartilage loss (six, 35%), synovial thickening (six, 35%), marrow oedema (six, 35%), joint effusion (five, 29%), subchondral cysts (five, 29%) and one loose body (6%). VM location and size did not correlate with the degree of articular involvement. Joint changes were present in focal as well as non-discrete VM. We found that the frequency of arthropathy increased with extension of VM into the joint itself. This finding stresses the importance of early MRI evaluation of all juxta-articular VM. (orig.)

  18. Abnormal fear circuitry in Attention Deficit Hyperactivity Disorder: A controlled magnetic resonance imaging study.

    Science.gov (United States)

    Spencer, Andrea E; Marin, Marie-France; Milad, Mohammed R; Spencer, Thomas J; Bogucki, Olivia E; Pope, Amanda L; Plasencia, Natalie; Hughes, Brittany; Pace-Schott, Edward F; Fitzgerald, Maura; Uchida, Mai; Biederman, Joseph

    2017-04-30

    We examined whether non-traumatized subjects with Attention Deficit Hyperactivity Disorder (ADHD) have dysfunctional activation in brain structures mediating fear extinction, possibly explaining the statistical association between ADHD and other disorders characterized by aberrant fear processing such as PTSD. Medication naïve, non-traumatized young adult subjects with (N=27) and without (N=20) ADHD underwent a 2-day fear conditioning and extinction protocol in a 3T functional magnetic resonance imaging (fMRI) scanner. Skin conductance response (SCR) was recorded as a measure of conditioned response. Compared to healthy controls, ADHD subjects had significantly greater insular cortex activation during early extinction, lesser dorsal anterior cingulate cortex (dACC) activation during late extinction, lesser ventromedial prefrontal cortex (vmPFC) activation during late extinction learning and extinction recall, and greater hippocampal activation during extinction recall. Hippocampal and vmPFC deficits were similar to those documented in PTSD subjects compared to traumatized controls without PTSD. Non-traumatized, medication naive adults with ADHD had abnormalities in fear circuits during extinction learning and extinction recall, and some findings were consistent with those previously documented in subjects with PTSD compared to traumatized controls without PTSD. These findings could explain the significant association between ADHD and PTSD as well as impaired emotion regulation in ADHD.

  19. Diagnostic performance of calcification-suppressed coronary CT angiography using rapid kilovolt-switching dual-energy CT.

    Science.gov (United States)

    Yunaga, Hiroto; Ohta, Yasutoshi; Kaetsu, Yasuhiro; Kitao, Shinichiro; Watanabe, Tomomi; Furuse, Yoshiyuki; Yamamoto, Kazuhiro; Ogawa, Toshihide

    2017-07-01

    Multi-detector-row computed tomography angiography (MDCTA) plays an important role in the assessment of patients with suspected coronary artery disease. However, MDCTA tends to overestimate stenosis in calcified coronary artery lesions. The aim of our study was to evaluate the diagnostic performance of calcification-suppressed material density (MD) images produced by using a single-detector single-source dual-energy computed tomography (ssDECT). We enrolled 67 patients with suspected or known coronary artery disease who underwent ssDECT with rapid kilovolt-switching (80 and 140 kVp). Coronary artery stenosis was evaluated on the basis of MD images and virtual monochromatic (VM) images. The diagnostic performance of the two methods for detecting coronary artery disease was compared with that of invasive coronary angiography as a reference standard. We evaluated 239 calcified segments. In all the segments, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy for detecting significant stenosis were respectively 88%, 88%, 75%, 95% and 88% for the MD images, 91%, 71%, 56%, 95% and 77% for the VM images. PPV was significantly higher on the MD images than on the VM images (P < 0.0001). Calcification-suppressed MD images improved PPV and diagnostic performance for calcified coronary artery lesions. • Computed tomography angiography tends to overestimate stenosis in calcified coronary artery. • Dual-energy CT enables us to suppress calcification of coronary artery lesions. • Calcification-suppressed material density imaging reduces false-positive diagnosis of calcified lesion.

  20. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    Science.gov (United States)

    Haruna, J.; Murai, K.; Itoh, J.; Yamada, N.; Hirano, Y.; Fujimori, T.; Homma, T.

    2011-03-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, an FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy, were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective but also the heaviest; the Mg alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system; and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype flywheel was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.
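
    The 3 MJ target fixes the required rotation speed once a rotor geometry is chosen, via E = ½Iω² with I = ½mr² for a solid disc. A minimal sketch under assumed (illustrative) rotor mass and radius, not values from the paper:

```java
// Hedged sketch: rotation speed needed to store a given energy in a
// flywheel modeled as a solid disc. Mass and radius are assumptions.
public class FlywheelEnergy {
    public static void main(String[] args) {
        double targetEnergyJ = 3.0e6; // 3 MJ (~0.8 kWh), as in the study
        double massKg = 100.0;        // assumed rotor mass
        double radiusM = 0.25;        // assumed rotor radius

        double inertia = 0.5 * massKg * radiusM * radiusM;       // kg*m^2
        double omega = Math.sqrt(2.0 * targetEnergyJ / inertia); // rad/s
        double rpm = omega * 60.0 / (2.0 * Math.PI);

        System.out.printf("required speed: %.0f rad/s (about %.0f rpm)%n",
                omega, rpm);
    }
}
```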

  1. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    International Nuclear Information System (INIS)

    Haruna, J; Itoh, J; Murai, K; Yamada, N; Hirano, Y; Homma, T; Fujimori, T

    2011-01-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, an FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy, were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective but also the heaviest; the Mg alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system; and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype flywheel was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.

  2. Progressive attenuation fields: Fast 2D-3D image registration without precomputation

    International Nuclear Information System (INIS)

    Rohlfing, Torsten; Russakoff, Daniel B.; Denzler, Joachim; Mori, Kensaku; Maurer, Calvin R. Jr.

    2005-01-01

    Computation of digitally reconstructed radiograph (DRR) images is the rate-limiting step in most current intensity-based algorithms for the registration of three-dimensional (3D) images to two-dimensional (2D) projection images. This paper introduces and evaluates the progressive attenuation field (PAF), which is a new method to speed up DRR computation. A PAF is closely related to an attenuation field (AF). A major difference is that a PAF is constructed on the fly as the registration proceeds; it does not require any precomputation time, nor does it make any prior assumptions about the patient pose or limit the permissible range of patient motion. A PAF effectively acts as a cache memory for projection values once they are computed, rather than as a lookup table for precomputed projections like standard AFs. We use a cylindrical attenuation field parametrization, which is better suited for many medical applications of 2D-3D registration than the usual two-plane parametrization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using clinical gold-standard spine image data sets from five patients, we demonstrate consistent speedups of intensity-based 2D-3D image registration using PAF DRRs by a factor of 10 over conventional ray casting DRRs with no decrease of registration accuracy or robustness.
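
    The "cache rather than lookup table" idea can be sketched compactly: attenuation values are computed on first access and stored in a hash table keyed by the quantized ray parameters. A minimal sketch; the key layout and the ray-casting placeholder are assumptions, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of a progressive attenuation field: ray attenuation values
// are computed on demand and cached in a hash table keyed by quantized
// (cylindrical) ray parameters, so later DRR computations reuse them.
public class ProgressiveAttenuationField {
    private final Map<Long, Double> cache = new HashMap<>();

    // Pack the two quantized ray parameters into one hash key.
    private static long key(int angleBin, int heightBin) {
        return ((long) angleBin << 32) | (heightBin & 0xffffffffL);
    }

    public double attenuation(int angleBin, int heightBin) {
        return cache.computeIfAbsent(key(angleBin, heightBin),
                k -> castRay(angleBin, heightBin)); // expensive only once
    }

    private double castRay(int angleBin, int heightBin) {
        // Placeholder for summing attenuation along the ray through the volume.
        return Math.abs(Math.sin(angleBin * 0.01) * Math.cos(heightBin * 0.01));
    }

    public static void main(String[] args) {
        ProgressiveAttenuationField paf = new ProgressiveAttenuationField();
        double first = paf.attenuation(120, 45);  // computed by ray casting
        double second = paf.attenuation(120, 45); // served from the cache
        System.out.println(first == second);      // true
    }
}
```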

  3. Acute non-contact anterior cruciate ligament tears are associated with relatively increased vastus medialis to semimembranosus cross-sectional area ratio: a case-control retrospective MR study

    International Nuclear Information System (INIS)

    Wieschhoff, Ged G.; Mandell, Jacob C.; Czuczman, Gregory J.; Nikac, Violeta; Shah, Nehal; Smith, Stacy E.

    2017-01-01

    Hamstring muscle deficiency is increasingly recognized as a risk factor for anterior cruciate ligament (ACL) tears. The purpose of this study is to evaluate the vastus medialis to semimembranosus cross-sectional area (VM:SM CSA) ratio on magnetic resonance imaging (MRI) in patients with ACL tears compared to controls. One hundred knee MRIs of acute ACL tear patients and 100 age-, sex-, and side-matched controls were included. Mechanism of injury, contact versus non-contact, was determined for each ACL tear subject. The VM:SM CSA was measured on individual axial slices with a novel method using image-processing software. One reader measured all 200 knees and the second reader measured 50 knees at random to assess inter-reader variability. The intraclass correlation coefficient (ICC) was calculated to evaluate for correlation between readers. T-tests were performed to evaluate for differences in VM:SM CSA ratios between the ACL tear group and control group. The ICC for agreement between the two readers was 0.991 (95% confidence interval 0.984-0.995). Acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.44 vs. 1.28; p = 0.005). Non-contact acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.48 vs. 1.20; p = 0.003), whereas contact acute ACL tear patients do not (1.23 vs. 1.26; p = 0.762). Acute non-contact ACL tears are associated with increased VM:SM CSA ratios, which may imply a relative deficiency in hamstring strength. This study also demonstrates a novel method of measuring the relative CSA of muscles on MRI. (orig.)

  4. Acute non-contact anterior cruciate ligament tears are associated with relatively increased vastus medialis to semimembranosus cross-sectional area ratio: a case-control retrospective MR study.

    Science.gov (United States)

    Wieschhoff, Ged G; Mandell, Jacob C; Czuczman, Gregory J; Nikac, Violeta; Shah, Nehal; Smith, Stacy E

    2017-11-01

    Hamstring muscle deficiency is increasingly recognized as a risk factor for anterior cruciate ligament (ACL) tears. The purpose of this study is to evaluate the vastus medialis to semimembranosus cross-sectional area (VM:SM CSA) ratio on magnetic resonance imaging (MRI) in patients with ACL tears compared to controls. One hundred knee MRIs of acute ACL tear patients and 100 age-, sex-, and side-matched controls were included. Mechanism of injury, contact versus non-contact, was determined for each ACL tear subject. The VM:SM CSA was measured on individual axial slices with a novel method using image-processing software. One reader measured all 200 knees and the second reader measured 50 knees at random to assess inter-reader variability. The intraclass correlation coefficient (ICC) was calculated to evaluate for correlation between readers. T-tests were performed to evaluate for differences in VM:SM CSA ratios between the ACL tear group and control group. The ICC for agreement between the two readers was 0.991 (95% confidence interval 0.984-0.995). Acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.44 vs. 1.28; p = 0.005). Non-contact acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.48 vs. 1.20; p = 0.003), whereas contact acute ACL tear patients do not (1.23 vs. 1.26; p = 0.762). Acute non-contact ACL tears are associated with increased VM:SM CSA ratios, which may imply a relative deficiency in hamstring strength. This study also demonstrates a novel method of measuring the relative CSA of muscles on MRI.

  5. Acute non-contact anterior cruciate ligament tears are associated with relatively increased vastus medialis to semimembranosus cross-sectional area ratio: a case-control retrospective MR study

    Energy Technology Data Exchange (ETDEWEB)

    Wieschhoff, Ged G.; Mandell, Jacob C.; Czuczman, Gregory J.; Nikac, Violeta; Shah, Nehal; Smith, Stacy E. [Brigham and Women's Hospital, Harvard Medical School, Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Boston, MA (United States)]

    2017-11-15

    Hamstring muscle deficiency is increasingly recognized as a risk factor for anterior cruciate ligament (ACL) tears. The purpose of this study is to evaluate the vastus medialis to semimembranosus cross-sectional area (VM:SM CSA) ratio on magnetic resonance imaging (MRI) in patients with ACL tears compared to controls. One hundred knee MRIs of acute ACL tear patients and 100 age-, sex-, and side-matched controls were included. Mechanism of injury, contact versus non-contact, was determined for each ACL tear subject. The VM:SM CSA was measured on individual axial slices with a novel method using image-processing software. One reader measured all 200 knees and the second reader measured 50 knees at random to assess inter-reader variability. The intraclass correlation coefficient (ICC) was calculated to evaluate for correlation between readers. T-tests were performed to evaluate for differences in VM:SM CSA ratios between the ACL tear group and control group. The ICC for agreement between the two readers was 0.991 (95% confidence interval 0.984-0.995). Acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.44 vs. 1.28; p = 0.005). Non-contact acute ACL tear patients have an increased VM:SM CSA ratio compared to controls (1.48 vs. 1.20; p = 0.003), whereas contact acute ACL tear patients do not (1.23 vs. 1.26; p = 0.762). Acute non-contact ACL tears are associated with increased VM:SM CSA ratios, which may imply a relative deficiency in hamstring strength. This study also demonstrates a novel method of measuring the relative CSA of muscles on MRI. (orig.)

  6. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  7. PROOF as a service on the cloud: a virtual analysis facility based on the CernVM ecosystem

    International Nuclear Information System (INIS)

    Berzano, D; Blomer, J; Buncic, P; Charalampidis, I; Ganis, G; Lestaris, G; Meusel, R

    2014-01-01

    PROOF, the Parallel ROOT Facility, is a ROOT-based framework which enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be far less straightforward. Recently, considerable effort has gone into providing generic PROOF analysis facilities with zero configuration, which has also benefited stability and scalability and made deployment feasible even for the end user. Since a growing share of large-scale computing resources is nowadays made available by Cloud providers in virtualized form, we have developed the Virtual PROOF-based Analysis Facility: a cluster appliance combining the solid CernVM ecosystem and PoD (PROOF on Demand), ready to be deployed on the Cloud and leveraging Cloud-specific features such as elasticity. We will show how this approach is effective both for sysadmins, who will have little or no configuration to do to run it on their Clouds, and for the end users, who are ultimately in full control of their PROOF cluster and can even easily restart it by themselves in the unfortunate event of a major failure. We will also show how elasticity leads to a more optimal and uniform usage of Cloud resources.

  8. Using caching and optimization techniques to improve performance of the Ensembl website

    Directory of Open Access Journals (Sweden)

    Smith James A

    2010-05-01

    Full Text Available Background: The Ensembl web site has provided access to genomic information for almost 10 years. During this time the amount of data available through Ensembl has grown dramatically. At the same time, the World Wide Web itself has become a dramatically more important component of the scientific workflow and the way that scientists share and access data and scientific information. Since 2000, the Ensembl web interface has had three major updates and numerous smaller updates. These have largely been in response to expanding data types and valuable representations of existing data types. In 2007 it was realised that a radical new approach would be required in order to serve the project's future requirements, and development therefore focused on identifying suitable web technologies for implementation in the 2008 site redesign. Results: By comparing the Ensembl website to well-known "Web 2.0" sites, we were able to identify two main areas in which cutting-edge technologies could be advantageously deployed: server efficiency and interface latency. We then evaluated the performance of the existing site using browser-based tools and Apache benchmarking, and selected appropriate technologies to overcome any issues found. Solutions included optimization of the Apache web server, introduction of caching technologies and widespread implementation of AJAX code. These improvements were successfully deployed on the Ensembl website in late 2008 and early 2009. Conclusions: Web 2.0 technologies provide a flexible and efficient way to access the terabytes of data now available from Ensembl, enhancing the user experience through improved website responsiveness and a rich, interactive interface.
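
    The server-side caching described here boils down to rendering an expensive page fragment once and reusing it until it expires. A minimal sketch of such a time-to-live cache; the class and key names are illustrative, and this stands in for dedicated tools such as memcached rather than Ensembl's actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hedged sketch of a TTL fragment cache: render once, reuse until stale.
public class FragmentCache {
    private static final class Entry {
        final String html;
        final long expiresAtMillis;
        Entry(String html, long expiresAtMillis) {
            this.html = html;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public FragmentCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached fragment, re-rendering only when missing or stale.
    public String get(String key, Supplier<String> render) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || e.expiresAtMillis < now) {
            e = new Entry(render.get(), now + ttlMillis);
            cache.put(key, e);
        }
        return e.html;
    }

    public static void main(String[] args) {
        FragmentCache cache = new FragmentCache(60_000); // 60 s TTL
        String page = cache.get("gene:ENSG00000139618",  // illustrative key
                () -> "<div>...expensively rendered view...</div>");
        System.out.println(page);
    }
}
```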

  9. Posttraumatic stress disorder: the role of medial prefrontal cortex and amygdala.

    Science.gov (United States)

    Koenigs, Michael; Grafman, Jordan

    2009-10-01

    Posttraumatic stress disorder (PTSD) is characterized by recurrent distressing memories of an emotionally traumatic event. In this review, the authors present neuroscientific data highlighting the function of two brain areas--the amygdala and ventromedial prefrontal cortex (vmPFC)--in PTSD and related emotional processes. A convergent body of human and nonhuman studies suggests that the amygdala mediates the acquisition and expression of conditioned fear and the enhancement of emotional memory, whereas the vmPFC mediates the extinction of conditioned fear and the volitional regulation of negative emotion. It has been theorized that the vmPFC exerts inhibition on the amygdala, and that a defect in this inhibition could account for the symptoms of PTSD. This theory is supported by functional imaging studies of PTSD patients, who exhibit hypoactivity in the vmPFC but hyperactivity in the amygdala. A recent study of brain-injured and trauma-exposed combat veterans confirms that amygdala damage reduces the likelihood of developing PTSD. But contrary to the prediction of the top-down inhibition model, vmPFC damage also reduces the likelihood of developing PTSD. The putative roles of the amygdala and the vmPFC in the pathophysiology of PTSD, as well as implications for potential treatments, are discussed in light of these results.

  10. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.

    2015-09-13

    Today’s high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between processor and memory speed will further exacerbate the problem and has led the scientific community to revisit numerical software implementations to better suit the underlying memory subsystem for performance (data reuse) as well as energy efficiency (data locality). The authors propose a novel multi-threaded wavefront diamond blocking (MWD) implementation in the context of stencil computations, which represent the core operation of seismic imaging in the oil industry. The stencil diamond formulation introduces temporal blocking for high data reuse in the upper cache levels. The wavefront optimization technique ensures data locality by allowing multiple threads to share common adjacent stencil points. Therefore, MWD is able to take up the aforementioned challenges by alleviating the cache size limitation and releasing pressure from the memory bandwidth. Performance comparisons are shown against the optimized 25-point stencil standard seismic imaging scheme using spatial and temporal blocking and demonstrate the effectiveness of MWD.
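
    The core of temporal blocking can be illustrated on a much simpler kernel than the 25-point seismic stencil: a tile is advanced through several time steps while it is still cache-resident, rather than sweeping the full array once per step. A deliberately simplified sketch that omits the diamond shaping, wavefront threading and inter-tile halo handling of MWD:

```java
// Hedged sketch of temporal blocking for a 1D 3-point stencil. Each tile is
// advanced through all time steps before moving on; the shrinking interior
// bounds mimic the dependency region a real diamond/trapezoid tiling tracks.
public class TemporalBlocking {
    static void step(double[] src, double[] dst, int lo, int hi) {
        for (int i = lo; i < hi; i++) {
            dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0;
        }
    }

    public static void main(String[] args) {
        int n = 1 << 20, timeSteps = 4, tile = 4096;
        double[] a = new double[n + 2], b = new double[n + 2]; // +2: halo cells
        java.util.Arrays.fill(a, 1.0);

        for (int lo = 1; lo < n + 1; lo += tile) {
            int hi = Math.min(lo + tile, n + 1);
            double[] src = a, dst = b;
            for (int t = 0; t < timeSteps; t++) {
                step(src, dst, lo + t, hi - t); // interior shrinks per step
                double[] tmp = src; src = dst; dst = tmp; // swap buffers
            }
        }
        System.out.println("done");
    }
}
```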

  11. Development of White Matter Microstructure and Intrinsic Functional Connectivity Between the Amygdala and Ventromedial Prefrontal Cortex: Associations With Anxiety and Depression.

    Science.gov (United States)

    Jalbrzikowski, Maria; Larsen, Bart; Hallquist, Michael N; Foran, William; Calabro, Finnegan; Luna, Beatriz

    2017-10-01

    Connectivity between the amygdala and ventromedial prefrontal cortex (vmPFC) is compromised in multiple psychiatric disorders, many of which emerge during adolescence. To identify to what extent the deviations in amygdala-vmPFC maturation contribute to the onset of psychiatric disorders, it is essential to characterize amygdala-vmPFC connectivity changes during typical development. Using an accelerated cohort longitudinal design (1-3 time points, 10-25 years old, n = 246), we characterized developmental changes of the amygdala-vmPFC subregion functional and structural connectivity using resting-state functional magnetic resonance imaging and diffusion-weighted imaging. Functional connectivity between the centromedial amygdala and rostral anterior cingulate cortex (rACC), anterior vmPFC, and subgenual cingulate significantly decreased from late childhood to early adulthood in male and female subjects. Age-associated decreases were also observed between the basolateral amygdala and the rACC. Importantly, these findings were replicated in a separate cohort (10-22 years old, n = 327). Similarly, structural connectivity, as measured by quantitative anisotropy, significantly decreased with age in the same regions. Functional connectivity between the centromedial amygdala and the rACC was associated with structural connectivity in these same regions during early adulthood (22-25 years old). Finally, a novel time-varying coefficient analysis showed that increased centromedial amygdala-rACC functional connectivity was associated with greater anxiety and depression symptoms during early adulthood, while increased structural connectivity in centromedial amygdala-anterior vmPFC white matter was associated with greater anxiety/depression during late childhood. Specific developmental periods of functional and structural connectivity between the amygdala and the prefrontal systems may contribute to the emergence of anxiety and depressive symptoms and may play a critical role in

  12. Generation of virtual monochromatic CBCT from dual kV/MV beam projections

    International Nuclear Information System (INIS)

    Li, Hao; Liu, Bo; Yin, Fang-Fang

    2013-01-01

    Purpose: To develop a novel on-board imaging technique which allows generation of virtual monochromatic (VM) cone-beam CT (CBCT) with a selected energy from combined kilovoltage (kV)/megavoltage (MV) beam projections. Methods: With the orthogonal kV/MV imaging hardware equipped in modern linear accelerators, both MV projections (gantry angles 0°–100°) and kV projections (90°–200°) were acquired as the gantry rotated a total of 110°. A selected range of overlap projections between 90° and 100° were then decomposed into two material projections using experimentally determined parameters from orthogonally stacked aluminum and acrylic step-wedges. Given attenuation coefficients of aluminum and acrylic at a predetermined energy, one set of VM projections could be synthesized from two corresponding sets of decomposed projections. Two linear functions were generated using projection information at overlap angles to convert kV and MV projections at nonoverlap angles to approximate VM projections for CBCT reconstruction. The contrast-to-noise ratios (CNRs) were calculated for different inserts in VM CBCTs of a CatPhan phantom with various selected energies and compared with those in kV and MV CBCTs. The effect of the number of overlap projections on CNR was evaluated. Additionally, the effect of beam orientation was studied by scanning the CatPhan sandwiched with two 5 cm solid-water phantoms on both lateral sides and an electronic density phantom with two metal bolt inserts. Results: Proper selection of VM energy [30 and 40 keV for low-density polyethylene (LDPE) and polymethylpentene, 2 MeV for Delrin] provided comparable or even better CNR results compared with kV or MV CBCT. An increased number of overlap kV and MV projections demonstrated only marginal improvements of CNR for different inserts (with the exception of LDPE), and therefore one projection overlap was found to be sufficient for the CatPhan study. It was also evident that the optimal CBCT image
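
    The synthesis step described above reduces, per pixel, to a linear combination of the two decomposed material projections weighted by the material attenuation coefficients at the selected energy. A minimal sketch with placeholder coefficients, not calibrated values:

```java
// Hedged sketch: synthesizing a virtual monochromatic projection from two
// material-density projections. Coefficient values are placeholders.
public class VirtualMonochromaticProjection {
    static double[] synthesize(double[] aluminum, double[] acrylic,
                               double muAl, double muAcrylic) {
        double[] vm = new double[aluminum.length];
        for (int i = 0; i < vm.length; i++) {
            // VM pixel = weighted sum of the decomposed material thickness maps.
            vm[i] = muAl * aluminum[i] + muAcrylic * acrylic[i];
        }
        return vm;
    }

    public static void main(String[] args) {
        double[] al = {0.1, 0.4, 0.2}; // decomposed aluminum projection
        double[] ac = {1.2, 0.9, 1.5}; // decomposed acrylic projection
        // Placeholder attenuation coefficients at a selected energy (1/cm):
        double[] vm = synthesize(al, ac, 0.8, 0.25);
        System.out.println(java.util.Arrays.toString(vm));
    }
}
```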

  13. Relative sea level in the Western Mediterranean basin: A regional test of the ICE-7G_NA (VM7) model and a constraint on late Holocene Antarctic deglaciation

    Science.gov (United States)

    Roy, Keven; Peltier, W. R.

    2018-03-01

    The Mediterranean Basin is a region of special interest in the study of past and present relative sea level evolution, given its location south of the ice sheets that covered large fractions of Northern Europe during the last glaciation, the large number of biological, geological and archaeological sea level indicators that have been retrieved from its coastal regions, as well as its high density of modern coastal infrastructure. Models of the Glacial Isostatic Adjustment (GIA) process provide reconstructions of past relative sea level evolution, and can be tested for validity against past sea level indicators from the region. It is demonstrated herein that the latest ICE-7G_NA (VM7) model of the GIA process, the North American component of which was refined using a full suite of geophysical observables, is able to reconcile the vast majority of uniformly analyzed relative sea level constraints available for the Western part of the Mediterranean basin, a region to which it was not tuned. We also revisit herein the previously published interpretations of relative sea level information obtained from Roman-era coastal Mediterranean "fish tanks", analyze the far-field influence of the rate of late Holocene Antarctic ice sheet melting history on the exceptionally detailed relative sea level history available from southern Tunisia, and extend the analysis to complementary constraints on the history of Antarctic ice-sheet melting available from islands in the equatorial Pacific Ocean. The analyses reported herein provide strong support for the global "exportability" of the ICE-7G_NA (VM7) model, a result that speaks directly to the ability of spherically symmetric models of the internal viscoelastic structure to explain globally distributed observations, while also identifying isolated regions of remaining misfit which will benefit from further study.

  14. Sclerotherapy of Diffuse and Infiltrative Venous Malformations of the Hand and Distal Forearm

    Energy Technology Data Exchange (ETDEWEB)

    Guevara, Carlos J., E-mail: guevarac@mir.wustl.edu; Gonzalez-Araiza, Guillermo; Kim, Seung K.; Sheybani, Elizabeth; Darcy, Michael D. [Washington University School of Medicine, Mallinckrodt Institute of Radiology (United States)

    2016-05-15

    Purpose: Venous malformations (VM) involving the hand and forearm often lead to chronic pain and dysfunction, and the threshold for treatment is high due to the risk of nerve and skin damage, functional deterioration and compartment syndrome. The purpose of this study is to demonstrate that sclerotherapy of diffuse and infiltrative VM of the hand is a safe and effective therapy. Materials and Methods: A retrospective review of all patients with diffuse and infiltrative VM of the hand and forearm treated with sclerotherapy from 2001 to 2014 was conducted. All VM were diagnosed during the clinical visit by a combination of physical examination and imaging. Sclerotherapy was performed under imaging guidance using ethanol and/or sodium tetradecyl sulfate foam. Clinical notes were reviewed for signs of treatment response and complications, including skin blistering and nerve injury. Results: Seventeen patients underwent a total of 40 sclerotherapy procedures. Patients were treated for pain (76 %), swelling (29 %) or paresthesias (6 %). Treatments utilized ethanol (70 %), sodium tetradecyl sulfate foam (22.5 %) or a combination of these (7.5 %). Twenty-four percent of patients had complete resolution of symptoms, 24 % had partial relief of symptoms without need for further intervention, and 35 % had some improvement after initial treatment but required additional treatments. Two skin complications were noted, both of which resolved. No motor or sensory loss was reported. Conclusion: Sclerotherapy is a safe and effective therapy for VM of the hand, with over 83 % of patients experiencing relief.

  15. Sclerotherapy of Diffuse and Infiltrative Venous Malformations of the Hand and Distal Forearm

    International Nuclear Information System (INIS)

    Guevara, Carlos J.; Gonzalez-Araiza, Guillermo; Kim, Seung K.; Sheybani, Elizabeth; Darcy, Michael D.

    2016-01-01

    Purpose: Venous malformations (VM) involving the hand and forearm often lead to chronic pain and dysfunction, and the threshold for treatment is high due to the risk of nerve and skin damage, functional deterioration and compartment syndrome. The purpose of this study is to demonstrate that sclerotherapy of diffuse and infiltrative VM of the hand is a safe and effective therapy. Materials and Methods: A retrospective review of all patients with diffuse and infiltrative VM of the hand and forearm treated with sclerotherapy from 2001 to 2014 was conducted. All VM were diagnosed during the clinical visit by a combination of physical examination and imaging. Sclerotherapy was performed under imaging guidance using ethanol and/or sodium tetradecyl sulfate foam. Clinical notes were reviewed for signs of treatment response and complications, including skin blistering and nerve injury. Results: Seventeen patients underwent a total of 40 sclerotherapy procedures. Patients were treated for pain (76 %), swelling (29 %) or paresthesias (6 %). Treatments utilized ethanol (70 %), sodium tetradecyl sulfate foam (22.5 %) or a combination of these (7.5 %). Twenty-four percent of patients had complete resolution of symptoms, 24 % had partial relief of symptoms without need for further intervention, and 35 % had some improvement after initial treatment but required additional treatments. Two skin complications were noted, both of which resolved. No motor or sensory loss was reported. Conclusion: Sclerotherapy is a safe and effective therapy for VM of the hand, with over 83 % of patients experiencing relief.

  16. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto-calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
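
    The ℓ1 part of the objective is minimized by repeatedly applying a soft-thresholding (shrinkage) operator to the wavelet coefficients. A minimal sketch of that operator on scalar values; the actual reconstruction thresholds joint cross-channel magnitudes:

```java
// Hedged sketch of the soft-thresholding operator used in iterative
// soft-thresholding: coefficients are shrunk toward zero by lambda.
public class SoftThreshold {
    static double soft(double x, double lambda) {
        double mag = Math.abs(x) - lambda;
        return mag <= 0 ? 0.0 : Math.signum(x) * mag;
    }

    public static void main(String[] args) {
        double lambda = 0.5; // regularization weight (illustrative)
        double[] coeffs = {-1.7, 0.3, 0.9, -0.2};
        for (int i = 0; i < coeffs.length; i++) {
            coeffs[i] = soft(coeffs[i], lambda); // shrink-toward-zero update
        }
        // Approximately [-1.2, 0.0, 0.4, 0.0]
        System.out.println(java.util.Arrays.toString(coeffs));
    }
}
```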

  17. Miniature Variable Pressure Scanning Electron Microscope for In-Situ Imaging and Chemical Analysis

    Science.gov (United States)

    Gaskin, Jessica A.; Jerman, Gregory; Gregory, Don; Sampson, Allen R.

    2012-01-01

    NASA Marshall Space Flight Center (MSFC) is leading an effort to develop a Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for in-situ imaging and chemical analysis of uncoated samples. This instrument development will be geared towards operation on Mars and builds on a previous MSFC design of a mini-SEM for the moon (funded through the NASA Planetary Instrument Definition and Development Program). Because Mars has a dramatically different environment than the moon, modifications to the MSFC lunar mini-SEM are necessary. Mainly, the higher atmospheric pressure calls for the use of an electron gun that can operate at High Vacuum, rather than Ultra-High Vacuum. The presence of a CO2-rich atmosphere also allows for the incorporation of a variable pressure system that enables the in-situ analysis of nonconductive geological specimens. Preliminary testing of Mars meteorites in a commercial Environmental SEM™ (FEI) confirms the usefulness of low-current/low-accelerating-voltage imaging and highlights the advantages of using the Mars atmosphere for environmental imaging. The unique capabilities of the MVP-SEM make it an ideal tool for pursuing key scientific goals of NASA's Flagship Mission Max-C: to perform in-situ science and collect and cache samples in preparation for sample return from Mars.

  18. Large-Scale Image Analytics Using Deep Learning

    Science.gov (United States)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic Map Reduce (EMR) for image feature extraction, and the memory optimized Elastic Cloud Compute (EC2) for the learning algorithm.

  19. Spinal tract pathology in AIDS: postmortem MRI correlation with neuropathology

    Energy Technology Data Exchange (ETDEWEB)

    Santosh, C.G. [City Hospital, Edinburgh (United Kingdom). MRI Unit; Bell, J.E. [Western General Hospital, Edinburgh (United Kingdom). Neuropathology Lab.; Best, J.J.K. [City Hospital, Edinburgh (United Kingdom). MRI Unit

    1995-02-01

    Vacuolar myelopathy (VM) and tract pallor are poorly understood spinal tract abnormalities in patients with the acquired immunodeficiency syndrome (AIDS). We studied the ability of magnetic resonance imaging (MRI) to detect these changes in spinal cord specimens postmortem and whether criteria could be formulated which would allow these conditions to be differentiated from other lesions of the spinal cord in AIDS, such as lymphoma, cytomegalovirus (CMV) and human immunodeficiency virus (HIV) myelitis. We imaged 38 postmortem specimens of spinal cord. The MRI studies were interpreted blind. The specimens included cases of VM, myelin pallor, CMV myeloradiculitis, HIV myelitis and lymphoma, as well as normal cords, both HIV+ve and HIV-ve. MRI showed abnormal signal, suggestive of tract pathology, in 10 of the 14 cases with histopathological evidence of tract changes. The findings in VM and tract pallor on proton-density and T2-weighted MRI were increased signal from the affected white-matter tracts, present on multiple contiguous slices and symmetrical in most cases. The pattern was sufficiently distinct to differentiate spinal tract pathology from other spinal cord lesions in AIDS. (orig.)

  20. Research of aerial imaging spectrometer data acquisition technology based on USB 3.0

    Science.gov (United States)

    Huang, Junze; Wang, Yueming; He, Daogang; Yu, Yanan

    2016-11-01

    With the emergence of UAV (unmanned aerial vehicle) platforms for aerial imaging spectrometers, research on the spectrometer DAS (data acquisition system) faces new challenges. Due to platform limitations and other factors, the DAS must be small, lightweight, low-cost and universal. Traditional PCIe-based DAS systems are expensive, bulky, non-universal and do not support plug-and-play, and have therefore been unable to support the wider adoption of aerial imaging spectrometers. To solve these problems, the new data acquisition scheme is based on the USB 3.0 interface. USB 3.0 offers a small, lightweight, low-cost and universal solution: its theoretical transmission rate is up to 5 Gbps, and the GPIF programming interface achieves an effective data bandwidth of 3.2 Gbps, which fully meets the data-rate requirements of the aerial imaging spectrometer. The scheme uses the slave-FIFO asynchronous data transmission mode between the FPGA and the USB3014 interface chip. First, the system collects spectral data from the TLK2711 high-speed serial interface chip. The FPGA then buffers the data in a DDR2 cache using ping-pong processing. Finally, the USB3014 interface chip transfers the data via automatic DMA and uploads it to a PC over a USB 3.0 cable. During the manufacture of the aerial imaging spectrometer, the DAS supports image acquisition, transmission, storage and display, providing the test and verification functions the instrument requires. Testing shows that the system runs stably with no data loss, and that the average transmission speed and SSD write speed stabilize at 1.28 Gbps. Consequently, this data acquisition system meets the application requirements of the aerial imaging spectrometer.
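
    The ping-pong buffering between the FPGA and the DDR2 cache can be illustrated with two buffers that alternate between a producer (acquisition) and a consumer (upload). A minimal thread-based sketch standing in for the FPGA logic; buffer sizes and frame counts are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch of ping-pong (double) buffering: one buffer is filled
// while the other is drained, then they swap roles.
public class PingPongBuffer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> free = new ArrayBlockingQueue<>(2);
        BlockingQueue<byte[]> full = new ArrayBlockingQueue<>(2);
        free.put(new byte[4096]); // buffer "ping"
        free.put(new byte[4096]); // buffer "pong"

        Thread producer = new Thread(() -> {
            try {
                for (int frame = 0; frame < 8; frame++) {
                    byte[] buf = free.take(); // claim an empty buffer
                    buf[0] = (byte) frame;    // stand-in for sensor data
                    full.put(buf);            // hand it to the consumer
                }
            } catch (InterruptedException ignored) { }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int frame = 0; frame < 8; frame++) {
                    byte[] buf = full.take(); // take a filled buffer
                    System.out.println("uploading frame " + buf[0]);
                    free.put(buf);            // recycle it
                }
            } catch (InterruptedException ignored) { }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```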

  1. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  2. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
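
    The S-NDF itself can be pictured as a small per-pixel histogram of particle normals: splatting accumulates normals into direction bins, and re-lighting later evaluates shading once per occupied bin instead of touching the particles again. A minimal sketch with coarse 1D azimuthal binning; a real S-NDF uses a finer 2D parameterization on the GPU:

```java
// Hedged sketch of a screen-space NDF: per-pixel histograms of normals.
public class ScreenSpaceNdf {
    static final int BINS = 8; // azimuthal bins; a real S-NDF uses a 2D layout

    static int bin(double nx, double ny) {
        double angle = Math.atan2(ny, nx);                    // [-pi, pi]
        int b = (int) ((angle + Math.PI) / (2 * Math.PI) * BINS);
        return Math.min(b, BINS - 1);
    }

    public static void main(String[] args) {
        float[][] ndf = new float[4 * 4][BINS]; // 4x4 "screen", one NDF per pixel

        // Splat a few particle normals into pixel (1, 2).
        double[][] normals = {{1, 0}, {0.7, 0.7}, {0.6, 0.8}};
        int pixel = 2 * 4 + 1;
        for (double[] n : normals) {
            ndf[pixel][bin(n[0], n[1])] += 1.0f;
        }

        // Re-lighting would evaluate shading once per occupied bin, weighted
        // by its count, with no need to re-rasterize the particle data.
        System.out.println(java.util.Arrays.toString(ndf[pixel]));
    }
}
```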

  3. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and, notably, science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments to successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)
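
    The cache-aware micro-scheduling idea can be reduced to a matching rule: prefer an idle node that already holds a task's input block in its local cache. A minimal sketch in which a plain map stands in for the DHT the agents share; all names are illustrative, not the Task Queue's actual interfaces:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of cache-aware job matching: nodes advertise their cached
// data blocks, and tasks are assigned preferentially to data-local nodes.
public class CacheAwareMatching {
    static String assign(String inputBlock, Map<String, Set<String>> nodeCaches,
                         List<String> idleNodes) {
        for (String node : idleNodes) {
            if (nodeCaches.getOrDefault(node, Set.of()).contains(inputBlock)) {
                return node; // data-local match: no transfer needed
            }
        }
        return idleNodes.get(0); // fall back to any idle node
    }

    public static void main(String[] args) {
        Map<String, Set<String>> caches = Map.of(
                "nodeA", Set.of("block-17", "block-42"),
                "nodeB", Set.of("block-03"));
        System.out.println(assign("block-42", caches, List.of("nodeB", "nodeA")));
        // -> nodeA, because it already caches the input block
    }
}
```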

  4. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    Science.gov (United States)

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. The visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity while increasing visibility of negative content to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. It may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally-evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.

  5. A province of many eyes – Rear Window and Caché: when the city discloses secrets through the cinema

    Directory of Open Access Journals (Sweden)

    Eliana Kuster

    2009-06-01

    Full Text Available In the city, everyone sees. In the city, everyone is seen. The gaze and its related questions (what to see, how to see, how to interpret what is seen) have been central questions of urban space since the nineteenth century, with the growth of cities and the phenomenon of the crowd. The gaze therefore becomes crucial to the urban dweller, who seeks to recognize in the other, the stranger, the signals of friendship or danger. This essay investigates the importance of the gaze in the city through two films: Rear Window, by Alfred Hitchcock (1954), and Caché, by Michael Haneke (2005). In the first film, the characters watch the city. In the other, they are watched by it. The two films show the two extremes of the same process: social life transformed into spectacle. And the cinema plays one of its principal roles: constructing representations of human lives in the city.

  6. TOWARDS ENERGY-AWARE CODING PRACTICES FOR ANDROID

    Directory of Open Access Journals (Sweden)

    João SARAIVA

    2018-03-01

    Full Text Available This paper studies how the use of different coding practices when developing Android applications influences energy consumption. We consider two common Java/Android programming practices, namely string operations and (non-)cached image loading, and we show the energy profile of different coding practices for each. For string operations, we compare the performance of the standard String class to that of the StringBuilder class, while for the second practice we evaluate the benefits of image caching with asynchronous loading. We measure the energy consumption of the example applications externally using the Trepn profiler application by Qualcomm. Our preliminary results show that the selected coding practices do significantly affect energy consumption; in the particular cases of our selection, the difference varies between 20% and 50%.
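
    A minimal sketch of the string-building comparison: repeated String concatenation copies the accumulated string on every step, while StringBuilder appends into one growing buffer. The workload size is arbitrary and the timings merely indicative, not the paper's measurement setup:

```java
// Hedged sketch: String concatenation vs. StringBuilder appends.
public class StringBuilding {
    public static void main(String[] args) {
        int n = 20_000;

        long t0 = System.nanoTime();
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x"; // each += copies the whole string so far: O(n^2) total
        }
        long naiveMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++) {
            sb.append('x'); // amortized O(1) per append
        }
        String built = sb.toString();
        long builderMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.printf("String: %d ms, StringBuilder: %d ms, equal: %b%n",
                naiveMs, builderMs, s.equals(built));
    }
}
```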

  7. Industrial photography between documentary image and advertising image

    Directory of Open Access Journals (Sweden)

    Régis Huguenin

    2009-12-01

    Full Text Available The Suchard chocolate factory in Neuchâtel (Switzerland) produced photographs of very diverse kinds, which it circulated within more or less restricted circles. In 1967 it inaugurated an automated warehouse, the culmination of an almost complete restructuring of production, documented by photographs taken before and during the open-house days. By examining the iconographic documents recording the inauguration of an industrial building at Suchard, we show how the image produces a reality that it does not automatically embody. Things are shown or hidden, accentuated or attenuated, according to the objectives pursued. The same holds for the captions accompanying the photographs, all centred on rationalization but never mentioning the social consequences of the new production strategy. The oscillation between document and advertisement is directly bound up with time: the time of the photographer's work, of the selection by the company, of the perception by the employees or the public, and the time of filing and forgetting before that of the exhumation and reuse of the documents by the company for a commemoration.

  8. Suspense, guilt and videotapes: Michael Haneke's Caché/Hidden

    Directory of Open Access Journals (Sweden)

    Miguel Martínez-Cabeza

    2011-12-01

    Full Text Available Caché/Hidden (2005) is, within Michael Haneke's filmography, the most outstanding example of the synthesis of the Austrian filmmaker's formal and ideological approaches. This article analyses the film as a cinematic manifesto and as an exploitation of genre conventions to construct a model of the reflective spectator. Investigating how the director sets up and then abandons the techniques of suspense offers keys to explaining the almost unanimous critical acclaim and the much less homogeneous response of audiences. The trigger of the plot, the videotapes received by the Laurents, is a direct allusion to David Lynch's Lost Highway (1997); nevertheless, the mystery of who is behind the video surveillance loses interest in relation to the feeling of guilt it unleashes in the protagonist. The childhood episode of jealousy and revenge against an Algerian boy, and the adult Georges's attitude, represent an allegory of France's relationship with its colonial past that Haneke's narrative likewise leaves unresolved. It is precisely the formal openness with which the film (de)structures current questions, such as the boundary between individual and collective responsibility, that shapes a spectator as distanced from the diegesis as he is aware of his own role as observer.

  9. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform Google Compute Engine (GCE) the upload of users' virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  10. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Full Text Available Background. This article describes the implementation of software for retrieving regions of interest (ROIs) in 3D medical images, tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technology stack is based on open-source, cross-platform solutions. The storage system is implemented on MariaDB (an open-source fork of MySQL with PL/SQL extensions), and Python 2.7 scripting is used to automate the extract-transform-load operations. The computational core is written in Java 7 with Spring Framework 3, MongoDB serves as a cache in the cluster of workstations, and Maven 3 was chosen as the dependency manager and build system; the project is hosted on GitHub. Results. Testing on SSMU's LAN showed that the software efficiently retrieves ROIs matching the morphological substratum on pathological MRIs. Conclusion. Automating the diagnostic process in medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software presented in this article is a complete solution for fully automated ROI retrieval and segmentation on model medical images. We would like to thank Robert Vincent for his great help and advice on using the BrainWeb resource.
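
    The role MongoDB plays here, a shared cache in front of an expensive computation, can be sketched with a compute-through map: an ROI is segmented once per image and reused afterwards. An in-memory map stands in for the MongoDB collection; all names are illustrative, not the project's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a compute-through cache in front of ROI segmentation.
public class RoiCache {
    private final Map<String, double[]> store = new ConcurrentHashMap<>();

    public double[] roiFor(String mriId) {
        // Segment only on a cache miss; otherwise serve the stored result.
        return store.computeIfAbsent(mriId, RoiCache::segment);
    }

    private static double[] segment(String mriId) {
        // Placeholder for the costly ROI retrieval/segmentation of the
        // 3D image identified by mriId.
        return new double[]{0.0, 0.0, 0.0, 10.0, 10.0, 10.0}; // a bounding box
    }

    public static void main(String[] args) {
        RoiCache cache = new RoiCache();
        cache.roiFor("mri-0001"); // computed and cached
        cache.roiFor("mri-0001"); // served from the cache
        System.out.println("cached entries: " + cache.store.size()); // 1
    }
}
```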

  11. Diffusion of responsibility attenuates altruistic punishment: A functional magnetic resonance imaging effective connectivity study.

    Science.gov (United States)

    Feng, Chunliang; Deshpande, Gopikrishna; Liu, Chao; Gu, Ruolei; Luo, Yue-Jia; Krueger, Frank

    2016-02-01

    Humans altruistically punish violators of social norms to enforce cooperation and pro-social behaviors. However, such altruistic behaviors diminish when others are present, due to a diffusion of responsibility. We investigated the neural signatures underlying the modulations of diffusion of responsibility on altruistic punishment, conjoining a third-party punishment task with event-related functional magnetic resonance imaging and multivariate Granger causality mapping. In our study, participants acted as impartial third-party decision-makers and decided how to punish norm violations under two different social contexts: alone (i.e., full responsibility) or in the presence of putative other third-party decision makers (i.e., diffused responsibility). Our behavioral results demonstrated that the diffusion of responsibility served as a mediator of context-dependent punishment. In the presence of putative others, participants who felt less responsible also punished less severely in response to norm violations. Our neural results revealed that underlying this behavioral effect was a network of interconnected brain regions. For unfair relative to fair splits, the presence of others led to attenuated responses in brain regions implicated in signaling norm violations (e.g., AI) and to increased responses in brain regions implicated in calculating values of norm violations (e.g., vmPFC, precuneus) and mentalizing about others (dmPFC). The dmPFC acted as the driver of the punishment network, modulating target regions, such as AI, vmPFC, and precuneus, to adjust altruistic punishment behavior. Our results uncovered the neural basis of the influence of diffusion of responsibility on altruistic punishment and highlighted the role of the mentalizing network in this important phenomenon. Hum Brain Mapp 37:663-677, 2016. © 2015 Wiley Periodicals, Inc.

  12. Structural and functional involvement of the amygdala in posttraumatic stress disorder

    International Nuclear Information System (INIS)

    Hakamata, Yuko; Matsuoka, Yutaka

    2007-01-01

    Pathophysiological imaging studies of the brain networks involved in posttraumatic stress disorder (PTSD) have produced several models, in which the interaction within the amygdala-ventral/medial prefrontal cortex-hippocampus (Am-vmPFC-Hp) system has received wide attention. This paper reviews the structure, function and functional connectivity of the amygdala in PTSD in relation to the Am-vmPFC-Hp system. Regarding amygdala structure in PTSD, most MRI studies have shown unchanged volume, although the authors, exceptionally, have found a significant 6% volume reduction. Increased amygdala activity in PTSD has been demonstrated by positron emission tomography (PET), but contradictory findings continue to be reported, and larger-scale studies are needed for a firm conclusion. The functional connectivity of the amygdala in PTSD also remains controversial with respect to its direction within the Am-vmPFC-Hp system, and the participation of other, so far unstudied, systems is considered possible. The authors expect further progress in PTSD imaging not only from the pathophysiological perspective but also from the therapeutic (mental and medical) one, each of which can complement our knowledge of PTSD. (R.T.)

  13. Resting-state functional connectivity between amygdala and the ventromedial prefrontal cortex following fear reminder predicts fear extinction

    Science.gov (United States)

    Feng, Pan; Zheng, Yong

    2016-01-01

    Investigations of fear conditioning have elucidated the neural mechanisms of fear acquisition, consolidation and extinction, but it is not clear how neural activation following a fear reminder influences subsequent extinction. To address this question, we measured human brain activity following a fear reminder using resting-state functional magnetic resonance imaging, and investigated whether the extinction effect can be predicted by resting-state functional connectivity (RSFC). Behaviorally, we found no significant differences in fear ratings between the reminder group and the no-reminder group at the fear acquisition and extinction stages, but spontaneous recovery during the re-extinction stage appeared only in the no-reminder group. Imaging data showed that functional connectivity between the ventromedial prefrontal cortex (vmPFC) and the amygdala in the reminder group was greater than that in the no-reminder group after fear memory reactivation. More importantly, the functional connectivity between the amygdala and the vmPFC in the reminder group after fear memory reactivation was positively correlated with the extinction effect. These results suggest that RSFC between the amygdala and the vmPFC following a fear reminder can predict fear extinction, providing important insight into the neural mechanisms of fear memory after reactivation. PMID:27013104

  14. USGS Imagery Only Base Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS Imagery Only is a tile cache base map of orthoimagery in The National Map visible to the 1:18,000 scale. Orthoimagery data are typically high resolution images...

  15. Persistent schema-dependent hippocampal-neocortical connectivity during memory encoding and postencoding rest in humans.

    Science.gov (United States)

    van Kesteren, Marlieke T R; Fernández, Guillén; Norris, David G; Hermans, Erno J

    2010-04-20

    The hippocampus is thought to promote gradual incorporation of novel information into long-term memory by binding, reactivating, and strengthening distributed cortical-cortical connections. Recent studies implicate a key role in this process for hippocampally driven crosstalk with the (ventro)medial prefrontal cortex (vmPFC), which is proposed to become a central node in such representational networks over time. The existence of a relevant prior associative network, or schema, may moreover facilitate this process. Thus, hippocampal-vmPFC crosstalk may support integration of new memories, particularly in the absence of a relevant prior schema. To address this issue, we used functional magnetic resonance imaging (fMRI) and prior schema manipulation to track hippocampal-vmPFC connectivity during encoding and postencoding rest. We manipulated prior schema knowledge by exposing 30 participants to the first part of a movie that was temporally scrambled for 15 participants. The next day, participants underwent fMRI while encoding the movie's final 15 min in original order and, subsequently, while resting. Schema knowledge and item recognition performance show that prior schema was successfully and selectively manipulated. Intersubject synchronization (ISS) and interregional partial correlation analyses furthermore show that stronger prior schema was associated with more vmPFC ISS and less hippocampal-vmPFC interregional connectivity during encoding. Notably, this connectivity pattern persisted during postencoding rest. These findings suggest that additional crosstalk between hippocampus and vmPFC is required to compensate for difficulty integrating novel information during encoding and provide tentative support for the notion that functionally relevant hippocampal-neocortical crosstalk persists during off-line periods after learning.

  16. SU-E-J-62: Breath Hold for Left-Sided Breast Cancer: Visually Monitored Deep Inspiration Breath Hold Amplitude Evaluated Using Real-Time Position Management

    Energy Technology Data Exchange (ETDEWEB)

    Conroy, L; Quirk, S; Smith, WL [The University of Calgary, Calgary, AB (Canada); Tom Baker Cancer Centre, Calgary, AB (Canada); Yeung, R; Phan, T [The University of Calgary, Calgary, AB (Canada); Hudson, A [Tom Baker Cancer Centre, Calgary, AB (Canada)

    2015-06-15

    Purpose: We used Real-Time Position Management (RPM) to evaluate breath hold amplitude and variability when gating with a visually monitored deep inspiration breath hold technique (VM-DIBH) with retrospective cine image chest wall position verification. Methods: Ten patients with left-sided breast cancer were treated using VM-DIBH. Respiratory motion was passively collected once weekly using RPM with the marker block positioned at the xiphoid process. Cine images on the tangent medial field were acquired on fractions with RPM monitoring for retrospective verification of chest wall position during breath hold. The amplitude and duration of all breath holds on which treatment beams were delivered were extracted from the RPM traces. Breath hold position coverage was evaluated for symmetric RPM gating windows from ± 1 to 5 mm centered on the average breath hold amplitude of the first measured fraction as a baseline. Results: The average (range) breath hold amplitude and duration were 18 mm (3–36 mm) and 19 s (7–34 s). The average (range) of amplitude standard deviation per patient over all breath holds was 2.7 mm (1.2–5.7 mm). With the largest allowable RPM gating window (± 5 mm), 4 of 10 VM-DIBH patients would have had ≥ 10% of their breath hold positions excluded by RPM. Cine verification of the chest wall position during the medial tangent field showed that the chest wall was greater than 5 mm from the baseline in only 1 of the 4 excluded patients. Cine images verify the chest wall/breast position only; whether this variation is acceptable in terms of heart sparing is a subject of future investigation. Conclusion: VM-DIBH allows for greater breath hold amplitude variability than using a 5 mm gating window with RPM, while maintaining chest wall positioning accuracy within 5 mm for the majority of patients.

  17. Data acquisition and control system for the ECE imaging diagnostic on the EAST tokamak

    Science.gov (United States)

    Luo, C.; Lan, T.; Zhu, Y.; Xie, J.; Gao, B.; Liu, W.; Yu, C.; Milne, P. G.; Domier, C. W.; Luhmann, N. C.

    2017-06-01

    A 384-channel electron cyclotron emission imaging (ECEI) system is installed on the Experimental Advanced Superconducting Tokamak (EAST), and 7 gigabytes of data are produced for each regular discharge of a 10-second pulse. The data acquisition and control (DAC) system for the EAST ECEI diagnostic handles this large data production and embeds the ability to report data quality instantly after the discharge. The symmetric routing design of the timing-signal distribution among the 384 channels provides a low-cost solution to the synchronization of a large number of channels. The use of the load-balance bonding service greatly reduces the configuration difficulty and cost of the high-speed data-transfer tasks. Benefiting from various hardware units with dedicated functionalities, an automated and user-interactive DAC workflow is achieved, including pre-selection of the automation scheme and the observation region, 384-channel data acquisition and local caching, post-discharge evaluation of imaging data quality, remote system status monitoring, and inter-discharge imaging-system event handling. The system configuration in a specific physics experiment is further optimized through the associated operating software, which is enhanced by input of the tokamak operation status and the region of interest (ROI) from other diagnostics. The DAC system is based on a modularized design and is scalable to the long-pulse discharges of the EAST tokamak.
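
    A back-of-envelope check of the data rates implied by the record's figures; the 16-bit sample width is an assumption, as the record does not state it.

    ```python
    # Data rates implied by ~7 GB per 384-channel, 10-second discharge.
    total_bytes = 7e9     # ~7 GB per discharge (as reported)
    pulse_s = 10          # 10-second pulse
    channels = 384

    aggregate = total_bytes / pulse_s       # ~0.7 GB/s into the local cache
    per_chan = aggregate / channels         # ~1.8 MB/s per channel
    rate_msps = per_chan / 2 / 1e6          # ~0.9 MSa/s, assuming 2-byte samples
    print(f"{aggregate / 1e6:.0f} MB/s total, "
          f"{per_chan / 1e6:.1f} MB/s/channel, "
          f"~{rate_msps:.1f} MSa/s/channel")
    ```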

  18. Data acquisition and control system for the ECE imaging diagnostic on the EAST tokamak

    International Nuclear Information System (INIS)

    Luo, C.; Lan, T.; Xie, J.; Gao, B.; Liu, W.; Yu, C.; Zhu, Y.; Domier, C.W.; Luhmann, N.C.; Milne, P.G.

    2017-01-01

    A 384-channel electron cyclotron emission imaging (ECEI) system is installed on the Experimental Advanced Superconducting Tokamak (EAST), and 7 gigabytes of data are produced for each regular discharge of a 10-second pulse. The data acquisition and control (DAC) system for the EAST ECEI diagnostic handles this large data production and embeds the ability to report data quality instantly after the discharge. The symmetric routing design of the timing-signal distribution among the 384 channels provides a low-cost solution to the synchronization of a large number of channels. The use of the load-balance bonding service greatly reduces the configuration difficulty and cost of the high-speed data-transfer tasks. Benefiting from various hardware units with dedicated functionalities, an automated and user-interactive DAC workflow is achieved, including pre-selection of the automation scheme and the observation region, 384-channel data acquisition and local caching, post-discharge evaluation of imaging data quality, remote system status monitoring, and inter-discharge imaging-system event handling. The system configuration in a specific physics experiment is further optimized through the associated operating software, which is enhanced by input of the tokamak operation status and the region of interest (ROI) from other diagnostics. The DAC system is based on a modularized design and is scalable to the long-pulse discharges of the EAST tokamak.

  19. Doppler ultrasonographic measurement of short-term effects of valsalva maneuver on retrobulbar blood flow.

    Science.gov (United States)

    Kimyon, Sabit; Mete, Ahmet; Mete, Alper; Mete, Duçem

    2017-11-12

    To investigate the effects of the Valsalva maneuver (VM) on retrobulbar blood flow parameters in healthy subjects. Participants without any ophthalmologic or systemic pathology were examined in the supine position with color and pulsed Doppler imaging for blood flow measurement, via a paraocular approach, in the ophthalmic artery (OA), central retinal artery (CRA), central retinal vein (CRV), nasal posterior ciliary artery (NPCA), and temporal posterior ciliary artery (TPCA), 10 seconds after a 35- to 40-mm Hg expiratory pressure was reached. Peak systolic velocity (PSV), end-diastolic velocity (EDV), pulsatility index (PI), and resistivity index (RI) values were recorded for each artery. PSV and EDV values were recorded for the CRV. There were significant differences between resting and VM values for the PSV and EDV of the CRA, the RI of the NPCA, and the PI, RI, and EDV of the TPCA. Resting CRA-EDV, CRV-PSV, and CRV-EDV were positively correlated with age, whereas resting OA-PSV and CRA-PI, and OA-PSV, CRA-PSV, and CRA-EDV during VM, were negatively correlated with age. VM induces a short-term increase in CRA blood flow and a decrease in NPCA and TPCA RI. Additional studies with longer Doppler recording during VM, in a larger population sample, are required to allow definitive interpretation. © 2017 Wiley Periodicals, Inc. J Clin Ultrasound 45:551-555, 2017.
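
    For reference, the pulsatility and resistivity indices reported above follow the standard definitions (not restated in the record), where TAMV is the time-averaged mean velocity over one cardiac cycle:

    ```latex
    % Standard Doppler index definitions:
    % PSV = peak systolic velocity, EDV = end-diastolic velocity.
    \begin{align}
      \mathrm{RI} &= \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{PSV}}, &
      \mathrm{PI} &= \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{TAMV}}
    \end{align}
    ```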

  20. Web proxy auto discovery for the WLCG

    CERN Document Server

    Dykstra, D; Blumenfeld, B; De Salvo, A; Dewhurst, A; Verguilov, V

    2017-01-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids regis...
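
    The DNS half of WPAD amounts to probing wpad host names down the local domain hierarchy for a PAC script. A minimal sketch of that convention, assuming plain HTTP and omitting the DHCP option 252 branch and the WLCG-specific registration logic; the helper name and parameters are illustrative.

    ```python
    import urllib.request
    from urllib.error import URLError

    def discover_wpad(domain):
        """Probe wpad.<suffix> for each suffix of `domain`, returning the
        first URL that answers along with the PAC script (JavaScript) body."""
        labels = domain.split(".")
        for i in range(len(labels) - 1):      # wpad.a.b.c, wpad.b.c, ...
            url = f"http://wpad.{'.'.join(labels[i:])}/wpad.dat"
            try:
                with urllib.request.urlopen(url, timeout=3) as resp:
                    return url, resp.read()
            except (URLError, OSError):
                continue
        return None, None
    ```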

  1. Cloud services for the Fermilab scientific stakeholders

    International Nuclear Information System (INIS)

    Timm, S; Garzoglio, G; Mhashilkar, P

    2015-01-01

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality. (paper)

  2. Evaluation of low-temperature geothermal potential in Cache Valley, Utah. Report of investigation No. 174

    Energy Technology Data Exchange (ETDEWEB)

    de Vries, J.L.

    1982-11-01

    Field work consisted of locating 90 wells and springs throughout the study area, collecting water samples for later laboratory analyses, and field measurement of pH, temperature, bicarbonate alkalinity, and electrical conductivity. Na⁺, K⁺, Ca²⁺, Mg²⁺, SiO₂, Fe, SO₄²⁻, Cl⁻, F⁻, and total dissolved solids were determined in the laboratory. Temperature profiles were measured in 12 additional, unused wells. Thermal gradients calculated from the profiles were approximately the same as the average for the Basin and Range province, about 35°C/km. One well produced a gradient of 297°C/km, most probably as a result of a near-surface occurrence of warm water. Possible warm-water reservoir temperatures were calculated using both the silica and the Na-K-Ca geothermometers, with the results averaging about 50 to 100°C. If mixing calculations were applied, taking into account the temperatures and silica contents of both warm springs or wells and the cold groundwater, reservoir temperatures up to about 200°C were indicated. Considering measured surface water temperatures, calculated reservoir temperatures, thermal gradients, and the local geology, most of the Cache Valley, Utah area is unsuited for geothermal development. However, the areas of North Logan, Benson, and Trenton were found to have anomalously warm groundwater in comparison to the background temperature of 13.0°C for the study area. The warm water has potential for isolated energy development but is not warm enough for major commercial development.
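
    The report does not state which silica calibration was applied; as an illustration only, Fournier's quartz (no steam loss) geothermometer is one common choice and reproduces temperatures in the reported range.

    ```python
    import math

    def quartz_geothermometer_c(silica_mg_per_kg):
        """Fournier (1977) quartz, no-steam-loss calibration; an assumed
        example calibration, not necessarily the one used in the report."""
        return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15

    # e.g. ~100 mg/kg dissolved SiO2 implies a reservoir near 137 degC,
    # consistent with the 50-200 degC span discussed above.
    print(round(quartz_geothermometer_c(100.0), 1))
    ```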

  3. Graded Mirror Self-Recognition by Clark's Nutcrackers.

    Science.gov (United States)

    Clary, Dawson; Kelly, Debbie M

    2016-11-04

    The traditional 'mark test' has shown some large-brained species are capable of mirror self-recognition. During this test a mark is inconspicuously placed on an animal's body where it can only be seen with the aid of a mirror. If the animal increases the number of actions directed to the mark region when presented with a mirror, the animal is presumed to have recognized the mirror image as its reflection. However, the pass/fail nature of the mark test presupposes self-recognition exists in entirety or not at all. We developed a novel mirror-recognition task, to supplement the mark test, which revealed gradation in the self-recognition of Clark's nutcrackers, a large-brained corvid. To do so, nutcrackers cached food alone, observed by another nutcracker, or with a regular or blurry mirror. The nutcrackers suppressed caching with a regular mirror, a behavioural response to prevent cache theft by conspecifics, but did not suppress caching with a blurry mirror. Likewise, during the mark test, most nutcrackers made more self-directed actions to the mark with a blurry mirror than a regular mirror. Both results suggest self-recognition was more readily achieved with the blurry mirror and that self-recognition may be more broadly present among animals than currently thought.

  4. Science applications of a multispectral microscopic imager for the astrobiological exploration of Mars.

    Science.gov (United States)

    Núñez, Jorge I; Farmer, Jack D; Sellar, R Glenn; Swayze, Gregg A; Blaney, Diana L

    2014-02-01

    Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Mars-Microscopic imager-Multispectral imaging-Spectroscopy-Habitability-Arm instrument.

  5. ARC-VM: An architecture real options complexity-based valuation methodology for military systems-of-systems acquisitions

    Science.gov (United States)

    Domercant, Jean Charles

    The combination of today's national security environment and mandated acquisition policies makes it necessary for military systems to interoperate with each other to greater degrees. This growing interdependency results in complex Systems-of-Systems (SoS) that only continue to grow in complexity to meet evolving capability needs. Thus, timely and affordable acquisition becomes more difficult, especially in the face of mounting budgetary pressures. To counter this, architecting principles must be applied to SoS design. The research objective is to develop an Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived complexity measure provides architects with an objective measure of complexity that focuses on relevant complex system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, as well as programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and usefulness of the resulting architecture complexity & Real Options based valuation methodology. Different

  6. Functional compensation in the ventromedial prefrontal cortex improves memory-dependent decisions in older adults.

    Science.gov (United States)

    Lighthall, Nichole R; Huettel, Scott A; Cabeza, Roberto

    2014-11-19

    Everyday consumer choices frequently involve memory, as when we retrieve information about consumer products when making purchasing decisions. In this context, poor memory may affect decision quality, particularly in individuals with memory decline, such as older adults. However, age differences in choice behavior may be reduced if older adults can recruit additional neural resources that support task performance. Although such functional compensation is well documented in other cognitive domains, it is presently unclear whether it can support memory-guided decision making and, if so, which brain regions play a role in compensation. The current study engaged younger and older humans in a memory-dependent choice task in which pairs of consumer products from a popular online-shopping site were evaluated with different delays between the first and second product. Using functional imaging (fMRI), we found that the ventromedial prefrontal cortex (vmPFC) supports compensation as defined by three a priori criteria: (1) increased vmPFC activation was observed in older versus younger adults; (2) age-related increases in vmPFC activity were associated with increased retrieval demands; and (3) increased vmPFC activity was positively associated with performance in older adults-evidence of successful compensation. Extending these results, we observed evidence for compensation in connectivity between vmPFC and the dorsolateral PFC during memory-dependent choice. In contrast, we found no evidence for age differences in value-related processing or age-related compensation for choices without delayed retrieval. Together, these results converge on the conclusion that age-related decline in memory-dependent choice performance can be minimized via functional compensation in vmPFC. Copyright © 2014 the authors 0270-6474/14/3415648-10$15.00/0.

  7. An experimental study on the readability of the digital images in the furcal bone defects

    International Nuclear Information System (INIS)

    Oh, Bong Hyeon; Hwang, Eui Hwan; Lee, Sang Rae

    1995-01-01

    The aim of this study was to evaluate and compare observer performance between conventional radiographs and their digitized images for the detection of bone loss in the bifurcation of the mandibular first molar. One dried human mandible with minimal periodontal bone loss around the first molar was selected, and 27 serially enlarged step defects were prepared in the bifurcation area. The mandible was radiographed with exposure times of 0.12, 0.20, 0.25, 0.32, 0.40, and 0.64 seconds after each successive step in the preparation, and all radiographs were digitized with an IBM-PC/32 bit-Dx compatible computer, a video camera (VM-S8200, Hitachi Co., Japan), and a color monitor (Multisync 3D, NEC, Japan). A Sylvia Image Capture Board was used as the ADC (analog-to-digital converter). The following results were obtained: 1. In the conventional radiographs, the mean readability score was higher at an exposure time of 0.32 seconds. Also, as the size of the artificial lesion increased, the readability of the radiographs improved (p<0.05). 2. In the digital images, the mean readability score was higher at an exposure time of 0.40 seconds. Also, as the size of the artificial lesion increased, the readability of the digital images improved (p<0.05). 3. At the same exposure time, the mean readability scores were mostly higher in the digitized images. As the exposure time increased, the digital images were superior to the radiographs in readability. 4. As the size of the lesion was varied, the digital images were superior to the radiographs in detecting small lesions. 5. The coefficient of variation of the mean score showed no significant difference between digital images and radiographs.

  8. A batch system for HEP applications on a distributed IaaS cloud

    International Nuclear Information System (INIS)

    Gable, I; Agarwal, A; Anderson, M; Armstrong, P; Fransham, K; Harris, D; Leavett-Brown, C; Paterson, M; Penfold-Brown, D; Sobie, R J; Vliet, M; Charbonneau, A; Impey, R; Podaima, W

    2011-01-01

    The emergence of academic and commercial Infrastructure-as-a-Service (IaaS) clouds is opening access to new resources for the HEP community. In this paper we describe a system we have developed for creating a single dynamic batch environment spanning multiple IaaS clouds of different types (e.g. Nimbus, OpenNebula, Amazon EC2). A HEP user interacting with the system submits a job description file with a pointer to their VM image. VM images can either be created by users directly or provided to the users. We have created a new software component called Cloud Scheduler that detects waiting jobs and boots the required user VM on any one of the available cloud resources. As the user VMs appear, they are attached to the job queues of a central Condor job scheduler, which then submits the jobs to the VMs. The number of VMs available to the user is expanded and contracted dynamically depending on the number of user jobs. We present the motivation and design of the system, with particular emphasis on Cloud Scheduler. We show that the system provides the ability to exploit academic and commercial cloud sites in a transparent fashion. (paper)
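
    The Cloud Scheduler behavior described above, expanding and contracting the VM pool against the Condor queue, can be sketched roughly as below. All object and method names (idle_jobs, has_capacity, boot_vm, attach_worker, ...) are hypothetical stand-ins, not the actual Cloud Scheduler API.

    ```python
    import time

    def schedule_loop(clouds, condor, poll_s=60, idle_shutdown_s=600):
        """Hypothetical control loop: expand the VM pool for waiting jobs,
        contract it when workers go idle."""
        while True:
            # Expand: boot one user VM per waiting job on the first cloud
            # (Nimbus, OpenNebula, EC2, ...) with free capacity.
            for job in condor.idle_jobs():
                cloud = next((c for c in clouds if c.has_capacity()), None)
                if cloud is None:
                    break                                # all full; retry later
                vm = cloud.boot_vm(image=job.vm_image)   # user-supplied image
                condor.attach_worker(vm)                 # VM joins the job queue
            # Contract: retire VMs that have been idle too long.
            for vm in condor.idle_workers(min_idle_seconds=idle_shutdown_s):
                vm.cloud.shutdown_vm(vm)
            time.sleep(poll_s)
    ```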

  9. Efficient temporal access of satellite image data

    CSIR Research Space (South Africa)

    Bachoo, A

    2008-11-01

    Full Text Available in the spatial representation are now consecutive rows in the serialised representation, which implies that the contiguity of rows of pixels in the spatial representation is preserved. This allows operating system level read-ahead and caching... structure has several disadvantages: fixed size operating system disk blocks result in a significant amount of wasted disk space (slack space), a file has to be opened (and closed again) for every location read, and the three-level directory structure...
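
    The offset arithmetic behind such a time-contiguous serialisation can be illustrated as follows; the layout and function here are assumptions for illustration, not the CSIR implementation.

    ```python
    def temporal_offset(row, col, t, n_cols, n_times, bytes_per_px):
        """Byte offset of pixel (row, col) at time step t when the file is
        laid out as [row][col][time], so one pixel's full time series is
        stored contiguously."""
        return ((row * n_cols + col) * n_times + t) * bytes_per_px

    # Reading a pixel's whole time series touches n_times consecutive samples
    # starting at temporal_offset(row, col, 0, ...), so operating-system
    # read-ahead and page caching serve the profile from a single seek.
    ```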

  10. Determination of the size of an imaging data storage device at a full PACS hospital

    International Nuclear Information System (INIS)

    Cha, S. J.; Kim, Y. H.; Hur, G.

    2000-01-01

    To determine the appropriate size of short- and long-term storage devices, bearing in mind the design factors involved and the installation costs. The number of radiologic studies quoted is the number undertaken during a one-year period at a university hospital with 650 beds, and reflects the actual number of each type of examination performed at a full PACS hospital. The average daily number of outpatients was 1586, while that of inpatients was 639.5. The numbers of radiologic studies performed were as follows: 378 among 189 outpatients, and 165 among 41 inpatients. The average daily number of examinations was 543, comprising 460 CR, 30 ultrasonograms, 25 CT, 8 MRI, and 20 others. The total amount of digital images was 17.4 GB per day, while the amount of short-term data with lossless compression was 6.7 GB per day. For 14 days of short-term storage, the amount of image data was 93.7 GB in the disk array. The amount of data stored mid-term (one year), with lossy compression, was 369.1 GB. The amounts of data stored as long-term cache and educational images were 38.7 GB and 30 GB, respectively. The total size of the disk array was 531.5 GB. A device suitable for the long-term storage of images, for at least five years, requires a capacity of 1845.5 GB. At a full PACS hospital with 600 beds, the minimum disk space required for the short- and mid-term storage of image data in a disk array is 540 GB. The capacity required for long-term storage (at least five years) is 1900 GB. (author)
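
    The sizing figures above can be reproduced with straightforward arithmetic (values in GB; the small differences from the reported totals come from rounding in the quoted daily rate):

    ```python
    # Storage-sizing arithmetic from the record's reported values (GB).
    short_term_daily = 6.7     # lossless-compressed image data per day
    short_term_days = 14
    mid_term_yearly = 369.1    # lossy-compressed, one year
    long_term_cache = 38.7
    educational = 30.0

    short_term = short_term_daily * short_term_days   # ~93.8 (reported: 93.7)
    disk_array = short_term + mid_term_yearly + long_term_cache + educational
    five_years = mid_term_yearly * 5                  # 1845.5 -> ~1900 specified

    print(f"disk array ~{disk_array:.1f} GB (reported 531.5), "
          f"5-year archive {five_years:.1f} GB")
    ```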

  11. Vestibular migraine in multicenter neurology clinics according to the appendix criteria in the third beta edition of the International Classification of Headache Disorders.

    Science.gov (United States)

    Cho, Soo-Jin; Kim, Byung-Kun; Kim, Byung-Su; Kim, Jae-Moon; Kim, Soo-Kyoung; Moon, Heui-Soo; Song, Tae-Jin; Cha, Myoung-Jin; Park, Kwang-Yeol; Sohn, Jong-Hee

    2016-04-01

    Vestibular migraine (VM), the common term for recurrent vestibular symptoms with migraine features, has been recognized in the appendix criteria of the third beta edition of the International Classification of Headache Disorders (ICHD-3β). We applied the criteria for VM in a prospective, multicenter headache registry study. Nine neurologists enrolled consecutive patients visiting outpatient clinics for headache. The presenting headache disorder and additional VM diagnoses were classified according to the ICHD-3β. The rates of patients diagnosed with VM and probable VM using consensus criteria were assessed. A total of 1414 patients were enrolled. Of 631 migraineurs, 65 were classified with VM (10.3%) and 16 with probable VM (2.5%). Accompanying migraine subtypes in VM were migraine without aura (66.2%), chronic migraine (29.2%), and migraine with aura (4.6%). Probable migraine (75%) was common in those with probable VM. The most common vestibular symptom was head motion-induced dizziness with nausea in VM and spontaneous vertigo in probable VM. The clinical characteristics of VM did not differ from those of migraine without VM. We diagnosed VM in 10.3% of first-visit migraineurs in neurology clinics using the ICHD-3β. Applying the diagnosis of probable VM can increase the identification of VM. © International Headache Society 2015.

  12. Carbon stored in forest plantations of Pinus caribaea, Cupressus lusitanica and Eucalyptus deglupta in Cachí Hydroelectric Project

    Directory of Open Access Journals (Sweden)

    Marylin Rojas

    2014-06-01

    Full Text Available Forest plantations are considered major carbon sinks that may reduce the impact of climate change. For many species, however, information for establishing metrics of biomass and carbon accumulation is lacking, principally because of the difficulty and cost of quantification through direct measurement and destructive sampling. In this research, carbon stocks were evaluated in forest plantations near the dam of the Cachí hydroelectric project, which belongs to the Instituto Costarricense de Electricidad. Twenty-five sample units were evaluated across plantations containing three different species: 30 Pinus caribaea trees, 14 Cupressus lusitanica, and 15 Eucalyptus deglupta were extracted. The biomass was quantified by the destructive method: first, every component of the tree was weighed separately; then samples were taken to determine the dry matter and the carbon fraction. In the laboratory, 110 biomass samples from the three species were analyzed, covering all components (leaves, branches, shaft, and root). The carbon fraction varied between 47.5% and 48.0% for Pinus caribaea, between 32.6% and 52.7% for Cupressus lusitanica, and between 36.4% and 50.3% for Eucalyptus deglupta. The stored carbon was 230, 123, and 69 Mg ha-1 in plantations of P. caribaea, C. lusitanica, and E. deglupta, respectively. Approximately 75% of the stored carbon was found in the shaft.

  13. LHCb experience with running jobs in virtual machines

    CERN Document Server

    McNab, A; Luzzi, C

    2015-01-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites mana...

  14. Towards a Time-predictable Dual-Issue Microprocessor: The Patmos Approach

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Schleuniger, Pascal; Puffitsch, Wolfgang

    2011-01-01

    for low WCET bounds rather than high average-case performance. Patmos is a dual-issue, statically scheduled RISC processor. The instruction cache is organized as a method cache and the data cache is organized as a split cache in order to simplify the cache WCET analysis. To fill the dual-issue pipeline

  15. A novel pre-clinical in vivo mouse model for malignant brain tumor growth and invasion.

    Science.gov (United States)

    Shelton, Laura M; Mukherjee, Purna; Huysentruyt, Leanne C; Urits, Ivan; Rosenberg, Joshua A; Seyfried, Thomas N

    2010-09-01

    Glioblastoma multiforme (GBM) is a rapidly progressive disease with high morbidity and mortality, and is the most common form of primary brain cancer in adults. Lack of appropriate in vivo models has been a major roadblock to developing effective therapies for GBM. A new highly invasive in vivo GBM model is described that was derived from a spontaneous brain tumor (VM-M3) in the VM mouse strain. Highly invasive tumor cells could be identified histologically on the hemisphere contralateral to the hemisphere implanted with tumor cells or tissue. Tumor cells were highly expressive of the chemokine receptor CXCR4 and the proliferation marker Ki-67, and could be identified invading through the pia mater, the vascular system, the ventricular system, around neurons, and over white matter tracts including the corpus callosum. In addition, the brain tumor cells were labeled with the firefly luciferase gene, allowing for non-invasive detection and quantitation through bioluminescent imaging. The VM-M3 tumor has a short incubation time, with mortality occurring in 100% of the animals within approximately 15 days. The VM-M3 brain tumor model therefore can be used in a pre-clinical setting for the rapid evaluation of novel anti-invasive therapies.

  16. Variation in orbitofrontal cortex volume: relation to sex, emotion regulation and affect.

    Science.gov (United States)

    Welborn, B Locke; Papademetris, Xenophon; Reis, Deidre L; Rajeevan, Nallakkandi; Bloise, Suzanne M; Gray, Jeremy R

    2009-12-01

    Sex differences in brain structure have been examined extensively but are not completely understood, especially in relation to possible functional correlates. Our two aims in this study were to investigate sex differences in brain structure, and to investigate a possible relation between orbitofrontal cortex subregions and affective individual differences. We used tensor-based morphometry to estimate local brain volume from MPRAGE images in 117 healthy right-handed adults (58 female), age 18-40 years. We entered estimates of local brain volume as the dependent variable in a GLM, controlling for age, intelligence and whole-brain volume. Men had larger left planum temporale. Women had larger ventromedial prefrontal cortex (vmPFC), right lateral orbitofrontal (rlOFC), cerebellum, and bilateral basal ganglia and nearby white matter. vmPFC but not rlOFC volume covaried with self-reported emotion regulation strategies (reappraisal, suppression), expressivity of positive emotions (but not of negative), strength of emotional impulses, and cognitive but not somatic anxiety. vmPFC volume statistically mediated sex differences in emotion suppression. The results confirm prior reports of sex differences in orbitofrontal cortex structure, and are the first to show that normal variation in vmPFC volume is systematically related to emotion regulation and affective individual differences.

  17. Risk factors associated with cognitions for late-onset depression based on anterior and posterior default mode sub-networks.

    Science.gov (United States)

    Liu, Rui; Yue, Yingying; Hou, Zhenghua; Yuan, Yonggui; Wang, Qiao

    2018-08-01

    Abnormal functional connectivity (FC) in the default mode network (DMN) plays an important role in late-onset depression (LOD) patients. In this study, the risk predictors of LOD based on anterior and posterior DMN are explored. A total of 27 LOD patients and 40 healthy controls (HC) underwent resting-state functional magnetic resonance imaging and cognitive assessments. Firstly, FCs within DMN sub-networks were determined by placing seeds in the ventral medial prefrontal cortex (vmPFC) and posterior cingulate cortex (PCC). Secondly, multivariable logistic regression was used to identify risk factors for LOD patients. Finally, correlation analysis was performed to investigate the relationship between risk factors and the cognitive value. Multivariable logistic regression showed that the FCs between the vmPFC and right middle temporal gyrus (MTG) (vmPFC-MTG_R), FCs between the vmPFC and left precuneus (PCu), and FCs between the PCC and left PCu (PCC-PCu_L) were the risk factors for LOD. Furthermore, FCs of the vmPFC-MTG_R and PCC-PCu_L correlated with processing speed (R = 0.35, P = 0.002; R = 0.32, P = 0.009), and FCs of the vmPFC-MTG_R correlated with semantic memory (R = 0.41, P = 0.001). The study was a cross-sectional study. The results may be potentially biased because of a small sample. In this study, we confirmed that LOD patients mainly present cognitive deficits in processing speed and semantic memory. Moreover, our findings further suggested that FCs within DMN sub-networks associated with cognitions were risk factors, which may be used for the prediction of LOD. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Neuro-Epigenetic Indications of Acute Stress Response in Humans: The Case of MicroRNA-29c.

    Directory of Open Access Journals (Sweden)

    Sharon Vaisvaser

    Full Text Available Stress research has progressively become more integrative in nature, seeking to unfold crucial relations between the different phenotypic levels of stress manifestations. This study sought to unravel stress-induced variations in the expression of human microRNAs sampled in peripheral blood mononuclear cells and further assess their relationship with neuronal and psychological indices. We obtained blood samples from 49 healthy male participants before and three hours after performing a social stress task, while undergoing functional magnetic resonance imaging (fMRI). A seed-based functional connectivity (FC) analysis was conducted for the ventro-medial prefrontal cortex (vmPFC), a key area of stress regulation. Out of hundreds of microRNAs, a specific increase was identified in microRNA-29c (miR-29c) expression, corresponding with both the experience of sustained stress via self-reports and alterations in vmPFC functional connectivity. Explicitly, miR-29c expression levels corresponded with both increased connectivity of the vmPFC with the anterior insula (aIns) and decreased connectivity of the vmPFC with the left dorso-lateral prefrontal cortex (dlPFC). Our findings further revealed that miR-29c mediates an indirect path linking enhanced vmPFC-aIns connectivity during stress with subsequent experiences of sustained stress. The correlative patterns of miR-29c expression and vmPFC FC, along with the mediating effects on subjective stress sustainment and the presumed localization of miR-29c in astrocytes, together point to an intriguing assumption: miR-29c may serve as a biomarker in the blood for stress-induced functional neural alterations reflecting regulatory processes. Such a multi-level model may hold the key for future personalized intervention in stress psychopathology.

  19. WE-A-BRF-01: Dual-Energy CT Imaging in Diagnostic Imaging and Radiation Therapy

    International Nuclear Information System (INIS)

    Molloi, S; Li, B; Yin, F; Chen, H

    2014-01-01

    classification based on calcium scores shows excellent agreement with classification on the basis of conventional coronary artery calcium scoring. These studies demonstrate that dual-energy cardiovascular CT can potentially be a noninvasive and sensitive modality in high-risk patients. On-board kV/MV Imaging. To enhance soft tissue contrast and reduce metal artifacts, we have developed a dual-energy CBCT technique and a novel on-board kV/MV imaging technique based on hardware available on modern linear accelerators. We have also evaluated the feasibility of these two techniques in various phantom studies. Optimal techniques (energy, beam filtration, number of overlapping projections, etc.) have been investigated with unique calibration procedures, leading to successful decomposition of the imaged material into an acrylic-aluminum basis material pair. This enables the synthesis of virtual monochromatic (VM) CBCT images that exhibit much less beam hardening, significantly reduced metal artifacts, and/or higher soft-tissue CNR compared with single-energy CBCT. Adaptive Radiation Therapy. DECT can also contribute to dose-guided (adaptive) radiation therapy. Dual-energy imaging using 80-kV and 140-kV combinations could potentially increase image quality by reducing bone and high-density-material artifacts, and could increase soft-tissue contrast with a light contrast agent. The resulting higher-contrast, higher-quality images benefit deformable image registration and segmentation algorithms, improving their accuracy and thus making the recontouring step of adaptive therapy less time-consuming. Real-time re-planning prior to each treatment fraction could become more realistic with this improvement, especially in hypofractionated SBRT cases. Learning Objectives: Learn recent developments of dual-energy imaging in diagnosis and radiation therapy; Understand the unique clinical problem and required quantification accuracy in each application
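
    The acrylic-aluminum decomposition and VM synthesis mentioned above follow the standard two-material basis model (a textbook formulation, not taken from the abstract):

    ```latex
    % Two-material basis decomposition: the two measured energy channels
    % give two equations per voxel, from which the coefficients (a, b) are
    % solved; evaluating at a chosen energy E_0 yields the VM image.
    \begin{align}
      \mu(E) &\approx a\,\mu_{\mathrm{acrylic}}(E) + b\,\mu_{\mathrm{Al}}(E) \\
      \mu_{\mathrm{VM}}(E_0) &= a\,\mu_{\mathrm{acrylic}}(E_0) + b\,\mu_{\mathrm{Al}}(E_0)
    \end{align}
    ```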

  20. Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine.

    Science.gov (United States)

    Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo

    2015-09-01

    There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems require clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software on different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and have built a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named the unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, ensuring browser compatibility and maintaining server-client response efficiency; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, keeping co-browsing and discussion in real time between clients; (4) web user interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript into a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs and supports smooth discussion and co-browsing on any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely and may be used in the fields of regional health, telemedicine, and remote education at low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
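
    Since the abstract does not specify the URPS level ratios, the sketch below shows only the conventional power-of-two tile-pyramid addressing that such a structure refines; the function name and parameters are illustrative.

    ```python
    def visible_tiles(level, vx, vy, vw, vh, tile=256):
        """Tile (col, row) indices covering a viewport (vx, vy, vw, vh) given
        in full-resolution pixel coordinates, at pyramid level `level`
        (level 0 = full resolution; each level halves both dimensions)."""
        span = tile * (2 ** level)        # full-res pixels covered per tile
        c0, r0 = int(vx // span), int(vy // span)
        c1 = int((vx + vw - 1) // span)
        r1 = int((vy + vh - 1) // span)
        # Only these tiles need to be fetched (and cached) for the view.
        return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
    ```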

  1. Feature Sets for Screenshot Detection

    Science.gov (United States)

    2013-06-01

    containing screenshots of computer programs as part of their instructions. Additionally, web browsers such as Firefox capture and cache screenshots of user...provide indications as to the class of image. At an even higher level, there is often distinguishing metadata associated with digital images. Mac’s OS X...mac os x malware discovered, takes screenshots and uploads them to unknown servers without user’s consent. [Online]. Available: http

  2. Alterations in white matter microstructure as vulnerability factors and acquired signs of traffic accident-induced PTSD.

    Directory of Open Access Journals (Sweden)

    Yawen Sun

    Full Text Available It remains unclear whether the white matter (WM) changes found in post-traumatic stress disorder (PTSD) patients are stress-induced or precursors for vulnerability. The current study aimed to identify susceptibility factors relating to the development of PTSD and to examine the ability of these factors to predict the course of longitudinal PTSD. Sixty-two victims who had experienced traffic accidents underwent diffusion tensor imaging using a 3.0T MRI system within 2 days after their accidents. Of these, 21 were diagnosed with PTSD at 1 or 6 months using the Clinician-Administered PTSD Scale (CAPS). Then, 11 trauma-exposed victims with PTSD underwent a second MRI scan. Compared with the victims without PTSD, the victims with PTSD showed decreased fractional anisotropy (FA) in the WM of the anterior cingulate cortex, ventromedial prefrontal cortex (vmPFC), temporal lobes, and midbrain, and increased mean diffusivity (MD) in the vmPFC within 2 days after the traumatic event. Importantly, decreased FA of the vmPFC in the acute phase predicted greater future CAPS scores. In addition, we found decreased FA in the insula in the follow-up scan in the victims with PTSD, which correlated with the decreased FA of the vmPFC in their baseline scan. These results suggest that the WM might have changed within 2 days after the traumatic event in the individuals who would later develop PTSD. Furthermore, decreased FA of the vmPFC could be a possible vulnerability marker predicting future development of PTSD and may provide an outcome prediction of the acquired signs.

  3. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  4. Science Applications of a Multispectral Microscopic Imager for the Astrobiological Exploration of Mars

    Science.gov (United States)

    Farmer, Jack D.; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.

    2014-01-01

    Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Key Words: Mars—Microscopic imager—Multispectral imaging

  5. Science applications of a multispectral microscopic imager for the astrobiological exploration of Mars

    Science.gov (United States)

    Nunez, Jorge; Farmer, Jack; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.

    2014-01-01

    Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars.

  6. Elements of episodic-like memory in animals.

    Science.gov (United States)

    Clayton, N S; Griffiths, D P; Emery, N J; Dickinson, A

    2001-09-29

    A number of psychologists have suggested that episodic memory is a uniquely human phenomenon and, until recently, there was little evidence that animals could recall a unique past experience and respond appropriately. Experiments on food-caching memory in scrub jays question this assumption. On the basis of a single caching episode, scrub jays can remember when and where they cached a variety of foods that differ in the rate at which they degrade, in a way that is inexplicable by relative familiarity. They can update their memory of the contents of a cache depending on whether or not they have emptied the cache site, and can also remember where another bird has hidden caches, suggesting that they encode rich representations of the caching event. They make temporal generalizations about when perishable items should degrade and also remember the relative time since caching when the same food is cached in distinct sites at different times. These results show that jays form integrated memories for the location, content and time of caching. This memory capability fulfils Tulving's behavioural criteria for episodic memory and is thus termed 'episodic-like'. We suggest that several features of episodic memory may not be unique to humans.

  7. Medial prefrontal cortex involvement in the expression of extinction and ABA renewal of instrumental behavior for a food reinforcer.

    Science.gov (United States)

    Eddy, Meghan C; Todd, Travis P; Bouton, Mark E; Green, John T

    2016-02-01

    Instrumental renewal, the return of extinguished instrumental responding after removal from the extinction context, is an important model of behavioral relapse that is poorly understood at the neural level. In two experiments, we examined the role of the dorsomedial prefrontal cortex (dmPFC) and the ventromedial prefrontal cortex (vmPFC) in extinction and ABA renewal of instrumental responding for a sucrose reinforcer. Previous work, exclusively using drug reinforcers, has suggested that the roles of the dmPFC and vmPFC in expression of extinction and ABA renewal may depend at least in part on the type of drug reinforcer used. The current experiments used a food reinforcer because the behavioral mechanisms underlying the extinction and renewal of instrumental responding are especially well worked out in this paradigm. After instrumental conditioning in context A and extinction in context B, we inactivated dmPFC, vmPFC, or a more ventral medial prefrontal cortex region by infusing baclofen/muscimol (B/M) just prior to testing in both contexts. In rats with inactivated dmPFC, ABA renewal was still present (i.e., responding increased when returned to context A); however responding was lower (less renewal) than controls. Inactivation of vmPFC increased responding in context B (the extinction context) and decreased responding in context A, indicating no renewal in these animals. There was no effect of B/M infusion on rats with cannula placements ventral to the vmPFC. Fluorophore-conjugated muscimol was infused in a subset of rats following test to visualize infusion spread. Imaging suggested that the infusion spread was minimal and mainly constrained to the targeted area. Together, these experiments suggest that there is a region of medial prefrontal cortex encompassing both dmPFC and vmPFC that is important for ABA renewal of extinguished instrumental responding for a food reinforcer. In addition, vmPFC, but not dmPFC, is important for expression of extinction of

  8. Putting race in context: social class modulates processing of race in the ventromedial prefrontal cortex and amygdala.

    Science.gov (United States)

    Firat, Rengin B; Hitlin, Steven; Magnotta, Vincent; Tranel, Daniel

    2017-08-01

    A growing body of literature demonstrates that racial group membership can influence neural responses, e.g. when individuals perceive or interact with persons of another race. However, little attention has been paid to social class, a factor that interacts with racial inequalities in American society. We extend previous literature on race-related neural activity by focusing on how the human brain responds to racial out-groups cast in positively valued social class positions vs less valued ones. We predicted that the ventromedial prefrontal cortex (vmPFC) and the amygdala would have functionally dissociable roles, with the vmPFC playing a more significant role within socially valued in-groups (i.e. the middle class) and the amygdala having a more crucial role for socially ambivalent and threatening categories (i.e. upper and lower class). We tested these predictions with two complementary studies: (i) a neuropsychological experiment with patients with vmPFC or amygdala lesions, contrasted with brain-damaged and normal comparison participants, and (ii) a functional magnetic resonance imaging experiment with 15 healthy adults. Our findings suggest that two distinct mechanisms underlie class-based racial evaluations, one engaging the vmPFC for a positively identified in-group class and another recruiting the amygdala for class groups that are marginalized or perceived as potential threats. © The Author (2017). Published by Oxford University Press.

  9. Patmos: a time-predictable microprocessor

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Puffitsch, Wolfgang; Hepp, Stefan

    2018-01-01

    rather than for high average-case performance. Patmos is a dual-issue, statically scheduled RISC processor. A method cache serves as the cache for the instructions and a split cache organization simplifies the WCET analysis of the data cache. To fill the dual-issue pipeline with enough useful...
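
    The mechanism sketched in this record lends itself to a small illustration: because a whole function is the unit of caching, a hit/miss check is needed only at call and return, and instruction fetch inside a cached method can never miss. The following C sketch shows such a lookup; the block count, the FIFO replacement, and all names are illustrative assumptions rather than details taken from Patmos.

        #include <stdint.h>

        #define MC_BLOCKS 16            /* number of method-cache entries (assumed) */

        /* One cached method, tagged by its start address. */
        struct mc_entry {
            uint32_t tag;
            int      valid;
        };

        static struct mc_entry mcache[MC_BLOCKS];
        static int fifo_head;           /* next entry to evict (FIFO) */

        /* Checked only on call/return.  A hit means the entire method is
         * already cached, so instruction fetch within it cannot miss; this
         * is what keeps the WCET analysis simple. */
        int method_cache_access(uint32_t method_addr)
        {
            for (int i = 0; i < MC_BLOCKS; i++)
                if (mcache[i].valid && mcache[i].tag == method_addr)
                    return 1;           /* hit: no memory traffic */

            /* Miss: evict in FIFO order, then load the whole method. */
            mcache[fifo_head].tag = method_addr;
            mcache[fifo_head].valid = 1;
            fifo_head = (fifo_head + 1) % MC_BLOCKS;
            return 0;
        }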

  10. Neurocircuits underlying cognition-emotion interaction in a social decision making context.

    Science.gov (United States)

    Ho, S Shaun; Gonzalez, Richard D; Abelson, James L; Liberzon, Israel

    2012-11-01

    Decision making (DM) in the context of others often entails complex cognition-emotion interaction. While the literature suggests that the ventromedial prefrontal cortex (vmPFC), striatum, and amygdala are involved in valuation-based DM and the hippocampus in context processing, how these neural mechanisms subserve the integration of cognitive and emotional values in a social context remains unclear. In this study we addressed this gap by systematically manipulating cognition-emotion interaction in a social DM context, in which participants played a card game with a hypothetical opponent in a behavioral study (n=73) and a functional magnetic resonance imaging study (n=16). We observed that payoff-based behavioral choices were influenced by emotional values carried by face pictures and identified neurocircuits involved in cognitive valuation, emotional valuation, and concurrent cognition-emotion value integration. Specifically, while the vmPFC, amygdala, and ventral striatum were all involved in both cognitive and emotional domains of valuation, these regions played dissociable roles in social DM. The payoff-dependent responses in vmPFC and amygdala, but not ventral striatum, were moderated by the social context. Furthermore, the vmPFC, but not amygdala, not only encoded the opponent's gains as if they were the self's losses, but also represented a "final common currency" during valuation-based decisions. The extent to which emotional input influenced choices was associated with the functional connectivity between the value-signaling amygdala and the value-integrating vmPFC, and also with the functional connectivity between the context-setting hippocampus and the value-signaling amygdala and ventral striatum. These results identify brain pathways through which emotion shapes subjective values in a social DM context. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Interference control by best-effort process duty-cycling in chip multi-processor systems for real-time medical image processing

    NARCIS (Netherlands)

    Westmijze, M.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Systems with chip multi-processors are currently used for several applications that have real-time requirements. In chip multi-processor architectures, many hardware resources such as parts of the cache hierarchy are shared between cores and by using such resources, applications can significantly
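
    The abstract above is cut off, but the technique named in the title can still be illustrated: a best-effort process is suspended and resumed on a fixed period, so the fraction of time in which it can pollute shared resources such as caches is bounded by the duty cycle. The sketch below is a user-space approximation in C using POSIX signals; the controller is a hypothetical stand-in, not the paper's actual mechanism.

        #include <signal.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Run `best_effort` for on_us out of every period_us microseconds.
         * Shared-resource interference with the real-time workload is then
         * bounded by the ratio on_us / period_us. */
        void duty_cycle(pid_t best_effort, useconds_t on_us, useconds_t period_us)
        {
            for (;;) {
                kill(best_effort, SIGCONT);   /* let the process run ... */
                usleep(on_us);
                kill(best_effort, SIGSTOP);   /* ... then freeze it */
                usleep(period_us - on_us);
            }
        }

    For example, duty_cycle(pid, 2000, 10000) would cap the best-effort process at roughly a 20% duty cycle.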

  12. Exposing Instruction-Level Parallelism in the Presence of Loops

    OpenAIRE

    de Alba, Marcos R; Kaeli, David

    2004-01-01

    In this thesis we explore how to utilize a loop cache to relieve the unnecessary pressure placed on the trace cache by loops. Due to their high temporal locality, loops should be cached. We have observed that when loops contain control flow instructions in their bodies, it is better to collect traces in a dedicated loop cache instead of using trace cache space. The traces of instructions within loops tend to exhibit predictable patterns that can be detected and exploited at run-time. We...
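
    The dispatch decision this record describes can be sketched: a backward taken branch identifies a loop, and subsequent iterations are served from a dedicated loop cache rather than consuming trace cache space. The C fragment below is a hedged illustration; the indexing scheme and all structure names are invented, not taken from the thesis.

        #include <stdint.h>

        #define LC_SLOTS 64                 /* loop-cache capacity (assumed) */

        struct loop_entry {
            uint32_t loop_head;             /* target of the backward branch */
            int      valid;
        };

        static struct loop_entry loop_cache[LC_SLOTS];

        static int lc_index(uint32_t pc) { return (pc >> 2) % LC_SLOTS; }

        /* Called when the branch at branch_pc to target is taken. */
        int on_taken_branch(uint32_t branch_pc, uint32_t target)
        {
            if (target >= branch_pc)
                return 0;                   /* forward branch: not a loop */

            struct loop_entry *e = &loop_cache[lc_index(target)];
            if (e->valid && e->loop_head == target)
                return 1;                   /* hit: fetch body from loop cache */

            e->loop_head = target;          /* allocate a slot and start     */
            e->valid = 1;                   /* collecting this loop's trace  */
            return 0;
        }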

  13. The potential inclusion of value management subject for postgraduate programmes in Malaysia

    Science.gov (United States)

    Che Mat, M.; Karim, S. B. Abd; Amran, N. A. E.

    2018-02-01

    The development of the construction industry is increasing tremendously. To keep pace with this growth, Value Management (VM) is needed to achieve optimum function by reducing or eliminating unnecessary cost that does not contribute to the product, system or service. As VM has been increasingly applied to enhance and improve value in construction projects, this study investigates the potential inclusion of VM as a subject in master’s degree programmes at selected public universities in Malaysia. A questionnaire survey was designed and delivered to existing master’s students to explore their current understanding of VM as well as the possibility of introducing VM as a subject. The results showed that the level of awareness of VM is high, yet the understanding of VM is low. This research presents results on introducing VM as a taught subject in master’s-level programmes at selected public universities in Malaysia.

  14. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.

    Science.gov (United States)

    Zheng, Da; Burns, Randal; Szalay, Alexander S

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock contention in non-uniform memory architecture machines. We evaluate our design on a 32-core NUMA machine with four eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.
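
    The structural idea in this record is easy to show in miniature: if the cache is set-associative and each set carries its own lock, lookups of blocks that hash to different sets never contend, which is what lets performance scale across cores. The following C sketch is not the authors' code; the sizes, names, and hash function are assumptions.

        #include <pthread.h>
        #include <stdint.h>
        #include <stddef.h>

        #define NSETS 1024
        #define WAYS  8

        struct page {
            uint64_t block;                 /* block number tag */
            int      valid;
            void    *data;
        };

        struct cache_set {
            pthread_spinlock_t lock;        /* contention confined to one set */
            struct page way[WAYS];
        };

        static struct cache_set sets[NSETS];

        void cache_init(void)
        {
            for (int i = 0; i < NSETS; i++)
                pthread_spin_init(&sets[i].lock, PTHREAD_PROCESS_PRIVATE);
        }

        /* Returns the cached page data, or NULL on a miss (the caller then
         * reads the block from SSD and inserts it). */
        void *cache_lookup(uint64_t block)
        {
            struct cache_set *s = &sets[block % NSETS];  /* hash block -> set */
            void *hit = NULL;

            pthread_spin_lock(&s->lock);
            for (int w = 0; w < WAYS; w++) {
                if (s->way[w].valid && s->way[w].block == block) {
                    hit = s->way[w].data;
                    break;
                }
            }
            pthread_spin_unlock(&s->lock);
            return hit;
        }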

  15. Conditional load and store in a shared memory

    Science.gov (United States)

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
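
    The reservation-register scheme in this record can be modeled compactly: the shared cache records one reserved address per processor, and a store-conditional succeeds only if the issuing processor's reservation is still intact. The C sketch below uses a single mutex to stand in for the cache's internal serialization; all names are invented for illustration, and no claim is made about the patented hardware itself.

        #include <pthread.h>
        #include <stdint.h>

        #define NPROCS 16

        static uintptr_t reservation[NPROCS];   /* reserved address per processor */
        static int       res_valid[NPROCS];
        static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

        uint64_t load_reserve(int cpu, uint64_t *addr)
        {
            pthread_mutex_lock(&cache_lock);
            reservation[cpu] = (uintptr_t)addr; /* record the reservation */
            res_valid[cpu] = 1;
            uint64_t v = *addr;
            pthread_mutex_unlock(&cache_lock);
            return v;
        }

        /* Returns 1 if the store went through, 0 if the reservation was lost. */
        int store_conditional(int cpu, uint64_t *addr, uint64_t val)
        {
            int ok = 0;
            pthread_mutex_lock(&cache_lock);
            if (res_valid[cpu] && reservation[cpu] == (uintptr_t)addr) {
                *addr = val;
                /* Cancel other processors' reservations on this address so
                 * their pending store-conditionals fail. */
                for (int p = 0; p < NPROCS; p++)
                    if (p != cpu && reservation[p] == (uintptr_t)addr)
                        res_valid[p] = 0;
                ok = 1;
            }
            res_valid[cpu] = 0;                 /* reservation is consumed */
            pthread_mutex_unlock(&cache_lock);
            return ok;
        }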

  17. Barratt Impulsivity and Neural Regulation of Physiological Arousal.

    Science.gov (United States)

    Zhang, Sheng; Hu, Sien; Hu, Jianping; Wu, Po-Lun; Chao, Herta H; Li, Chiang-shan R

    2015-01-01

    Theories of personality have posited an increased arousal response to external stimulation in impulsive individuals. However, there is a dearth of studies addressing the neural basis of this association. We recorded skin conductance in 26 individuals who were assessed with the Barratt Impulsivity Scale (BIS-11) and performed a stop signal task during functional magnetic resonance imaging. Imaging data were processed and modeled with Statistical Parametric Mapping. We used linear regressions to examine correlations between impulsivity and skin conductance response (SCR) to salient events, identify the neural substrates of arousal regulation, and examine the relationship between the regulatory mechanism and impulsivity. Across subjects, higher impulsivity is associated with greater SCR to stop trials. Activity of the ventromedial prefrontal cortex (vmPFC) negatively correlated with, and Granger-caused, the skin conductance time course. Furthermore, higher impulsivity is associated with a lesser strength of Granger causality of vmPFC activity on skin conductance, consistent with diminished control of physiological arousal to external stimulation. When men (n = 14) and women (n = 12) were examined separately, however, there was evidence suggesting an association between impulsivity and vmPFC regulation of arousal only in women. Together, these findings confirmed the link between Barratt impulsivity and heightened arousal to salient stimuli in both genders and suggested the neural bases of altered regulation of arousal in impulsive women. More research is needed to explore the neural processes of arousal regulation in impulsive individuals and in clinical conditions that implicate poor impulse control.

  18. Urban Geocaching: What Happened in Lisbon during the Last Decade?

    Science.gov (United States)

    Nogueira Mendes, R.; Rodrigues, T.; Rodrigues, A. M.

    2013-05-01

    Created in 2000 in the United States of America, Geocaching has become a major phenomenon all around the world, now counting millions of Geocaches (or caches) that serve as a recreational motivation for millions of users, called Geocachers. During the last 30 days, over 5,000,000 new logs have been submitted worldwide, disseminating individual experiences, motivations, emotions and photos through the official Geocaching website (www.geocaching.com) and several official or informal national web forums. The activity itself can be compared to a modern treasure hunt that uses handheld GPS units, smartphones or tablets, Web 2.0, and wiki features and technologies to keep Geocachers engaged with their activity in a strong social network. All these characteristics make Geocaching an activity with a strong geographic component that deals closely with the surrounding environment where each cache has been hidden. Previous work has found significant correlations between cache hides and natural/rural environments, but metropolitan and urban areas like the Lisbon municipality (which holds 3.23% of the total 27,534 Portuguese caches) still register the highest densities of Geocaches and log numbers. Lacking a "natural/rural" environment, Geocaching in cities tends to happen in symbolic areas, like public parks and places, sightseeing spots and historical neighborhoods. The present study looks at Geocaching within the city of Lisbon in order to understand how it works and whether this activity reflects the city itself, promoting its image and cultural heritage. Spatial analysis has been conducted on a freely available dataset that includes all Geocaches placed in Lisbon since February 2001, showing the informal preferences of this activity. Results show a non-random distribution of caches within the study area, similar to the land use distribution. Preferred locations tend to be iconic places of the city, usually close to the Tagus River, which concentrates 25

  19. Modeled Urea Distribution Volume and Mortality in the HEMO Study

    Science.gov (United States)

    Greene, Tom; Depner, Thomas A.; Levin, Nathan W.; Chertow, Glenn M.

    2011-01-01

    Background and objectives: In the Hemodialysis (HEMO) Study, observed small decreases in achieved equilibrated Kt/Vurea were noncausally associated with markedly increased mortality. Here we examine the association of mortality with modeled volume (Vm), the denominator of equilibrated Kt/Vurea. Design, setting, participants, & measurements: Parameters derived from modeled urea kinetics (including Vm) and blood pressure (BP) were obtained monthly in 1846 patients. Case mix–adjusted time-dependent Cox regressions were used to relate the relative mortality hazard at each time point to Vm and to the change in Vm over the preceding 6 months. Mixed effects models were used to relate Vm to changes in intradialytic systolic BP and to other factors at each follow-up visit. Results: Mortality was associated with Vm and change in Vm over the preceding 6 months. The association between change in Vm and mortality was independent of vascular access complications. In contrast, mortality was inversely associated with V calculated from anthropometric measurements (Vant). In case mix–adjusted analysis using Vm as a time-dependent covariate, the association of mortality with Vm strengthened after statistical adjustment for Vant. After adjustment for Vant, higher Vm was associated with slightly smaller reductions in intradialytic systolic BP and with risk factors for mortality including recent hospitalization and reductions in serum albumin concentration and body weight. Conclusions: An increase in Vm is a marker for illness and mortality risk in hemodialysis patients. PMID:21511841
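
    For readers unfamiliar with the notation: written out in standard urea-kinetic terms (the symbols are conventional, not taken from the paper), the equilibrated dose is

        \[
          \mathrm{eKt/V_{urea}} \;=\; \frac{K\,t}{V_m}
        \]

    where K is the dialyzer urea clearance, t the session length, and Vm the urea distribution volume fitted by the kinetic model. For a fixed delivered K·t, a larger fitted Vm therefore yields a lower equilibrated dose, which is why Vm sits in the denominator of the quantity studied above.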

  20. Olivetti M6 640

    CERN Multimedia

    1993-01-01

    The M6-640 is the highest-performance personal computer workstation in the Suprema range, with multimedia, document imaging and communications capabilities. It has a 90 MHz Pentium processor with 256 KB of secondary cache. It can accommodate up to 128 MB of RAM and supports hard disks of up to 1 GB through an IDE interface.