WorldWideScience

Sample records for web proxy cache

  1. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as Cascading Style Sheets, JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review of…
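
    The replacement strategies surveyed in this record can be illustrated with a minimal LRU (Least Recently Used) proxy cache. This is an illustrative sketch, not code from the book; all class names, URLs, and contents are invented.

```python
from collections import OrderedDict

class LRUProxyCache:
    """Toy LRU cache for static web objects, keyed by URL (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # URL -> cached object body

    def get(self, url):
        if url not in self.entries:
            return None  # cache miss: a real proxy would fetch from the origin
        self.entries.move_to_end(url)  # mark as most recently used
        return self.entries[url]

    def put(self, url, body):
        if url in self.entries:
            self.entries.move_to_end(url)
        self.entries[url] = body
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry

cache = LRUProxyCache(2)
cache.put("/style.css", "css-bytes")
cache.put("/app.js", "js-bytes")
cache.get("/style.css")         # touch: /app.js becomes least recently used
cache.put("/logo.png", "png")   # capacity exceeded: /app.js is evicted
assert cache.get("/app.js") is None
assert cache.get("/style.css") == "css-bytes"
```

The same skeleton accommodates other policies (FIFO, LFU) by changing only the eviction choice.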

  2. Cooperative Proxy Caching for Wireless Base Stations

    Directory of Open Access Journals (Sweden)

    James Z. Wang

    2007-01-01

    Full Text Available This paper proposes a mobile cache model to facilitate cooperative proxy caching in wireless base stations. The mobile cache model uses a network cache line to record the caching state information about a web document for effective data search and cache space management. Based on the proposed mobile cache model, a P2P cooperative proxy caching scheme is proposed that uses a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to network and geographic environment changes, to achieve efficient data search, data caching and data replication. Based on demand, the aggregate effect of data caching, searching and replicating actions by individual proxy servers automatically migrates cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to move and replicate the heads of the network cache lines of web documents associated with a moving mobile host to the new base station during mobile host handoff. These replicated cache line heads provide direct links to the cached web documents accessed by the moving mobile host in the previous base station, thus improving mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.

  3. Web Caching

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. Resonance – Journal of Science Education, General Article, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World Wide Web; data caching; internet traffic; web page access.

  4. Web Caching

    Indian Academy of Sciences (India)

    E-commerce and security. The World Wide Web has been growing in leaps and bounds. Studies have indicated that this massive distributed system can benefit greatly by making use of appropriate caching methods. Intelligent Web caching can lessen the burden on Web servers, improve their performance, and at the same …

  5. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua’a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If a requested page is cached in the proxy, there is no need to access the server. Because the cache is limited in size and costly compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU) and randomized policies may discard important pages just before they are used, and cannot be well optimized, since intelligently evicting a page requires additional decision information. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifier's prediction accuracy. The results show that the wrapper feature selection methods Best First Search (BFS), Incremental Wrapper Subset Selection embedded with NB (IWSS-embedded-NB) and Particle Swarm Optimization (PSO) reduce the number of features and have a good impact on reducing computation time. PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over NB with all features, BFS, and IWSS-embedded-NB respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over J48 with all features, IWSS-embedded-NB, and BFS respectively. IWSS-embedded-NB speeds up the NB and J48 classifiers far more than BFS and PSO, reducing the computation time of NB by 0.1383 and of J48 by 2.998.
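
    A wrapper feature selection method of the kind described here can be sketched as a greedy forward search driven by a classifier's evaluation score. The scoring function below is a toy stand-in for NB/J48 cross-validation accuracy, and all feature names and weights are invented.

```python
def wrapper_select(features, score_subset, max_size=None):
    """Greedy forward wrapper selection: repeatedly add the feature that
    most improves the (stand-in) classifier's evaluation score."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining:
        scored = [(score_subset(selected + [f]), f) for f in remaining]
        top_score, top_f = max(scored)
        if top_score <= best_score:
            break  # no remaining feature improves the score: stop
        selected.append(top_f)
        remaining.remove(top_f)
        best_score = top_score
        if max_size and len(selected) >= max_size:
            break
    return selected, best_score

# Toy additive score standing in for cross-validated classifier accuracy:
weights = {"recency": 0.5, "frequency": 0.3, "size": 0.1, "noise": -0.2}
score = lambda subset: sum(weights[f] for f in subset)

sel, s = wrapper_select(weights, score)
assert sel == ["recency", "frequency", "size"]  # the noisy feature is dropped
assert abs(s - 0.9) < 1e-9
```

A real wrapper method would plug an actual classifier evaluation (e.g. cross-validation accuracy) in place of the toy score, which is what makes wrapper selection more expensive but classifier-aware.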

  6. Improving Internet Archive Service through Proxy Cache.

    Science.gov (United States)

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext Transfer Protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  7. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once the documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating a new traffic that could increase the traffic on the international backbone and overload the popular servers. Several solutions have been proposed to solve this problem, among them two categories have been widely discussed: the strong document coherency and the weak document coherency. The cost and the efficiency of the two categories are still a controversial issue, while in some studies the strong coherency is far too expensive to be used in the Web context, in other studies it could be maintained at a low cost. The accuracy of these analysis is depending very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on the Internet traffic. The ultimate goal is to study the cache behavior under several conditions, which will cover some of the factors that play an important role in the Web cache performance evaluation and quantify their impact on the simulation accuracy. The results presented in this study show indeed some differences in the outcome of the simulation of a Web cache depending on the workload being used, and the probability distribution used to approximate updates on the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
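
    The weak-coherency category discussed in this record can be illustrated with a TTL-style freshness check, in which a cached document is served without contacting the origin until it expires. This is a sketch of the general idea only; the class and field names are hypothetical.

```python
import time

class CachedDoc:
    """Weak coherency sketch: a cached document is considered fresh, and is
    served without revalidation, until its time-to-live elapses."""

    def __init__(self, body, ttl_seconds, fetched_at=None):
        self.body = body
        self.ttl = ttl_seconds
        self.fetched_at = time.time() if fetched_at is None else fetched_at

    def is_fresh(self, now=None):
        now = time.time() if now is None else now
        return (now - self.fetched_at) < self.ttl

doc = CachedDoc("<html>...</html>", ttl_seconds=60, fetched_at=1000.0)
assert doc.is_fresh(now=1030.0)        # within TTL: serve from cache
assert not doc.is_fresh(now=1061.0)    # stale: revalidate with the origin
```

Strong coherency would instead validate with the origin on every access (or rely on server-driven invalidation), which is exactly the traffic trade-off the abstract's simulations examine.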

  8. Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?

    Science.gov (United States)

    Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard

    1997-01-01

    Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browsers' computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, trawler software and presents a series of correction factors…

  9. Web proxy auto discovery for the WLCG

    CERN Document Server

    Dykstra, D; Blumenfeld, B; De Salvo, A; Dewhurst, A; Verguilov, V

    2017-01-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids regis...

  10. Web Proxy Auto Discovery for the WLCG

    Science.gov (United States)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
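
    A PAC file of the kind a WPAD server returns is a JavaScript file defining a FindProxyForURL function. The sketch below is illustrative only: the host patterns and proxy names are invented examples, not the actual WLCG configuration.

```javascript
// Minimal PAC (Proxy Auto Configuration) function. A WPAD server returns a
// file like this over HTTP; clients call FindProxyForURL for each request.
// Host names and squid addresses below are hypothetical.
function FindProxyForURL(url, host) {
  // Route matching traffic through the site squids, with a fallback squid
  // and a final DIRECT fallback; everything else goes direct.
  if (host.endsWith(".cern.ch") || host.endsWith(".example.org")) {
    return "PROXY squid1.site.example:3128; PROXY squid2.site.example:3128; DIRECT";
  }
  return "DIRECT";
}
```

The semicolon-separated return value is an ordered fallback list, which is how a PAC response can encode several available web proxies at once.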

  11. Here be web proxies

    DEFF Research Database (Denmark)

    Weaver, Nicholas; Kreibich, Christian; Dam, Martin

    2014-01-01

    HTTP proxies serve numerous roles, from performance enhancement to access control to network censorship, but often operate stealthily without explicitly indicating their presence to the communicating endpoints. In this paper we present an analysis of the evidence of proxying manifest in executions...... of the ICSI Netalyzr spanning 646,000 distinct IP addresses ("clients"). To identify proxies we employ a range of detectors at the transport and application layer, and report in detail on the extent to which they allow us to fingerprint and map proxies to their likely intended uses. We also analyze 17...

  12. Integration of recommender system for Web cache management

    Directory of Open Access Journals (Sweden)

    Pattarasinee Bhattarakosol

    2013-06-01

    Full Text Available Web caching is widely recognised as an effective technique that improves the quality of service over the Internet, such as reducing user latency and network bandwidth usage. However, this method has limitations due to hardware and the management policies of caches. The Behaviour-Based Cache Management Model (BBCMM) is therefore proposed as an alternative caching architecture model with the integration of a recommender system. This architecture is a cache grouping mechanism in which browsing characteristics are applied to improve the performance of Internet services. The results indicate that the byte hit rate of the new architecture increases by more than 18% and the delay measurement drops by more than 56%. In addition, a theoretical comparison between the proposed model and traditional cooperative caching models shows a performance improvement of the proposed model in the cache system.
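
    The byte hit rate reported above is the fraction of requested bytes served from the cache rather than fetched upstream. A minimal computation, with an invented request log:

```python
def byte_hit_rate(requests):
    """Fraction of requested bytes served from cache.
    Each request is a (size_in_bytes, was_cache_hit) pair."""
    total = sum(size for size, _ in requests)
    hit_bytes = sum(size for size, hit in requests if hit)
    return hit_bytes / total if total else 0.0

# Hypothetical request log: (bytes transferred, served from cache?)
log = [(1000, True), (4000, False), (2000, True), (3000, False)]
assert byte_hit_rate(log) == 0.3  # 3000 of 10000 bytes came from cache
```

Note that byte hit rate weights large objects more heavily than the plain (request-count) hit rate, which is why the two metrics can rank caching policies differently.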

  13. Food caching in orb-web spiders (Araneae: Araneoidea)

    Science.gov (United States)

    Champion de Crespigny, Fleur E.; Herberstein, Marie E.; Elgar, Mark A.

    2001-01-01

    Caching or storing surplus prey may reduce the risk of starvation during periods of food deprivation. While this behaviour occurs in a variety of birds and mammals, it is infrequent among invertebrates. However, golden orb-web spiders, Nephila edulis, incorporate a prey cache in their relatively permanent web, which they feed on during periods of food shortage. Heavier spiders significantly reduced weight loss if they were able to access a cache, but lost weight if the cache was removed. The presence or absence of stored prey had no effect on the weight loss of lighter spiders. Furthermore, N. edulis always attacked new prey, irrespective of the number of unprocessed prey in the web. In contrast, females of Argiope keyserlingi, who build a new web every day and do not cache prey, attacked fewer new prey items if some had already been caught. Thus, a necessary pre-adaptation to the evolution of prey caching in orb-web spiders may be a durable or permanent web, such as that constructed by Nephila.

  14. Caching Strategies for Data-Intensive Web Sites

    OpenAIRE

    Florescu, Daniela; Issarny, Valérie; Valduriez, Patrick; Yagoub, Khaled

    2000-01-01

    Projet CARAVEL; A data-intensive Web site is a Web server that accesses large numbers of pages whose content is dynamically extracted from a database. In this context, returning a Web page may require costly interaction with the database system (for connection and querying), thereby greatly increasing the response time. In this paper, we address this performance problem. Our approach relies on the declarative specification of the Web site. We propose a customizable cache system architecture and i...

  15. Proxy-based Video Transmission: Error Resiliency, Resource Allocation, and Dynamic Caching

    OpenAIRE

    Tu, Wei

    2009-01-01

    In this dissertation, several approaches are proposed to improve the quality of video transmission over wired and wireless networks. To improve the robustness of video transmission over error-prone mobile networks, a proxy-based reference picture selection scheme is proposed. In the second part of the dissertation, rate-distortion optimized rate adaptation algorithms are proposed for video applications over congested network nodes. A segment-based proxy caching algorithm for video-on-demand a...

  16. Servidor proxy caché: comprensión y asimilación tecnológica

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Full Text Available Internet access providers usually include the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. It is difficult for system administrators to choose the configuration of a proxy cache server, since it is necessary to decide the values to be used for several variables. This article presents how the process of understanding and technological assimilation of the proxy cache service, a service with high organizational impact, was approached. The article is also a product of the research project "Análisis de configuraciones de servidores proxy caché", in which relevant aspects of the performance of Squid as a proxy cache server were studied.

  17. Dynamic web cache publishing for IaaS clouds using Shoal

    Science.gov (United States)

    Gable, Ian; Chester, Michael; Armstrong, Patrick; Berghaus, Frank; Charbonneau, Andre; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan

    2014-06-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache.
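
    Shoal's closest-Squid lookup can be sketched as a nearest-neighbour selection over advertised cache locations. The coordinates and host names below are invented, and the real Shoal server resolves proximity from client requests via its REST interface rather than from explicit coordinates passed by the caller.

```python
import math

def nearest_cache(client_pos, caches):
    """Return the name of the advertised cache closest to the client.
    caches maps cache name -> (x, y) position (hypothetical coordinates)."""
    return min(caches, key=lambda name: math.dist(client_pos, caches[name]))

# Squids that have advertised themselves (names and positions are invented):
squids = {
    "squid-east.example": (10.0, 2.0),
    "squid-west.example": (-8.0, 1.0),
}
assert nearest_cache((-7.0, 0.0), squids) == "squid-west.example"
```

In the real system the agents advertise over AMQP and clients query a REST endpoint; only the "pick the minimum-distance server" step is shown here.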

  18. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web application characteristics. Thus, new prefetching policies must be loaded dynamically as needs change. Most Web caches are large C programs, and thus adding one or more prefetching policies to an existing Web cache is a daunting task. The main problem is that prefetching concerns crosscut the cache structure. Aspect-oriented programming is a natural technique to address this issue. Nevertheless, existing approaches either do not provide dynamic weaving, incur a high overhead for invocation of dynamically loaded code, or do not target C applications. In this paper we present µ-Dyner, which addresses...

  19. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd disk-based caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...

  20. A Caching Mechanism for Semantic Web Service Discovery

    Science.gov (United States)

    Stollberg, Michael; Hepp, Martin; Hoffmann, Jörg

    The discovery of suitable Web services for a given task is one of the central operations in Service-oriented Architectures (SOA), and research on Semantic Web services (SWS) aims at automating this step. For the large amount of available Web services that can be expected in real-world settings, the computational costs of automated discovery based on semantic matchmaking become important. To make a discovery engine a reliable software component, we must thus aim at minimizing both the mean and the variance of the duration of the discovery task. For this, we present an extension for discovery engines in SWS environments that exploits structural knowledge and previous discovery results for reducing the search space of consequent discovery operations. Our prototype implementation shows significant improvements when applied to the Stanford SWS Challenge scenario and dataset.

  1. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo-positioning...

  2. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    Full Text Available The web is the most widely used communication mechanism today, thanks to its flexibility and the almost endless supply of tools for browsing it. As a result, around a million pages are added to it every day. It is thus the largest library, with textual and multimedia resources, ever seen, albeit a library distributed across all the servers that hold that information. As a reference source, it is important that data retrieval be efficient. For this purpose there is Web caching, a technique by which some web data are temporarily stored on local servers so that they need not be requested from the remote server every time a user asks for them. However, the amount of memory available on local servers to store that information is limited: it must be decided which web objects are stored and which are not. This gives rise to several replacement policies, which are explored in this article. Using an experiment with real Web requests, we compare the performance of these techniques.

  3. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things.

    Science.gov (United States)

    Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen

    2017-07-11

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

  4. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things

    Directory of Open Access Journals (Sweden)

    Floris Van den Abeele

    2017-07-01

    Full Text Available As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

  5. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things

    Science.gov (United States)

    Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet

    2017-01-01

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things. PMID:28696393

  6. Contribution in Adaptating Web Interfaces to any Device on the Fly: The HCI Proxy

    Science.gov (United States)

    Lardon, Jérémy; Fayolle, Jacques; Gravier, Christophe; Ates, Mikaël

    2008-11-01

    A lot of work has been done on the adaptation of UIs. In the particular field of Web UI adaptation, many research projects aim at displaying web content designed for PCs on less capable devices. In this paper, we present previous work in the domain and then our proxy architecture, the HCI proxy, built to test solutions to the problem of adapting Web UIs on the fly for mobile phones, PDAs and smartphones, but also for TVs through browser-embedding set-top boxes.

  7. What's New in Apache Web Server 2.2?

    CERN Document Server

    Bowen, Rich

    2007-01-01

    What's New in Apache Web Server 2.2? shows you all the new features you'll need to know to set up and administer the Apache 2.2 web server. Learn how to take advantage of its improved caching, proxying, authentication, and other enhancements in your Web 2.0 applications.

  8. Processor Cache

    NARCIS (Netherlands)

    P.A. Boncz (Peter); L. Liu (Lei); M. Tamer Özsu

    2008-01-01

    To hide the high latencies of DRAM access, modern computer architecture now features a memory hierarchy that, besides DRAM, also includes SRAM cache memories, typically located on the CPU chip. Memory accesses first check these caches, which takes only a few cycles. Only if the needed data…

  9. A Proxy Design to Leverage the Interconnection of CoAP Wireless Sensor Networks with Web Applications

    Science.gov (United States)

    Ludovici, Alessandro; Calveras, Anna

    2015-01-01

    In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling. PMID:25585107

  10. A proxy design to leverage the interconnection of CoAP Wireless Sensor Networks with Web applications.

    Science.gov (United States)

    Ludovici, Alessandro; Calveras, Anna

    2015-01-09

    In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling.

  11. SIDECACHE: Information access, management and dissemination framework for web services.

    Science.gov (United States)

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually followed by web service restart. Requests for information obtained by dynamic access of upstream sources is sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
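
    SideCache's combination of local caching and upstream rate control can be sketched as follows. The class, fetch function, keys, and interval are hypothetical illustrations of the idea, not SideCache's actual API.

```python
import time

class RateLimitedCache:
    """Sketch of SideCache-style behaviour: serve repeat requests from a local
    cache, and cap how often the upstream source may be contacted."""

    def __init__(self, fetch, min_interval):
        self.fetch = fetch                # function contacting the upstream source
        self.min_interval = min_interval  # minimum seconds between upstream calls
        self.cache = {}
        self.last_fetch = float("-inf")

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        if key in self.cache:
            return self.cache[key]  # local hit: no upstream traffic at all
        if now - self.last_fetch < self.min_interval:
            raise RuntimeError("rate limit: upstream fetch deferred")
        self.last_fetch = now
        self.cache[key] = self.fetch(key)
        return self.cache[key]

# Hypothetical upstream fetch that records each call it receives:
upstream_calls = []
def fake_fetch(key):
    upstream_calls.append(key)
    return f"data:{key}"

cc = RateLimitedCache(fake_fetch, min_interval=1.0)
assert cc.get("gene42", now=0.0) == "data:gene42"   # first request goes upstream
assert cc.get("gene42", now=0.1) == "data:gene42"   # repeat served from cache
assert upstream_calls == ["gene42"]
```

A full implementation would add expiry of cached entries when the upstream repository updates, which is the "automatic web service updating" part of the framework.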

  12. SIDECACHE: Information access, management and dissemination framework for web services

    Directory of Open Access Journals (Sweden)

    Robbins Kay A

    2011-06-01

    Full Text Available Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.

  13. Stack Caching Using Split Data Caches

    DEFF Research Database (Denmark)

    Nielsen, Carsten; Schoeberl, Martin

    2015-01-01

    In most embedded and general purpose architectures, stack data and non-stack data are cached together, meaning that writing to or loading from the stack may expel non-stack data from the data cache. Manipulation of the stack has a different memory access pattern than that of non-stack data, showing higher temporal and spatial locality. We propose caching stack and non-stack data separately and develop four different stack caches that allow this separation without requiring compiler support. These are the simple, window, and prefilling with and without tag stack caches. The performance of the stack cache architectures was evaluated using the SimpleScalar toolset, where the window and prefilling stack cache without tag resulted in an execution speedup of up to 3.5% for the MiBench benchmarks, executed on an out-of-order processor with the ARM instruction set.
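
The benefit of splitting can be illustrated with a toy simulation (not the paper's SimpleScalar setup): in a shared direct-mapped cache, stack accesses can evict non-stack lines that alias to the same set, while split caches remove that interference. Cache sizes and the address trace below are invented for the example.

```python
# Toy comparison of a shared direct-mapped cache vs. split stack/data caches.
class DirectMapped:
    def __init__(self, lines):
        self.lines = lines
        self.tags = [None] * lines
        self.hits = 0

    def access(self, addr):
        idx = addr % self.lines           # direct-mapped set index
        if self.tags[idx] == addr:
            self.hits += 1
        self.tags[idx] = addr             # fill on miss (or refresh on hit)

def run(trace, split):
    if split:
        caches = {"stack": DirectMapped(4), "data": DirectMapped(4)}
    else:
        shared = DirectMapped(8)
        caches = {"stack": shared, "data": shared}
    for kind, addr in trace:
        caches[kind].access(addr)
    return sum(c.hits for c in set(caches.values()))

# A hot data address interleaved with stack churn that aliases it in the shared cache.
trace = [("data", 8), ("stack", 16)] * 20   # both map to set 0 of an 8-line shared cache
print("shared hits:", run(trace, split=False), "split hits:", run(trace, split=True))
```

With this adversarial trace the shared cache thrashes on set 0 and scores zero hits, while the split configuration hits on every access after the first fill of each cache.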

  14. Web of Deceit: A Literature Review of Munchausen Syndrome by Proxy.

    Science.gov (United States)

    Rosenberg, Donna A.

    1987-01-01

    A literature review and case study provide detailed characteristics of Munchausen syndrome by proxy, a form of child abuse wherein the mother falsifies illness in her child through simulation and/or production of illness, presenting the child for medical care while disclaiming knowledge of its etiology. Guidelines for medical, social service, and…

  15. Linking Annual Prescription Volume of Antidepressants to Corresponding Web Search Query Data: A Possible Proxy for Medical Prescription Behavior?

    Science.gov (United States)

    Gahr, Maximilian; Uzelac, Zeljko; Zeiss, René; Connemann, Bernhard J; Lang, Dirk; Schönfeldt-Lecuona, Carlos

    2015-12-01

    Persons using the Internet to retrieve medical information generate large amounts of health-related data, which are increasingly used in modern health sciences. We analyzed the relation between annual prescription volumes (APVs) of several antidepressants with marketing approval in Germany and corresponding web search query data generated in Google to test whether web search query volume may be a proxy for medical prescription practice. We obtained APVs of several antidepressants related to corresponding prescriptions at the expense of the statutory health insurance in Germany from 2004 to 2013. Web search query data generated in Germany and related to defined search terms (active substance or brand name) were obtained with Google Trends. We calculated correlations (Pearson's r) between the APVs of each substance and the respective annual "search share" values; coefficients of determination (R²) were computed to determine the amount of variability shared by the 2 variables. Significant and strong correlations between substance-specific APVs and corresponding annual query volumes were found for each substance during the observational interval: agomelatine (r = 0.968, R² = 0.932, P = 0.01), bupropion (r = 0.962, R² = 0.925, P = 0.01), citalopram (r = 0.970, R² = 0.941, P = 0.01), escitalopram (r = 0.824, R² = 0.682, P = 0.01), fluoxetine (r = 0.885, R² = 0.783, P = 0.01), paroxetine (r = 0.801, R² = 0.641, P = 0.01), and sertraline (r = 0.880, R² = 0.689, P = 0.01). Although the data used did not allow us to perform an analysis with a higher temporal resolution (quarters, months), our results suggest that web search query volume may be a proxy for corresponding prescription behavior. However, further studies analyzing other pharmacologic agents and prescription data that facilitate an increased temporal resolution are needed to confirm this hypothesis.
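
The study's per-substance statistics can be sketched in a few lines: Pearson's r between the yearly prescription volumes and the yearly search shares, with R² as its square. The two series below are hypothetical illustrations, not the study's data.

```python
# Pearson correlation between annual prescription volumes and annual search shares.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

apv = [120, 135, 150, 170, 160, 180, 200, 210, 230, 250]   # prescriptions/year (made up)
search = [10, 12, 14, 17, 15, 18, 21, 20, 24, 26]          # annual search share (made up)

r = pearson_r(apv, search)
print(f"r = {r:.3f}, R^2 = {r * r:.3f}")
```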

  16. Magnetic susceptibility of spider webs as a proxy of airborne metal pollution.

    Science.gov (United States)

    Rachwał, Marzena; Rybak, Justyna; Rogula-Kozłowska, Wioletta

    2017-12-05

    The purpose of this pilot study was to test spider webs as a fast tool for magnetic biomonitoring of air pollution. The study involved the investigation of webs made by four types of spiders: Pholcus phalangioides (Pholcidae), Eratigena atrica and Agelena labirynthica (Agelenidae) and Linyphia triangularis (Linyphiidae). These webs were obtained from outdoor and indoor study sites. Compared to the clean reference webs, an increase was observed in the values of magnetic susceptibility in the webs sampled from both indoor and outdoor sites, which indicates contamination by anthropogenically produced pollution particles that contain ferrimagnetic iron minerals. This pilot study has demonstrated that spider webs are able to capture particulate matter in a manner that is equivalent to flora-based bioindicators applied to date (such as mosses, lichens, leaves). They also have additional advantages; for example, they can be generated in isolated clean habitats, and exposure can be monitored in indoor and outdoor locations, at any height and for any period of time. Moreover, webs are ubiquitous in an anthropogenic, heavily polluted environment, and they can be exposed throughout the year. As spider webs accumulate pollutants to which humans are exposed, they become a reliable source of information about the quality of the environment. Therefore, spider webs are recommended for magnetic biomonitoring of airborne pollution and for the assessment of the environment because they are non-destructive, low-cost, sensitive and efficient. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. New proxy replacement algorithm for multimedia streaming

    Science.gov (United States)

    Wong, Hau Ling; Lo, Kwok-Tung

    2001-11-01

    Proxy servers play an important role between servers and clients in various multimedia systems on the Internet. Since proxy servers do not have an infinite-capacity cache for keeping all the continuous media data, the challenge for the replacement policy is to determine which streams should be cached or removed from the proxy server. In this paper, a new proxy replacement algorithm, named the Least Popular Used (LPU) caching algorithm, is proposed for layered encoded multimedia streams on the Internet. The LPU method takes both the short-term and long-term popularity of the video into account in determining the replacement policy. Simulation evaluation shows that our proposed scheme achieves better results than some existing methods in terms of cache efficiency and replacement frequency under both static and dynamic access environments.
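
The abstract does not give the LPU scoring formula, but the idea of mixing short-term and long-term popularity can be sketched as follows. The equal 0.5/0.5 weighting, the sliding-window size, and the evict-on-admit policy are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical LPU-style eviction: score = alpha * short-term share + (1-alpha) * long-term share.
from collections import deque

class LPUCache:
    def __init__(self, capacity, window=100, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha
        self.window = deque(maxlen=window)   # recent request history (short-term)
        self.total = {}                      # cumulative request counts (long-term)
        self.cache = set()

    def request(self, stream_id):
        self.window.append(stream_id)
        self.total[stream_id] = self.total.get(stream_id, 0) + 1
        if stream_id not in self.cache:
            if len(self.cache) >= self.capacity:
                # Evict the least popular cached stream.
                self.cache.remove(min(self.cache, key=self._popularity))
            self.cache.add(stream_id)

    def _popularity(self, stream_id):
        short = self.window.count(stream_id) / max(len(self.window), 1)
        long_ = self.total.get(stream_id, 0) / max(sum(self.total.values()), 1)
        return self.alpha * short + (1 - self.alpha) * long_
```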

  18. Client-Driven Joint Cache Management and Rate Adaptation for Dynamic Adaptive Streaming over HTTP

    Directory of Open Access Journals (Sweden)

    Chenghao Liu

    2013-01-01

    Full Text Available Because proxy-driven proxy cache management and the client-driven streaming solution of Dynamic Adaptive Streaming over HTTP (DASH) are two independent processes, some difficulties and challenges arise in media data management at the proxy cache and rate adaptation at the DASH client. This paper presents a novel client-driven joint proxy cache management and DASH rate adaptation method, named CLICRA, which moves prefetching intelligence from the proxy cache to the client. Based on the philosophy of CLICRA, this paper proposes a rate adaptation algorithm which selects bitrates for the next media segments to be requested by using the predicted buffered media time in the client. CLICRA is realized by conveying information on the segments that are likely to be fetched subsequently to the proxy cache, so that it can use the information for prefetching. Simulation results show that the proposed method significantly outperforms the conventional segment-fetch-time-based rate adaptation and the proxy-driven proxy cache management, not only in streaming quality at the client but also in bandwidth and storage usage in proxy caches.
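
The core of buffer-aware rate selection can be sketched as: pick the highest bitrate whose predicted download keeps the buffered media time above a safety margin. The threshold, the bitrate ladder, and the linear throughput estimate below are assumptions for illustration, not CLICRA's actual model.

```python
# Buffer-based bitrate selection sketch in the spirit of client-driven rate adaptation.
def select_bitrate(bitrates_bps, segment_sec, buffered_sec,
                   est_throughput_bps, min_buffer_sec=4.0):
    """Return the highest bitrate predicted to keep the buffer above the threshold."""
    for rate in sorted(bitrates_bps, reverse=True):
        download_sec = rate * segment_sec / est_throughput_bps
        # While downloading we drain download_sec of playback and gain segment_sec.
        predicted = buffered_sec - download_sec + segment_sec
        if predicted >= min_buffer_sec:
            return rate
    return min(bitrates_bps)   # fall back to the lowest representation

print(select_bitrate([500_000, 1_000_000, 3_000_000], 2.0, 10.0, 2_000_000))  # → 3000000
```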

  19. A Fuzzy Inference System Design for ICP Protocol Optimization in Cache Appliances Hierarchies

    Directory of Open Access Journals (Sweden)

    Oscar Linares

    2007-12-01

    Full Text Available A cache appliance is a network terminal which provides cache memory functions, such as serving object queries from users; such objects may be stored in one cache or in a cache hierarchy, avoiding the need to fetch them from the origin server. This cache appliance structure improves network performance and quality of service. These appliances use the ICP protocol (Internet Cache Protocol) to support interoperation between existing cache hierarchies and web servers, through implementation of a message format for communication between web caches. One cache sends an ICP query to its neighbors, and the neighbors send back ICP replies indicating "HIT" or "MISS". When one cache faces an excessive traffic situation, that is, a very high number of service queries from users, the ICP protocol may allocate the service to the cache which holds the desired object. Because of traffic conditions, a specific appliance may congest and requests may be refused, which can decrease the network's quality of service. So, a system designed to optimize cache allocation, considering factors such as traffic and priority, could be useful. This paper presents a fuzzy inference system design which uses inputs such as the number of queries over a time interval and the traffic tendency, and produces as output the decision of which web cache will provide the service. This design is proposed to optimize the allocation of caches in a hierarchy for network service performance, balancing requests among hierarchy members and improving service performance.
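
A minimal fuzzy-inference sketch along these lines: the two inputs (query count over an interval, traffic tendency) are fuzzified with triangular membership functions, and a tiny rule base is defuzzified into a crisp "allocation suitability" score for a candidate cache. The membership breakpoints and the rules are illustrative assumptions, not the paper's design.

```python
# Toy fuzzy inference: query load + traffic tendency -> allocation suitability in [0, 1].
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def allocation_score(queries, tendency):
    load = {"low": tri(queries, -1, 0, 50),
            "high": tri(queries, 30, 100, 200)}
    trend = {"falling": tri(tendency, -2, -1, 0.5),
             "rising": tri(tendency, -0.5, 1, 2)}
    # Rule base (invented): low load -> suitable; high load & rising -> unsuitable;
    # high load & falling -> partly suitable. Weighted-average defuzzification.
    rules = [(load["low"], 1.0),
             (min(load["high"], trend["rising"]), 0.0),
             (min(load["high"], trend["falling"]), 0.5)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5   # neutral score if no rule fires
```

A lightly loaded cache scores near 1.0 (good allocation target); a congested cache with rising traffic scores near 0.0.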

  20. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2017-01-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  1. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2016-01-01

    As many Tier 3 and some Tier 2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  2. Caching Servers for ATLAS

    Science.gov (United States)

    Gardner, R. W.; Hanushevsky, A.; Vukotic, I.; Yang, W.

    2017-10-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  3. Multi Service Proxy: Mobile Web Traffic Entitlement Point in 4G Core Network

    Directory of Open Access Journals (Sweden)

    Dalibor Uhlir

    2015-05-01

    Full Text Available The core part of state-of-the-art mobile networks is composed of several standard elements such as the GGSN (Gateway General Packet Radio Service Support Node), the SGSN (Serving GPRS Support Node), F5, or the MSP (Multi Service Proxy). Each node handles network traffic from a slightly different perspective and with various goals. In this article we focus only on the MSP, its key features, and especially the related security issues. The MSP handles all HTTP traffic in the mobile network and is therefore a suitable point for the implementation of different optimization functions, e.g. to reduce the volume of data generated by YouTube or similar HTTP-based services. This article introduces the basic features and functions of the MSP as well as the means of remote access and the security mechanisms of this key element in state-of-the-art mobile networks.

  4. A method cache for Patmos

    DEFF Research Database (Denmark)

    Degasperi, Philipp; Hepp, Stefan; Puffitsch, Wolfgang

    2014-01-01

    For real-time systems we need time-predictable processors. This paper presents a method cache as a time-predictable solution for instruction caching. The method cache caches whole methods (or functions) and simplifies worst-case execution time analysis. We have integrated the method cache...

  5. Pipelined Asynchronous Cache Design

    OpenAIRE

    Nyströem, Mika

    1997-01-01

    This thesis describes the development of pipelined asynchronous cache memories. The work is done in the context of the performance characteristics of memories and transistor logic of a late 1990's high-performance asynchronous microprocessor. We describe the general framework of asynchronous memory systems, caching, and those system characteristics that make caching of growing importance and keep it an interesting research topic. Finally, we present the main contribution of this work, whi...

  6. Cache Consistency by Design

    NARCIS (Netherlands)

    Brinksma, Hendrik

    In this paper we present a proof of the sequential consistency of the lazy caching protocol of Afek, Brown, and Merritt. The proof will follow a strategy of stepwise refinement, developing the distributed caching memory in five transformation steps from a specification of the serial memory, whilst

  7. Cache Creek mercury investigation

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Cache Creek watershed is located in the California Coastal range approximately 100 miles north of San Francisco in Lake, Colusa and Yolo Counties. Wildlife...

  8. C-Aware: A Cache Management Algorithm Considering Cache Media Access Characteristic in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhu Xudong

    2013-01-01

    Full Text Available Data congestion and network delay are important factors that affect the performance of cloud computing systems. Using the local disk of computing nodes as a cache can sometimes give better performance than accessing data through the network. This paper presents a storage cache placement algorithm, C-Aware, which traces the history access information of the cache and the data source, adaptively decides whether to cache data according to cache media characteristics and the current access environment, and achieves good performance under different workloads on the storage server. We implement this algorithm in both simulated and real environments. Our simulation results using OLTP and WebSearch traces demonstrate that C-Aware achieves better adaptability to changes in server workload. Our benchmark results in a real system show that, in the scenario where the size of the local cache is half of the data set, C-Aware gets nearly 80% improvement compared with traditional methods when the server is not busy, and still presents comparable performance when there is a high workload on the server side.
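
The adaptive decision can be sketched as follows: track recent latencies of local-disk and network accesses, and cache locally only while the disk path is winning. The exponential moving average and the cache-by-default fallback are assumptions for illustration, not C-Aware's actual policy.

```python
# Sketch of a history-based "should we cache on local disk?" decision.
class CacheDecider:
    def __init__(self, alpha=0.2):
        self.alpha = alpha       # EMA smoothing factor (assumed)
        self.disk_ms = None      # EMA of local cache access latency
        self.net_ms = None       # EMA of storage-server access latency

    def record(self, kind, latency_ms):
        prev = self.disk_ms if kind == "disk" else self.net_ms
        ema = latency_ms if prev is None else (1 - self.alpha) * prev + self.alpha * latency_ms
        if kind == "disk":
            self.disk_ms = ema
        else:
            self.net_ms = ema

    def should_cache(self):
        if self.disk_ms is None or self.net_ms is None:
            return True                      # no history yet: default to caching
        return self.disk_ms < self.net_ms    # cache only while disk beats network
```

When the storage server is idle and the network path becomes cheap, the decider stops caching, which mirrors the paper's observation that caching only pays off under server load.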

  9. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorith...

  10. A Framework for Consistent, Replicated Web Objects

    NARCIS (Netherlands)

    Kermarrec, A.-M.; Kuz, I.; Steen, M. van; Tanenbaum, A.S.

    1998-01-01

    Despite the extensive use of caching techniques, the Web is overloaded. While the caching techniques currently used help some, it would be better to use different caching and replication strategies for different Web pages, depending on their characteristics. We propose a framework in which such

  11. Cache-oblivious String Dictionaries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2006-01-01

    We present static cache-oblivious dictionary structures for strings which provide analogues of tries and suffix trees in the cache-oblivious model. Our construction takes as input either a set of strings to store, a single string for which all suffixes are to be stored, a trie, a compressed trie...

  12. Random Fill Cache Architecture (Preprint)

    Science.gov (United States)

    2014-10-01

    D. Gullasch, E. Bangerter, and S. Krenn, “Cache Games — Bringing Access-Based Cache Attacks on AES to Practice,” in Proc. IEEE Symposium on Security...Effectiveness,” in Cryptographers’ Track at the RSA Conference (CT-RSA’04), 2004, pp. 222–235. [27] K. Tiri, O. Aciicmez, M. Neve, and F. Andersen, “An

  13. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache. Practical examples will help you to get set up quickly and easily. This book is aimed at system administrators and web developers who need to scale websites without spending money on a large and costly infrastructure. It is assumed that you have some knowledge of the HTTP protocol, of how browsers and servers communicate with each other, and of basic Linux systems.

  14. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke

    2014-01-01

    The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works... conditions hold: (a) b is a power of 2; and (b) every block starts at a memory address divisible by b. Note that the two conditions hold on a real machine, although they are not stated in the cache-oblivious model. Interestingly, we also show that neither condition is dispensable: if either of them...

  15. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents it is customary to disable client caching, causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents are composed of higher-order templates that are plugged together to construct complete documents. We show how to exploit this feature to provide an automatic fine-grained caching of document templates, based on the service source code. A service transmits not the full HTML document but instead a compact Java...
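
The template-caching idea can be illustrated in miniature: the service ships a template only on the client's first visit; afterwards it transmits just the template name and the dynamic gap values, and the client re-plugs them locally. The template syntax, names, and single-process simulation of client and server here are invented for the example.

```python
# Miniature sketch of caching higher-order templates instead of full HTML documents.
from string import Template

TEMPLATES = {"greeting": Template("<html><body>Hello, $name! You have $n messages.</body></html>")}

client_cache = {}   # templates the client has already received

def server_response(tmpl_id, values):
    # In reality the server learns about the client's cache via the protocol;
    # here we inspect it directly for brevity.
    if tmpl_id in client_cache:
        return {"id": tmpl_id, "values": values}              # compact: dynamic data only
    return {"id": tmpl_id, "values": values,
            "template": TEMPLATES[tmpl_id].template}          # first visit: ship the template

def client_render(resp):
    if "template" in resp:
        client_cache[resp["id"]] = Template(resp["template"])
    return client_cache[resp["id"]].substitute(resp["values"])

print(client_render(server_response("greeting", {"name": "Ann", "n": 3})))
```

After the first response, every subsequent response for the same template carries only the plug values, which is the source of the caching win.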

  16. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    ... completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more complicated and less precise. Time-predictable computer architectures provide solutions to this problem. As accesses to the data in caches are one source of timing unpredictability, devising methods for improving the time-predictability of caches is important. Stack data, with statically analyzable...

  17. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  18. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most visited locations and proactively push cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
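
The Markov-chain ingredient of such cache strategies can be sketched simply: learn first-order transition counts between locations from a movement trace, then push to the proxy the content for the most likely next location. The trace and location names below are invented, and a real scheme would combine this with the deployment and k-anonymity machinery the abstract describes.

```python
# First-order Markov model over visited locations, used to pick what to prefetch.
from collections import defaultdict

class LocationMarkov:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, trace):
        for here, nxt in zip(trace, trace[1:]):
            self.counts[here][nxt] += 1

    def predict_next(self, here):
        successors = self.counts.get(here)
        if not successors:
            return None
        return max(successors, key=successors.get)   # most frequent successor

m = LocationMarkov()
m.train(["home", "cafe", "office", "cafe", "office", "home", "cafe", "office"])
print(m.predict_next("cafe"))   # → office
```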

  19. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  20. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis. ... In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  1. Cache-oblivious mesh layouts

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Sung-Eui [Univ. of North Carolina, Chapel Hill, NC (United States); Lindstrom, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manocha, Dinesh [Univ. of North Carolina, Chapel Hill, NC (United States)

    2005-07-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.

  2. dCache: Big Data storage for HEP communities and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [DESY; Behrmann, G. [Unlisted, DK; Bernardt, C. [DESY; Fuhrmann, P. [DESY; Litvintsev, D. [Fermilab; Mkrtchyan, T. [DESY; Petersen, A. [DESY; Rossi, A. [Fermilab; Schwank, K. [DESY

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of continually evolving storage technologies, with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  3. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.

  4. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    Caches are essential to bridge the gap between the high latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution times... different data areas, such as stack, global data, and heap allocated data, share the same cache. Some addresses are known statically, other addresses are only known at runtime. With a standard cache organization all those different data areas must be considered by worst-case execution time analysis... associative cache for the heap area. We designed and implemented a static analysis for this cache, and integrated it into a worst-case execution time analysis tool.

  5. Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2008-01-01

    In this paper, we describe an abstract model of cache timing attacks that can be used for designing ciphers. We then analyse HC-256 under this model, demonstrating a cache timing attack under certain strong assumptions. From the observations made in our analysis, we derive a number of design principles for hardening ciphers against cache timing attacks.

  6. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise...

  7. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation examines how earth caches influence the learning process in geoscience within non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical location, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With GPS-enabled smartphones you can go directly into nature, retrieve information on your smartphone, and learn something about the environment. Earth caches, which are organized and supervised geocaches with special information about highlights of physical geography, are a very good opportunity for this: interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, the focus was on Bavaria and a certain feature of earth caches. Finally, the authors show the limits and potentials of using earth caches and give some remarks for the future.

  8. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  9. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permuting, not even in the presence of a tall cache assumption. Our results for sorting show the existence of an inherent trade-off in the cache-oblivious model between the strength of the tall cache assumption and the overhead for the case M » B, and show that Funnelsort and recursive binary mergesort are optimal algorithms in the sense that they attain this trade-off.

  10. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays in the L2 cache. Experiments show that under high concurrency, our optimizations improve the throughput of TUX by up to 40% and the number of requests serviced at the time of failure by 21%.

  11. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows...

  12. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard for comparison-based sorting, as well as with recent cache-aware proposals. The main result is a carefully implemented cache-oblivious sorting algorithm, which our experiments show can be faster than the best Quicksort implementation we are able to find, already for input sizes well within the limits of RAM. It is also at least as fast as the recent cache-aware implementations included in the test. On disk the difference is even more pronounced regarding Quicksort and the cache-aware algorithms, whereas the algorithm is slower than a careful implementation of multiway Mergesort such as TPIE.

  13. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analysis of caches which are hard...

  14. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  15. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  16. A Survey of Cache Bypassing Techniques

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-04-01

    Full Text Available With increasing core-count, the cache demand of modern processors has also increased. However, due to strict area/power budgets and presence of poor data-locality workloads, blindly scaling cache capacity is both infeasible and ineffective. Cache bypassing is a promising technique to increase effective cache capacity without incurring power/area costs of a larger sized cache. However, injudicious use of cache bypassing can lead to bandwidth congestion and increased miss-rate and hence, intelligent techniques are required to harness its full potential. This paper presents a survey of cache bypassing techniques for CPUs, GPUs and CPU-GPU heterogeneous systems, and for caches designed with SRAM, non-volatile memory (NVM), and die-stacked DRAM. By classifying the techniques based on key parameters, it underscores their differences and similarities. We hope that this paper will provide insights into cache bypassing techniques and associated tradeoffs and will be useful for computer architects, system designers and other researchers.
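
    To make the bypassing trade-off concrete, here is a minimal, purely illustrative sketch (not a technique from the survey): an LRU cache that declines to insert a block until it has shown reuse, so streaming (use-once) data cannot evict hot lines. The reuse threshold is an assumption of this sketch.

```python
# Illustrative cache-bypassing sketch: addresses with no observed reuse
# are bypassed instead of being inserted into a small LRU cache.
from collections import OrderedDict, defaultdict

class BypassingCache:
    def __init__(self, capacity, bypass_threshold=1):
        self.capacity = capacity
        self.bypass_threshold = bypass_threshold
        self.cache = OrderedDict()        # LRU order: oldest first
        self.reuse = defaultdict(int)     # per-address access counter
        self.hits = self.misses = 0

    def access(self, addr):
        self.reuse[addr] += 1
        if addr in self.cache:
            self.cache.move_to_end(addr)  # refresh LRU position
            self.hits += 1
            return True
        self.misses += 1
        # Bypass addresses that have not shown reuse yet, protecting
        # the cache from streaming (use-once) data.
        if self.reuse[addr] > self.bypass_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict LRU block
            self.cache[addr] = True
        return False
```

    With a threshold of 1, an address is cached only from its second touch onward, which approximates bypassing of streaming accesses.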

  17. La honte qui cache la honte qui cache...

    OpenAIRE

    Dussy, Dorothée

    2004-01-01

    Summary: http://www.sigila.msh-paris.fr/la_honte.htm; International audience; This text explores the mechanisms by which Louise, a former nun and retired medical secretary, strung together throughout her life reasons for shame, invariably relating to a violation of her intimacy. Amnesiac, Louise hid one shame behind another, with no memory of her original secret. Until her memory came back to her one morning, on the way to her work, ...

  18. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related ... data cache. As stack allocated data has a high locality, even a small stack cache gives a high hit rate. A stack cache added to a write-through data cache considerably improves the performance, while a stack cache compared to the harder to analyze write-back cache has about the same average case...

  19. Caching at a distance: a cache protection strategy in Eurasian jays.

    Science.gov (United States)

    Legg, Edward W; Ostojić, Ljerka; Clayton, Nicola S

    2016-07-01

    A fundamental question about the complexity of corvid social cognition is whether behaviours exhibited when caching in front of potential pilferers represent specific attempts to prevent cache loss (cache protection hypothesis) or whether they are by-products of other behaviours (by-product hypothesis). Here, we demonstrate that Eurasian jays preferentially cache at a distance when observed by conspecifics. This preference for a 'far' location could be either a by-product of a general preference for caching at that specific location regardless of the risk of cache loss or a by-product of a general preference to be far away from conspecifics due to low intra-species tolerance. Critically, we found that neither by-product account explains the jays' behaviour: the preference for the 'far' location was not shown when caching in private or when eating in front of a conspecific. In line with the cache protection hypothesis we found that jays preferred the distant location only when caching in front of a conspecific. Thus, it seems likely that for Eurasian jays, caching at a distance from an observer is a specific cache protection strategy.

  20. A Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    In this paper, we describe a cache-timing attack against the stream cipher HC-256, which is the strong version of eStream winner HC-128. The attack is based on an abstract model of cache timing attacks that can also be used for designing stream ciphers. From the observations made in our analysis,...

  1. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2010-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...

  2. Optoelectronic-cache memory system architecture.

    Science.gov (United States)

    Chiarulli, D M; Levitan, S P

    1996-05-10

    We present an investigation of the architecture of an optoelectronic cache that can integrate terabit optical memories with the electronic caches associated with high-performance uniprocessors and multiprocessors. The use of optoelectronic-cache memories enables these terabit technologies to provide transparently low-latency secondary memory with frame sizes comparable with disk pages but with latencies that approach those of electronic secondary-cache memories. This enables the implementation of terabit memories with effective access times comparable with the cycle times of current microprocessors. The cache design is based on the use of a smart-pixel array and combines parallel free-space optical input-output to-and-from optical memory with conventional electronic communication to the processor caches. This cache and the optical memory system to which it will interface provide a large random-access memory space that has a lower overall latency than that of magnetic disks and disk arrays. In addition, as a consequence of the high-bandwidth parallel input-output capabilities of optical memories, fault service times for the optoelectronic cache are substantially less than those currently achievable with any rotational media.

  3. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...

  4. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both

  5. Query Load Balancing by Caching Search Results in Peer-to-Peer Information Retrieval Networks

    NARCIS (Netherlands)

    Tigelaar, A.S.; Hiemstra, Djoerd

    2011-01-01

    For peer-to-peer web search engines it is important to keep the delay between receiving a query and providing search results within an acceptable range for the end user. How to achieve this remains an open challenge. One way to reduce delays is by caching search results for queries and allowing
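
    As a minimal illustration of result caching at a peer (the query normalization rule and LRU policy are assumptions of this sketch, not the paper's mechanism):

```python
# Illustrative per-peer search-result cache: results are stored per
# normalized query string with LRU eviction, so repeated queries are
# answered locally instead of being re-routed through the network.
from collections import OrderedDict

class QueryResultCache:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = OrderedDict()

    @staticmethod
    def normalize(query):
        # collapse whitespace and case so trivially different queries hit
        return " ".join(query.lower().split())

    def get(self, query):
        key = self.normalize(query)
        if key in self.store:
            self.store.move_to_end(key)   # mark as recently used
            return self.store[key]
        return None

    def put(self, query, results):
        key = self.normalize(query)
        self.store[key] = results
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```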

  6. Search Result Caching in Peer-to-Peer Information Retrieval Networks

    NARCIS (Netherlands)

    Tigelaar, A.S.; Hiemstra, Djoerd; Trieschnigg, Rudolf Berend

    2011-01-01

    For peer-to-peer web search engines it is important to quickly process queries and return search results. How to keep the perceived latency low is an open challenge. In this paper we explore the solution potential of search result caching in large-scale peer-to-peer information retrieval networks by

  7. The dCache scientific storage cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  8. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all ... increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and the cache-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache...

  9. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many...
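
    The core idea behind such merge-based, memory-conscious sorters, combining many sorted runs in a single pass so each element moves through the memory hierarchy once rather than log2(k) times, can be sketched with a heap-based k-way merge (an illustrative stand-in; R-MERGE's actual register-level design is not reproduced here):

```python
# Illustrative single-pass k-way merge: a small in-core heap selects the
# next output element among k sorted runs, so each element is read and
# written once instead of once per binary-merge level.
import heapq

def kway_merge(runs):
    out = []
    # seed the heap with the head of each non-empty run
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out
```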

  10. Application and Network-Cognizant Proxies - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Antonio Ortega; Daniel C. Lee

    2003-03-24

    Current networks show increasing heterogeneity both in terms of their bandwidths/delays and the applications they are required to support. This is a trend that is likely to intensify in the future, as real-time services, such as video, become more widely available and networking access over wireless links becomes more widespread. For this reason they propose that application-specific proxies, intermediate network nodes that broker the interactions between server and client, will become an increasingly important network element. These proxies will allow adaptation to changes in network characteristics without requiring a direct intervention of either server or client. Moreover, it will be possible to locate these proxies strategically at those points where a mismatch occurs between subdomains (for example, a proxy could be placed so as to act as a bridge between a reliable network domain and an unreliable one). This design philosophy favors scalability in the sense that the basic network infrastructure can remain unchanged while new functionality can be added to proxies, as required by the applications. While proxies can perform numerous generic functions, such as caching or security, they concentrate here on media-specific, and in particular video-specific, tasks. The goal of this project was to demonstrate that application- and network-specific knowledge at a proxy can improve overall performance, especially under changing network conditions. They summarize below the work performed to address these issues. Particular effort was spent in studying caching techniques and on video classification to enable DiffServ delivery. Other work included analysis of traffic characteristics, optimized media scheduling, coding techniques based on multiple description coding, and use of proxies to reduce computation costs. This work covered much of what was originally proposed, but with a necessarily reduced scope.

  11. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries called WATCHMAN, which is particularly well suited for a data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
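
    The profit metric can be sketched as follows (field names and the exact admission rule are assumptions of this sketch; only the profit formula, average reference rate × execution cost / size, comes from the abstract):

```python
# Illustrative WATCHMAN-style profit-based caching of whole query
# result sets: evict lowest-profit sets first, and refuse admission
# if the incoming set's profit does not beat its victims'.
class RetrievedSet:
    def __init__(self, name, size, exec_cost):
        self.name, self.size, self.exec_cost = name, size, exec_cost
        self.refs, self.first_ref = 0, None   # reference statistics

def profit(s, now):
    # average reference rate * query execution cost, normalized by size
    age = max(now - (s.first_ref if s.first_ref is not None else now), 1)
    rate = s.refs / age
    return rate * s.exec_cost / s.size

def choose_victims(cached, incoming, capacity, used, now):
    """Pick whole lowest-profit sets to evict until `incoming` fits;
    return None if it cannot fit or fails the admission test."""
    victims, freed = [], 0
    for v in sorted(cached, key=lambda s: profit(s, now)):
        if used + incoming.size - freed <= capacity:
            break
        victims.append(v)
        freed += v.size
    if used + incoming.size - freed > capacity:
        return None          # does not fit even after evicting all
    if any(profit(incoming, now) <= profit(v, now) for v in victims):
        return None          # admission test fails
    return victims
```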

  12. Corvid caching: Insights from a cognitive model.

    Science.gov (United States)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2011-07-01

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our model's behavior to that of real birds in previously published experiments. Our model successfully replicated the outcomes of two experiments on recovery behavior and two experiments on cache site choice. Our "virtual birds" reproduced declines in recovery accuracy across sessions, revisits to previously emptied cache sites, a lack of correlation between caching and recovery order, and a preference for caching in safe locations. The model also produced two new explanations. First, that Clark's nutcrackers may become less accurate as recovery progresses not because of differential memory for different cache sites, as was once assumed, but because of chance effects. And second, that Western scrub jays may choose their cache sites not on the basis of negative recovery experiences only, as was previously thought, but on the basis of positive recovery experiences instead. Alternatively, both "punishment" and "reward" may be playing a role. We conclude with a set of new insights, a testable prediction, and directions for future work. PsycINFO Database Record (c) 2011 APA, all rights reserved
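
    The model's core assumption, that memory strength grows with frequency of use and decays with recency, can be illustrated with an ACT-R-style base-level activation (the decay parameter and functional form here are assumptions of this sketch, not necessarily the paper's equations):

```python
# Illustrative memory-strength function: strength is the log of summed
# power-law decays over all past uses, so both frequent and recent use
# raise the strength of a cache-site memory.
import math

def memory_strength(use_times, now, d=0.5):
    """Strength of a memory given the times it was used (d = decay)."""
    return math.log(sum((now - t) ** (-d) for t in use_times if t < now))
```

    A site cached and revisited several times ends up with a higher strength than one used once long ago, which is the mechanism the model uses to reproduce declining recovery accuracy.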

  13. A Simple Cache Emulator for Evaluating Cache Behavior for SMP Systems

    Directory of Open Access Journals (Sweden)

    I. Šimeček

    2006-01-01

    Full Text Available Every modern CPU uses a complex memory hierarchy, which consists of multiple cache memory levels. It is very difficult to predict the behavior of this hierarchy for a given program (for details see [1, 2]). The situation is even worse for systems with a shared memory. The most important example is the case of SMP (symmetric multiprocessing) systems [3]. The importance of these systems is growing due to the multi-core feature of the newest CPUs. The Cache Emulator (CE) can simulate the behavior of caches inside an SMP system and compute the number of cache misses during a computation. All measurements are done in the “off-line” mode on a single CPU. The CE uses its own emulated cache memory for an exact simulation. This means that no other CPU activity influences the behavior of the CE. This work extends the Cache Analyzer introduced in [4].
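
    The principle of "off-line" emulation, replaying a memory trace through a modeled cache and counting misses, can be shown with a toy single-level, direct-mapped model (the CE itself models multi-level SMP caches; the geometry below is an arbitrary illustration):

```python
# Toy off-line cache emulator: replay an address trace through a
# direct-mapped cache model and count the misses.
def count_misses(trace, num_lines=4, line_size=16):
    tags = [None] * num_lines               # emulated cache lines
    misses = 0
    for addr in trace:
        block = addr // line_size           # which memory block
        index = block % num_lines           # which cache line it maps to
        tag = block // num_lines            # identity within that line
        if tags[index] != tag:              # miss: fill the line
            tags[index] = tag
            misses += 1
    return misses
```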

  14. Implementasi Proxy Server Dengan Linux Clear OS 5.2

    OpenAIRE

    Setiadi, Aprian

    2013-01-01

    This final project discusses how to build a proxy server in a LAN. The LAN is built using a star topology, with the server computer acting as both gateway server and proxy server, so no additional router is needed to serve as the gateway. The proxy server is built in transparent mode, so the client computers do not need to configure the proxy server port in their web browsers. The result...

  15. Greatly improved cache update times for conditions data with Frontier/Squid

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave; Lueking, Lee; /Fermilab

    2009-05-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
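
    The server side of the If-Modified-Since scheme can be sketched as follows (function names and the timestamp bookkeeping are illustrative; the real Frontier servlet differs in detail):

```python
# Illustrative server-side If-Modified-Since check: compare the client's
# timestamp against the tracked modification time of the underlying data
# and answer 304 (cached copy still valid) or 200 with a Last-Modified.
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(table_mtime, if_modified_since=None):
    """Return (status, headers) for a conditional request.
    table_mtime: datetime (UTC) of the last change to the data."""
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if table_mtime <= client_time:
            return 304, {}                     # client's cache is current
    return 200, {"Last-Modified": format_datetime(table_mtime, usegmt=True)}
```

    A Squid in the middle can forward the conditional request upstream and serve its cached body on a 304, which is what keeps propagation cheap.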

  16. Munchausen syndrome by proxy

    Science.gov (United States)

    Munchausen syndrome by proxy is a mental illness and ...

  17. BidCache: Auction-based in-network caching in ICN

    NARCIS (Netherlands)

    Gill, A.S.; D'Acunto, L.; Trichias, K.; Brandenburg, R. van

    2016-01-01

    In Information Centric Networks, each node on the Data delivery path has the ability to cache the data items that flow through it. However, each node takes the decision on whether to cache a particular data item or not independently from the other nodes on the Data delivery path. This approach might

  18. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  19. Private Web Browsing

    National Research Council Canada - National Science Library

    Syverson, Paul F; Reed, Michael G; Goldschlag, David M

    1997-01-01

    .... These are both kept confidential from network elements as well as external observers. Private Web browsing is achieved by unmodified Web browsers using anonymous connections by means of HTTP proxies...

  20. Memory Map: A Multiprocessor Cache Simulator

    Directory of Open Access Journals (Sweden)

    Shaily Mittal

    2012-01-01

    Full Text Available Nowadays, manufacturers mainly focus on Multiprocessor System-on-Chip (MPSoC) architectures to provide increased concurrency, instead of increased clock speed, for embedded systems. However, managing concurrency is a tough task; one major issue is synchronizing concurrent accesses to shared memory. An important characteristic of any system design process is memory configuration and data flow management. Although it is very important to select a correct memory configuration, it may be equally imperative to choreograph the data flow between various levels of memory in an optimal manner. Memory Map is a multiprocessor simulator to choreograph data flow in the individual caches of multiple processors and shared memory systems. This simulator allows the user to specify cache reconfigurations and the number of processors within the application program, and it evaluates the cache miss and hit rate for each configuration phase, taking reconfiguration costs into account. The code is open source and written in Java.
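The per-phase hit/miss bookkeeping such a simulator performs can be illustrated with a minimal direct-mapped cache model (a sketch in Python for brevity, with hypothetical names; the simulator described above is written in Java):

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model: counts hits and misses per access."""

    def __init__(self, num_lines, line_size):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines   # one stored tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // self.line_size     # which memory block is touched
        index = block % self.num_lines        # the line this block maps to
        tag = block // self.num_lines         # disambiguates blocks sharing a line
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag                # fill the line on a miss
        self.misses += 1
        return False

cache = DirectMappedCache(num_lines=4, line_size=16)
for addr in [0, 4, 64, 0, 128, 0]:            # 0, 64 and 128 conflict on line 0
    cache.access(addr)
print(cache.hits, cache.misses)               # → 1 5
```

The conflicting addresses show why evaluating different configurations matters: with more lines (or associativity), the same trace would miss far less often.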

  1. Best practice for caching of single-path code

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Cilku, Bekim; Prokesch, Daniel

    2017-01-01

    Single-path code has some unique properties that make it interesting to explore different caching and prefetching alternatives for the stream of instructions. In this paper, we explore different cache organizations and how they perform with single-path code.

  2. DSP code optimization based on cache

    Science.gov (United States)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of improper use and incomplete understanding of the cache-based memory system. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor achieves its best performance when these code optimization methods are applied. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  3. Victim Migration: Dynamically Adapting Between Private and Shared CMP Caches

    National Research Council Canada - National Science Library

    Zhang, Michael; Asanovic, Krste

    2005-01-01

    .... Victim replication was previously introduced as a way of reducing the average hit latency of a shared cache by allowing a processor to make a replica of a primary cache victim in its local slice of the global L2 cache...

  4. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Full Text Available Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.
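The admission idea is simple to state in code: a coin flip with probability p decides whether a written block enters the cache, so a block written n times has entered with probability 1 - (1 - p)^n. The sketch below illustrates that mechanism only; the function, parameters, and eviction rule are illustrative, not ProCache's actual design:

```python
import random

def maybe_admit(cache, key, value, p=0.1, capacity=64, rng=random):
    """Probability-based cache admission: a written block enters the cache
    only if a coin flip with probability p succeeds.

    After n writes a block has entered with probability 1 - (1 - p)**n, so
    frequently written hot data is eventually cached while cold data is
    mostly filtered out.
    """
    if key in cache:                      # already cached: update in place
        cache[key] = value
        return True
    if rng.random() < p:                  # probabilistic admission test
        if len(cache) >= capacity:
            cache.pop(next(iter(cache)))  # evict the oldest-inserted entry
        cache[key] = value
        return True
    return False                          # the write bypasses the cache
```

With p = 0.1, a block written 50 times has entered with probability 1 - 0.9^50 ≈ 0.995, while a block written once enters with probability only 0.1.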

  5. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Science.gov (United States)

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.

  6. Cache-mesh, a Dynamics Data Structure for Performance Optimization

    DEFF Research Database (Denmark)

    Nguyen, Tuan T.; Dahl, Vedrana Andersen; Bærentzen, J. Andreas

    2017-01-01

    This paper proposes the cache-mesh, a dynamic mesh data structure in 3D that allows modifications of stored topological relations effortlessly. The cache-mesh can adapt to arbitrary problems and provide fast retrieval to the most-referred-to topological relations. This adaptation requires trivial...... of the cache-mesh, and the extra work for caching is also trivial. Though it appears that it takes effort for initial implementation, building the cache-mesh is comparable to a traditional mesh in terms of implementation....

  7. Meat and Dairy Goats in Cache County

    OpenAIRE

    Extension, USU

    2000-01-01

    Cache County, like other counties in the Western United States, is experiencing a major transition in land use. Though we still have a host of relatively large-acreage, well-managed crop and livestock farms, the number of smaller acreages is increasing.

  8. Corvid Caching : Insights From a Cognitive Model

    NARCIS (Netherlands)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K.

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our

  9. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e · log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes are limited to be powers of 2. A modified version of the van Emde Boas layout is proposed, whose expected block transfers between any two levels of the memory hierarchy are arbitrarily close to [lg e + O(lg lg B / lg B)] · log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over..., multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  10. Cache-conscious radix-decluster projections

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); N.J. Nes (Niels); M.L. Kersten (Martin)

    2004-01-01

    As CPUs become more powerful with Moore's law and memory latencies stay constant, the impact of the memory access performance bottleneck continues to grow on relational operators like join, which can exhibit random access on a memory region larger than the hardware caches. While

  11. Cache-Conscious Radix-Decluster Projections

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); N.J. Nes (Niels); M.L. Kersten (Martin)

    2004-01-01

    As CPUs become more powerful with Moore's law and memory latencies stay constant, the impact of the memory access performance bottleneck continues to grow on relational operators like join, which can exhibit random access on a memory region larger than the hardware caches. While

  12. Multi-level Hybrid Cache: Impact and Feasibility

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhe [ORNL; Kim, Youngjae [ORNL; Ma, Xiaosong [ORNL; Shipman, Galen M [ORNL; Zhou, Yuanyuan [University of California, San Diego

    2012-02-01

    Storage class memories, including flash, have been attracting much attention as promising candidates for today's enterprise storage systems. In particular, since the cost and performance characteristics of flash are in between those of DRAM and hard disks, many studies have considered it as a secondary caching layer underneath the main memory cache. However, there has been a lack of study of the correlation and interdependency between DRAM and flash caching. This paper views this problem as a special form of multi-level caching and tries to understand the benefits of this multi-level hybrid cache hierarchy. We reveal that significant costs can be saved by using flash to reduce the size of the DRAM cache while maintaining the same performance. We also discuss design challenges of using flash in the caching hierarchy and present potential solutions.
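The hierarchy studied, a small DRAM cache in front of a larger flash cache, can be sketched as a two-level lookup in which DRAM evictions are demoted into the flash tier and flash hits are promoted back (a toy model with LRU at both levels; the class and policy choices are illustrative, not the paper's design):

```python
from collections import OrderedDict

class TwoLevelCache:
    """DRAM (L1) cache in front of a larger flash (L2) cache.

    DRAM evictions are demoted into flash; flash hits are promoted back
    into DRAM. Both levels use LRU replacement.
    """

    def __init__(self, dram_size, flash_size):
        self.dram = OrderedDict()
        self.flash = OrderedDict()
        self.dram_size, self.flash_size = dram_size, flash_size

    def _insert(self, level, size, key, value):
        if key in level:
            level.move_to_end(key)
        level[key] = value
        if len(level) > size:
            return level.popitem(last=False)   # the LRU victim of this level
        return None

    def get(self, key):
        if key in self.dram:                   # fast DRAM hit
            self.dram.move_to_end(key)
            return self.dram[key], "dram"
        if key in self.flash:                  # slower flash hit: promote
            value = self.flash.pop(key)
            self.put(key, value)
            return value, "flash"
        return None, "miss"

    def put(self, key, value):
        victim = self._insert(self.dram, self.dram_size, key, value)
        if victim is not None:                 # demote the DRAM victim
            self._insert(self.flash, self.flash_size, *victim)

c = TwoLevelCache(dram_size=1, flash_size=2)
c.put("a", 1)
c.put("b", 2)          # "a" is demoted from DRAM into flash
print(c.get("a"))      # → (1, 'flash'); "a" is promoted back, demoting "b"
```

The point of the model is the trade-off the paper quantifies: entries squeezed out of a smaller DRAM tier still hit in flash rather than going to disk.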

  13. Jemen - the Proxy War

    Directory of Open Access Journals (Sweden)

    Magdalena El Ghamari

    2015-12-01

    Full Text Available The military operation in Yemen is a significant departure from Saudi Arabia's foreign policy tradition and customs. Riyadh has always relied on three strategies to pursue its interests abroad: wealth, establishing a global network of Muslim education, and diplomacy and mediation. The term "proxy war" has experienced a new popularity in stories on the Middle East. A proxy war is one in which two opposing countries avoid direct war and instead support combatants that serve their interests; on some occasions, one country is a direct combatant while the other supports its enemy. Various news sources began using the term to describe the conflict in Yemen immediately, as if on cue, after Saudi Arabia launched its bombing campaign against Houthi targets in Yemen on 25 March 2015. This is why the author tries to answer the following questions: Has the Yemen conflict devolved into a proxy war, and who is fighting whom in Yemen's proxy war? The research area covers the problem of proxy war in the Middle East. Certainly, any real account of proxy war must begin with the fact that the United States and its NATO allies opened the floodgates for regional proxy wars with two major wars for regime change: in Iraq and Libya. Those two destabilising wars provided opportunities and motives for Sunni states across the Middle East to pursue their own sectarian and political power objectives through "proxy war".

  14. Greatly improved cache update times for conditions data with Frontier/Squid

    CERN Document Server

    Dykstra, Dave

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally trac...
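The If-Modified-Since mechanism the record describes is simple to state in code: the origin stamps each payload with Last-Modified, and a revalidating cache gets 304 Not Modified back when its copy is still current. Below is a minimal sketch of the HTTP semantics only (the function and names are illustrative, not Frontier's actual code):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(payload, last_modified, if_modified_since=None):
    """Origin-side conditional GET: answer 304 when the client's copy is current."""
    if if_modified_since is not None:
        client_copy_time = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_copy_time:
            return 304, None                  # cache keeps serving its copy
    headers = {"Last-Modified": format_datetime(last_modified, usegmt=True)}
    return 200, (headers, payload)

modified = datetime(2009, 3, 1, 12, 0, tzinfo=timezone.utc)
status, body = respond(b"conditions-data", modified)
print(status)                                 # → 200 (full response with Last-Modified)
status, _ = respond(b"conditions-data", modified,
                    if_modified_since=body[0]["Last-Modified"])
print(status)                                 # → 304 (copy unchanged)
```

Revalidations that end in 304 carry no payload, which is why frequent re-checks stay cheap for the server and keep propagation delay bounded by the revalidation interval.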

  15. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    dCache is a high-performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an RFC 1831- and RFC 2203-compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache's NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache's NFSv4.1 implementation, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  16. Scatter hoarding by the Central American agouti : a test of optimal cache spacing theory

    NARCIS (Netherlands)

    Galvez, Dumas; Kranstauber, Bart; Kays, Roland W.; Jansen, Patrick A.

    2009-01-01

    Optimal cache spacing theory predicts that scatter-hoarding animals store food at a density that balances the gains of reducing cache robbery against the costs of spacing out caches further. We tested the key prediction that cache robbery and cache spacing increase with the economic value of food:

  17. Scatter hoarding by the Central American agouti: a test of optimal cache spacing theory

    NARCIS (Netherlands)

    Gálvez, D.; Kranstauber, B.; Kays, R.W.; Jansen, P.A.

    2009-01-01

    Optimal cache spacing theory predicts that scatter-hoarding animals store food at a density that balances the gains of reducing cache robbery against the costs of spacing out caches further. We tested the key prediction that cache robbery and cache spacing increase with the economic value of food:

  18. Cache write generate for parallel image processing on shared memory architectures.

    Science.gov (United States)

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.

  19. Cooperative Caching Framework for Mobile Cloud Computing

    OpenAIRE

    Joy, Preetha Theresa; Jacob, K. Poulose

    2013-01-01

    Due to advances in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds. This section lists some of the major issues in mobile cloud computing. One of the key issues is the end-to-end delay in servicing a request. Data cach...

  20. Virentrack: A heuristic for reducing cache contention

    OpenAIRE

    Kumar, Viren

    2009-01-01

    Multicore processors are the dominant paradigm in mainstream computing for the present and foreseeable future. Current operating system schedulers on multicore systems co-schedule applications on cores at random. This often exacerbates issues such as cache contention, leading to a performance decrease. Optimally scheduling applications to take advantage of multicore characteristics remains a difficult and open problem. In this thesis, I advocate a method of optimized scheduling on multicore sys...

  1. Jemen - the Proxy War

    OpenAIRE

    Magdalena El Ghamari

    2015-01-01

    The military operation in Yemen is a significant departure from Saudi Arabia's foreign policy tradition and customs. Riyadh has always relied on three strategies to pursue its interests abroad: wealth, establishing a global network of Muslim education, and diplomacy and mediation. The term "proxy war" has experienced a new popularity in stories on the Middle East. A proxy war is one in which two opposing countries avoid direct war and instead support combatants that serve their interests. In some occas...

  2. A Multiresolution Image Cache for Volume Rendering

    Energy Technology Data Exchange (ETDEWEB)

    LaMar, E; Pascucci, V

    2003-02-27

    The authors discuss the techniques and implementation details of a shared-memory image caching system for volume visualization and iso-surface rendering. One of the goals of the system is to decouple image generation from image display. This is done by maintaining a set of impostors for interactive display while the production of the impostor imagery is performed by a set of parallel, background processes. The system introduces a caching basis that is free of the gap/overlap artifacts of earlier caching techniques. Instead of placing impostors at fixed, pre-defined positions in world space, the technique adaptively places impostors relative to the camera viewpoint. The positions translate with the camera but stay aligned to the data; i.e., the positions translate, but do not rotate, with the camera. The viewing transformation is factored into a translation transformation and a rotation transformation. The impostor imagery is generated using just the translation transformation, and visible impostors are displayed using just the rotation transformation. Displayed image quality is improved by increasing the number of impostors, and the frequency at which impostors are re-rendered is improved by decreasing the number of impostors.
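The transform factoring described, generating impostor imagery under only the camera translation and displaying it under only the rotation, amounts to splitting the view matrix as V = R · T. A dependency-free sketch under that reading (function and variable names are illustrative, not from the paper):

```python
def matmul(A, B):
    """4x4 matrix product over plain nested lists (keeps the sketch dependency-free)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def factor_view(rotation3x3, camera_pos):
    """Split the view transform V = R @ T into rotation-only and translation-only parts.

    Impostor imagery is generated using just T (it translates with the camera
    but stays axis-aligned to the data); visible impostors are displayed
    using just R (the camera orientation).
    """
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for i in range(3):
        T[i][3] = -camera_pos[i]              # moves the camera position to the origin
    R = [[float(i == j) for j in range(4)] for i in range(4)]
    for i in range(3):
        for j in range(3):
            R[i][j] = rotation3x3[i][j]       # orientation only, no translation
    return R, T

# With the identity orientation, the composed view matrix reduces to pure translation.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
R, T = factor_view(identity, camera_pos=[1.0, 2.0, 3.0])
V = matmul(R, T)
```

Because T changes only when the camera moves, impostors rendered under T can be reused across pure rotations of the camera, which is what makes the caching basis artifact-free.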

  3. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in Mobile Ad hoc Networks (MANETs). Maintaining the cache is a challenging issue in large MANETs due to mobility, cache size, and power. Previous research on caching has primarily dealt with the LRU, LFU, and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the best replaceable data, based on neighbours' interest and the fitness value of cached data, in order to store newly arrived data. This work also elects an ideal cluster head (CH) using the meta-heuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared to the existing approach as the number of nodes and the speed increase.
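Of the baseline replacement policies the record mentions (LRU, LFU, LRU-MIN), LRU is the simplest to state: evict the entry that has gone unused longest. A minimal sketch for reference (illustrative only; it is not the paper's Memetic Algorithm):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: evict the longest-unused entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()     # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)    # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
        self.data[key] = value

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")                 # "a" becomes most recently used
c.put("c", 3)              # evicts "b", the least recently used
print(sorted(c.data))      # → ['a', 'c']
```

LFU differs only in the eviction criterion (lowest use count instead of oldest use), which is why such policies are easy to swap in simulations like the one above.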

  4. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can be analyzed statically. We present algorithms that derive worst-case bounds on the latency-inducing operations of the stack cache. Their results can be used by a static WCET tool. By breaking the analysis down into subproblems that solve intra-procedural data-flow analysis and path searches on the call-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  5. 17 CFR 240.14a-16 - Internet availability of proxy materials.

    Science.gov (United States)

    2010-04-01

    ... holder to access and review the proxy materials before voting; (3) The Internet Web site address where... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Internet availability of proxy... Under the Securities Exchange Act of 1934 Regulation 14a: Solicitation of Proxies § 240.14a-16 Internet...

  6. Computer Cache. Diseases: Locating Helpful Information on the Web

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    2005-01-01

    According to the American Heritage Dictionary, disease is "a pathological condition of a part, organ, or system of an organism resulting from various causes, such as infection, genetic defect, or environmental stress, and characterized by an identifiable group of signs or symptoms" (2000). Students in upper elementary school and in middle school…

  7. The Influence of Cache Organization on E-learning Environments

    Directory of Open Access Journals (Sweden)

    Ayman Al-Nsour

    2009-06-01

    Full Text Available In this study, the performance of an e-learning environment is analyzed and evaluated in terms of average network traffic and upload/download rates under various cache memory organizations. In particular, we study the influence of three cache organizations, namely fully associative, direct, and set associative caches. As a result of this work, we recommend the set associative cache memory organization with the LFU replacement policy, as this led to optimal performance in e-learning environments with the highest hit ratio and upload/download rates.
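The recommended organization can be sketched as a small simulator: each address maps to one set, and within a full set LFU evicts the least-frequently-used line. The class below is a sketch with illustrative parameters, not the configuration evaluated in the study:

```python
class SetAssociativeLFU:
    """N-way set-associative cache with LFU replacement within each set."""

    def __init__(self, num_sets, ways, line_size=64):
        self.num_sets, self.ways, self.line_size = num_sets, ways, line_size
        self.sets = [dict() for _ in range(num_sets)]   # per set: tag -> use count
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.line_size
        s = self.sets[block % self.num_sets]   # the set this block maps to
        tag = block // self.num_sets
        if tag in s:
            s[tag] += 1                        # bump the LFU use count
            self.hits += 1
            return True
        if len(s) >= self.ways:                # set full: evict the LFU line
            victim = min(s, key=s.get)
            del s[victim]
        s[tag] = 1
        self.misses += 1
        return False

c = SetAssociativeLFU(num_sets=2, ways=2)
for addr in [0, 64, 128, 0]:                   # 0 and 128 share set 0
    c.access(addr)
print(c.hits, c.misses)                        # → 1 3
```

A fully associative cache is the `num_sets=1` corner of this model and a direct-mapped cache is the `ways=1` corner, which is how the three organizations compared in the study relate.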

  8. Cache-based error recovery for shared memory multiprocessor systems

    Science.gov (United States)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

    A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.

  9. Legal Aspects of the Web.

    Science.gov (United States)

    Borrull, Alexandre Lopez; Oppenheim, Charles

    2004-01-01

    Presents a literature review that covers the following topics related to legal aspects of the Web: copyright; domain names and trademarks; linking, framing, caching, and spamdexing; patents; pornography and censorship on the Internet; defamation; liability; conflict of laws and jurisdiction; legal deposit; and spam, i.e., unsolicited mails.…

  10. Perbandingan proxy pada linux dan windows untuk mempercepat browsing website

    Directory of Open Access Journals (Sweden)

    Dafwen Toresa

    2017-05-01

    Full Text Available Abstract: At present, very many organizations, whether educational, governmental, or private companies, try to limit their users' access to the internet, on the grounds that the available bandwidth starts to feel slow once many users are browsing. Speeding up browsing access is a major concern, addressed here with proxy server technology. Deploying a proxy server requires considering the server operating system and the tool used, since it is not yet known on which operating system the best performance is achieved. It is therefore necessary to analyze proxy server performance on two different operating systems: Linux with the Squid tool and Windows with the Winroute tool. This study compares browsing speed from the users' (client) computers. The browser used on the client computers was Mozilla Firefox. The study used two client computers, each running five tests of accessing/browsing the target web sites through the proxy server. From the tests performed, it is concluded that a proxy server on the Linux operating system with the Squid tool gives faster browsing from clients, using the same web browser on different client computers, than a proxy server on the Windows operating system with the Winroute tool. Keywords: Proxy, Bandwidth, Browsing, Squid, Winroute
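For reference, the Squid side of such a setup needs only a handful of directives: a listening port, a memory limit, an on-disk cache directory, and an ACL admitting the client LAN. The fragment below is illustrative; the study's actual configuration and values are not given:

```
# illustrative squid.conf fragment (values are examples, not the study's)
http_port 3128
cache_mem 256 MB
cache_dir ufs /var/spool/squid 1024 16 256   # 1 GB on-disk cache, 16 x 256 subdirs
acl localnet src 192.168.1.0/24              # the client LAN
http_access allow localnet
http_access deny all
```

Clients then point their browser's proxy setting at the server's address and port 3128.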

  11. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Full Text Available Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory with a kind of 'virtual bird' in two experiments similar to those done with real birds; the virtual bird's behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  12. Molecular proxies for paleoclimatology

    Science.gov (United States)

    Eglinton, Timothy I.; Eglinton, Geoffrey

    2008-10-01

    We summarize the applications of molecular proxies in paleoclimatology. Marine molecular records especially are proving to be of value, but certain environmentally persistent compounds can also be measured in lake sediments, loess deposits and ice cores. The fundamentals of this approach are the molecular parameters, the compound abundances and the carbon, hydrogen, nitrogen and oxygen isotopic contents which can be derived by the analysis of sediment extracts. These afford proxy measures which can be interpreted in terms of the conditions which control climate and also reflect its operation. We discuss two types of proxy: those of terrigenous and those of aquatic origin, and exemplify their application in the study of marine sediments through the medium of ten case studies based in the Atlantic, Mediterranean and Pacific Oceans, and in Antarctica. The studies are mainly for periods in the present, the Holocene and particularly the last glacial/interglacial, but they also include one study from the Cretaceous. The terrigenous proxies, which are measures of continental vegetation, are based on higher plant leaf wax compounds, i.e. long-chain (circa C30) hydrocarbons, alcohols and acids. They register the relative contributions of C3 vs. C4 type plants to the vegetation in the source areas. The two marine proxies are measures of sea surface temperatures (SST). The longer-established one (U37K') is based on the relative abundances of C37 alkenones photosynthesized by unicellular algae, members of the Haptophyta. The newest proxy (TEX86) is based on C86 glycerol tetraethers (GDGTs) synthesized in the water column by some of the archaeal microbiota, the Crenarchaeota.

  13. Clark’s Nutcrackers (Nucifraga columbiana Flexibly Adapt Caching Behaviour to a Cooperative Context

    Directory of Open Access Journals (Sweden)

    Dawson Clary

    2016-10-01

    Full Text Available Corvids recognize when their caches are at risk of being stolen by others and have developed strategies to protect these caches from pilferage. For instance, Clark’s nutcrackers will suppress the number of caches they make if being observed by a potential thief. However, cache protection has most often been studied using competitive contexts, so it is unclear whether corvids can adjust their caching in beneficial ways to accommodate non-competitive situations. Therefore, we examined whether Clark’s nutcrackers, a non-social corvid, would flexibly adapt their caching behaviours to a cooperative context. To do so, birds were given a caching task during which caches made by one individual were reciprocally exchanged for the caches of a partner bird over repeated trials. In this scenario, if caching behaviours can be flexibly deployed, then the birds should recognize the cooperative nature of the task and maintain or increase caching levels over time. However, if cache protection strategies are applied independent of social context and simply in response to cache theft, then cache suppression should occur. In the current experiment, we found that the birds maintained caching throughout the experiment. We report that males increased caching in response to a manipulation in which caches were artificially added, suggesting the birds could adapt to the cooperative nature of the task. Additionally, we show that caching decisions were not solely due to motivational factors, instead showing an additional influence attributed to the behaviour of the partner bird.

  14. Cache directory lookup reader set encoding for partial cache line speculation support

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.
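
    The boundary-based reader-set idea in this record can be sketched as follows. This is a simplified software model, not the patented hardware design: for each speculative reader we record the byte boundaries of the portion of the cache line it read, and a later write is conflict-checked only against those recorded ranges. All names (`ReaderSet`, `record_read`, `conflicts`) are illustrative.

```python
# Simplified sketch of a reader-set encoding that records which portion of a
# cache line each speculative reader touched, so a later write can be
# conflict-checked against only the bytes actually read. Illustrative only.

LINE_SIZE = 64  # bytes per cache line (assumed)

class ReaderSet:
    def __init__(self):
        # reader id -> (start, end) byte boundaries of the portion read
        self.readers = {}

    def record_read(self, reader_id, start, end):
        # Widen the recorded boundaries if this reader reads again.
        lo, hi = self.readers.get(reader_id, (start, end))
        self.readers[reader_id] = (min(lo, start), max(hi, end))

    def conflicts(self, w_start, w_end):
        # A write conflicts with every reader whose recorded range overlaps it.
        return [r for r, (lo, hi) in self.readers.items()
                if lo < w_end and w_start < hi]

rs = ReaderSet()
rs.record_read("t0", 0, 8)    # thread t0 read bytes [0, 8)
rs.record_read("t1", 32, 40)  # thread t1 read bytes [32, 40)
print(rs.conflicts(4, 12))    # overlaps only t0's recorded portion
print(rs.conflicts(16, 24))   # overlaps no recorded portion
```

    Recording only boundaries (rather than a full per-byte bitmap) trades precision for encoding size, which is the trade-off the abstract alludes to.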

  15. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    AES or similar algorithms in virtualized environments. This paper applies variants of this cache timing attack to Intel's latest generation of microprocessors. It enables a spy-process to recover cryptographic keys, interacting with the victim processes only over TCP. The threat model is a logically...... separated but CPU co-located attacker with root privileges. We report successful and practically verified applications of this attack against a wide range of microarchitectures, from a two-core Nehalem processor (i5-650) to two-core Haswell (i7-4600M) and four-core Skylake processors (i7-6700). The attack...

  16. Formal verification of an MMU and MMU cache

    Science.gov (United States)

    Schubert, E. T.

    1991-01-01

    We describe the formal verification of a hardware subsystem consisting of a memory management unit and a cache. These devices are verified independently and then shown to interact correctly when composed. The MMU authorizes memory requests and translates virtual addresses to real addresses. The cache improves performance by maintaining a LRU (least recently used) list from the memory resident segment table.
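
    The LRU list mentioned in this abstract is the standard least-recently-used bookkeeping. A minimal sketch, assuming nothing about the verified hardware beyond the LRU policy itself, where an ordered dictionary plays the role of the LRU list (front = least recently used):

```python
from collections import OrderedDict

# Minimal LRU cache sketch: the OrderedDict order serves as the LRU list.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                      # miss
        self.entries.move_to_end(key)        # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1); c.put("b", 2)
c.get("a")          # "a" becomes most recently used
c.put("c", 3)       # capacity exceeded: evicts "b"
print(c.get("b"))   # None
print(c.get("a"))   # 1
```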

  17. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    . However, before implementing such an object cache, an empirical analysis of different organization forms is needed. We use a cross-profiling technique based on aspect-oriented programming in order to evaluate different object cache organizations with standard Java benchmarks. From the evaluation we...

  18. Smart Caching for Efficient Information Sharing in Distributed Information Systems

    Science.gov (United States)

    2008-09-01

    Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy (1997), “Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot...Danzig, Chuck Neerdaels, Michael Schwartz and Kurt Worrell (1996), “A Hierarchical Internet Object Cache,” in USENIX Proceedings, 1996.

  19. Test Set Development for Cache Memory in Modern Microprocessors

    NARCIS (Netherlands)

    Al-Ars, Z.; Hamdioui, S.; Gaydadjiev, G.; Vassiliadis, S.

    2008-01-01

    Up to 53% of the time spent on testing current Intel microprocessors is needed to test on-chip caches, due to the high complexity of memory tests and to the large amount of transistors dedicated to such memories. This paper discusses the methodology used to develop effective and efficient cache

  20. Best Practice for Caching of Single-Path Code

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Cilku, Bekim; Prokesch, Daniel

    2017-01-01

    Single-path code has some unique properties that make it interesting to explore different caching and prefetching alternatives for the stream of instructions. In this paper, we explore different cache organizations and how they perform with single-path code....

  1. Cache valley virus in a patient diagnosed with aseptic meningitis.

    Science.gov (United States)

    Nguyen, Nang L; Zhao, Guoyan; Hull, Rene; Shelly, Mark A; Wong, Susan J; Wu, Guang; St George, Kirsten; Wang, David; Menegus, Marilyn A

    2013-06-01

    Cache Valley virus was initially isolated from mosquitoes and had been linked to central nervous system-associated diseases. A case of Cache Valley virus infection is described. The virus was cultured from a patient's cerebrospinal fluid and identified with real-time reverse transcription-PCR and sequencing, which also yielded the complete viral coding sequences.

  2. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B. T.; Kays, R.; Jansen, P. A.

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The 'memory enhancement hypothesis' states that hoarders reinforce spatial memory of their caches by repeatedly

  3. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  4. Experimental Results of Rover-Based Coring and Caching

    Science.gov (United States)

    Backes, Paul G.; Younse, Paulo; DiCicco, Matthew; Hudson, Nicolas; Collins, Curtis; Allwood, Abigail; Paolini, Robert; Male, Cason; Ma, Jeremy; Steele, Andrew

    2011-01-01

    Experimental results are presented for experiments performed using a prototype rover-based sample coring and caching system. The system consists of a rotary percussive coring tool on a five degree-of-freedom manipulator arm mounted on a FIDO-class rover and a sample caching subsystem mounted on the rover. Coring and caching experiments were performed in a laboratory setting and in a field test at Mono Lake, California. Rock abrasion experiments using an abrading bit on the coring tool were also performed. The experiments indicate that the sample acquisition and caching architecture is viable for use in a 2018 timeframe Mars caching mission and that rock abrasion using an abrading bit may be feasible in place of a dedicated rock abrasion tool.

  5. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic...... primitives. In this paper, we give a cache timing cryptanalysis of stream ciphers using word-based linear feedback shift registers (LFSRs), such as Snow, Sober, Turing, or Sosemanuk. Fast implementations of such ciphers use tables that can be the target for a cache timing attack. Assuming that a small number...... of noise-free cache timing measurements are possible, we describe a general framework showing how the LFSR state for any such cipher can be recovered using very little computational effort. For the ciphers mentioned above, we show how this knowledge can be turned into efficient cache-timing attacks against...

  6. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  7. A Refreshable, On-line Cache for HST Data Retrieval

    Science.gov (United States)

    Fraquelli, Dorothy A.; Ellis, Tracy A.; Ridgaway, Michael; DPAS Team

    2016-01-01

    We discuss upgrades to the HST Data Processing System, with an emphasis on the changes Hubble Space Telescope (HST) Archive users will experience. In particular, data are now held on-line (in a cache) removing the need to reprocess the data every time they are requested from the Archive. OTFR (on the fly reprocessing) has been replaced by a reprocessing system, which runs in the background. Data in the cache are automatically placed in the reprocessing queue when updated calibration reference files are received or when an improved calibration algorithm is installed. Data in the on-line cache are expected to be the most up to date version. These changes were phased in throughout 2015 for all active instruments.The on-line cache was populated instrument by instrument over the course of 2015. As data were placed in the cache, the flag that triggers OTFR was reset so that OTFR no longer runs on these data. "Hybrid" requests to the Archive are handled transparently, with data not yet in the cache provided via OTFR and the remaining data provided from the cache. Users do not need to make separate requests.Users of the MAST Portal will be able to download data from the cache immediately. For data not in the cache, the Portal will send the user to the standard "Retrieval Options Page," allowing the user to direct the Archive to process and deliver the data.The classic MAST Search and Retrieval interface has the same look and feel as previously. Minor changes, unrelated to the cache, have been made to the format of the Retrieval Options Page.

  8. Proxy Smart Card Systems

    OpenAIRE

    Cattaneo, Giuseppe; Faruolo, Pompeo; Palazzo, Vincenzo; Visconti, Ivan

    2010-01-01

    International audience; The established legal value of digital signatures and the growing availability of identity-based digital services are progressively extending the use of smart cards to all citizens, opening new challenging scenarios. Among them, motivated by concrete applications, secure and practical delegation of digital signatures is becoming more and more critical. Unfortunately, secure delegation systems proposed so far (e.g., proxy signatures) include various drawbacks for any pr...

  9. Cache Memory: An Analysis on Replacement Algorithms and Optimization Techniques

    Directory of Open Access Journals (Sweden)

    QAISAR JAVAID

    2017-10-01

    Full Text Available Caching strategies can improve the overall performance of a system by allowing the fast processor and slow memory to operate at the same pace. One important factor in caching is the replacement policy. Advancement in technology has resulted in the evolution of a huge number of techniques and algorithms implemented to improve cache performance. In this paper, analysis is done on different cache optimization techniques as well as replacement algorithms. Furthermore, this paper presents a comprehensive statistical comparison of cache optimization techniques. To the best of our knowledge there is no numerical measure which can tell us the rating of a specific cache optimization technique. We tried to come up with such a numerical figure. By statistical comparison we find out which technique is more consistent among all. For said purpose we calculated the mean and CV (Coefficient of Variation). CV tells us which technique is more consistent. Comparative analysis of different techniques shows that the victim cache is the most consistent technique among all.
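
    The comparison metric described here (mean plus coefficient of variation, with lower CV meaning more consistent) can be sketched directly. The scores below are hypothetical placeholders, not the paper's data:

```python
import statistics

# Sketch of the paper's comparison metric: for each cache optimization
# technique, take its measured scores (hypothetical numbers here), then
# compare the mean and the coefficient of variation (CV = stdev / mean).
# A lower CV indicates a more consistent technique.

def mean_and_cv(scores):
    m = statistics.mean(scores)
    cv = statistics.stdev(scores) / m
    return m, cv

techniques = {
    "victim cache":   [0.82, 0.80, 0.83, 0.81],   # hypothetical scores
    "way prediction": [0.90, 0.60, 0.75, 0.95],   # hypothetical scores
}
for name, scores in techniques.items():
    m, cv = mean_and_cv(scores)
    print(f"{name}: mean={m:.3f} CV={cv:.3f}")
```

    With these made-up numbers the victim cache scores lower on CV, mirroring the paper's conclusion that it is the most consistent technique.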

  10. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in users' logs. We extract the highest-ranked queries of users into the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the two-level cache offers advantages in hit rate, efficiency, and time consumption compared with other cache structures.
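
    The static-plus-dynamic structure described above can be sketched as a small two-level query cache: a fixed static level preloaded with the most popular queries from the logs, backed by a dynamic LRU level for the remaining traffic. The class and method names are illustrative, not the paper's implementation:

```python
from collections import OrderedDict

# Sketch of a two-level query cache: a static cache of historically popular
# queries, plus a dynamic LRU cache as an auxiliary for everything else.

class TwoLevelCache:
    def __init__(self, popular_queries, dynamic_capacity):
        self.static = dict(popular_queries)   # fixed: top queries from logs
        self.dynamic = OrderedDict()          # auxiliary LRU level
        self.capacity = dynamic_capacity

    def get(self, query, fetch):
        if query in self.static:              # level 1: static hit
            return self.static[query]
        if query in self.dynamic:             # level 2: dynamic hit
            self.dynamic.move_to_end(query)
            return self.dynamic[query]
        result = fetch(query)                 # miss: go to the backend
        self.dynamic[query] = result
        if len(self.dynamic) > self.capacity:
            self.dynamic.popitem(last=False)  # evict least recently used
        return result

cache = TwoLevelCache({"weather": "sunny"}, dynamic_capacity=2)
print(cache.get("weather", fetch=lambda q: "backend"))  # static hit
print(cache.get("news", fetch=lambda q: "headlines"))   # miss, then cached
```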

  11. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca

  12. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    The design of tailored hardware has proven a successful strategy to reduce the timing analysis overhead for (hard) real-time systems. The stack cache is an example of such a design that has been proven to provide good average-case performance, while being easy to analyze. So far, however, the ana...... and restored when a task is preempted. We propose (a) an analysis exploiting the simplicity of the stack cache to bound the overhead induced by task pre-emption and (b) an extension of the design that allows to (partially) hide the overhead by virtualizing stack caches....

  13. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    to scratchpad memory regions aids predictability, it is limited to non-recursive programs and static allocation has to take different calling contexts into account. Using a stack cache that dynamically spills data to and fills data from external memory avoids these problems, while its simple design allows...... for efficiently deriving worst-case bounds through static analysis. In this paper we present the design and implementation of software managed caching of stack allocated data in a scratchpad memory. We demonstrate a compiler-aided implementation of a stack cache using the LLVM compiler framework and report on its...

  14. Performance of defect-tolerant set-associative cache memories

    Science.gov (United States)

    Frenzel, J. F.

    1991-01-01

    The increased use of on-chip cache memories has led researchers to investigate their performance in the presence of manufacturing defects. Several techniques for yield improvement are discussed and results are presented which indicate that set-associativity may be used to provide defect tolerance as well as improve the cache performance. Tradeoffs between several cache organizations and replacement strategies are investigated and it is shown that token-based replacement may be a suitable alternative to the widely-used LRU strategy.

  15. Error recovery in shared memory multiprocessors using private caches

    Science.gov (United States)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.

  16. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number...... of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  17. Finite Automata Implementations Considering CPU Cache

    Directory of Open Access Journals (Sweden)

    J. Holub

    2007-01-01

    Full Text Available The finite automata are mathematical models for finite state systems. A more general finite automaton is the nondeterministic finite automaton (NFA), which cannot be directly used. It is usually transformed to the deterministic finite automaton (DFA), which then runs in time O(n), where n is the size of the input text. We present two main approaches to practical implementation of DFA considering CPU cache. The first approach (represented by Table Driven and Hard Coded implementations) is suitable for automata being run very frequently, typically having cycles. The other approach is suitable for a collection of automata from which various automata are retrieved and then run. This second kind of automata is expected to be cycle-free.
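
    The Table Driven implementation mentioned above amounts to one transition-table lookup per input character, giving the O(n) run time. A minimal sketch with an illustrative DFA (accepting binary strings with an even number of 1s; the example automaton is our choice, not the paper's):

```python
# Table Driven DFA sketch: transitions form a table indexed by
# (state, symbol), so running the automaton is one lookup per input
# character, i.e. O(n) in the input length.

transitions = {
    (0, "0"): 0, (0, "1"): 1,   # state 0: even number of 1s seen so far
    (1, "0"): 1, (1, "1"): 0,   # state 1: odd number of 1s seen so far
}
ACCEPTING = {0}

def run_dfa(text, start=0):
    state = start
    for ch in text:
        state = transitions[(state, ch)]  # single table lookup per character
    return state in ACCEPTING

print(run_dfa("1101"))  # three 1s -> False
print(run_dfa("1001"))  # two 1s  -> True
```

    The Hard Coded alternative from the paper would instead compile each state into branch code; the table form shown here keeps the transition data compact and contiguous, which is what makes its cache behavior interesting.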

  18. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  19. Munchausen syndrome by proxy

    Directory of Open Access Journals (Sweden)

    Jovanović Aleksandar A.

    2005-01-01

    Full Text Available This review deals with bibliography on Munchausen syndrome by proxy (MSbP). The name of this disorder was introduced by English psychiatrist Roy Meadow who pointed to diagnostic difficulties as well as to serious medical and legal connotations of MSbP. MSbP was classified in DSM-IV among criteria sets provided for further study as "factitious disorder by proxy", while in ICD-10, though not explicitly cited, MSbP might be classified as "factitious disorders" F68.1. MSbP is a special form of abuse where the perpetrator induces somatic or mental symptoms of illness in the victim under his/her care and then persistently presents the victims for medical examinations and care. The victim is usually a preschool child and the perpetrator is the child's mother. Motivation for such pathological behavior of the perpetrator is considered to be an unconscious need to assume the sick role by proxy, while external incentives such as economic gain are absent. Conceptualization of MSbP development is still in the domain of psychodynamic speculation, its course is chronic and the prognosis is poor considering lack of consistent, efficient and specific treatment. The authors also present the case report of a thirty-three year-old mother who had been abusing her nine year-old son both emotionally and physically over the last several years, forcing him to, together with her, report to the police, medical and educational institutions that he had been the victim of rape, poisoning and beating by various individuals, especially teaching and medical staff. The mother manifested psychosis and her child presented with impaired cognitive development, emotional problems and conduct disorder.

  20. NIC atomic operation unit with caching and bandwidth mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Hemmert, Karl Scott; Underwood, Keith D.; Levenhagen, Michael J.

    2016-03-01

    A network interface controller atomic operation unit and a network interface control method comprising, in an atomic operation unit of a network interface controller, using a write-through cache and employing a rate-limiting functional unit.
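
    The write-through behavior named in this patent abstract can be sketched in miniature: the atomic operation unit applies the update in its cache and immediately propagates the new value to backing memory, so memory never holds a stale copy. This is a functional software model with illustrative names, not the patented hardware:

```python
# Sketch of an atomic operation unit with a write-through cache: every
# atomic update is applied to the cache and written through to backing
# memory in the same step, so memory stays current.

class AtomicUnit:
    def __init__(self, memory):
        self.memory = memory   # backing store: dict of address -> value
        self.cache = {}        # write-through cache

    def fetch_add(self, addr, delta):
        # Read through the cache, apply the atomic op, write through.
        old = self.cache.get(addr, self.memory.get(addr, 0))
        new = old + delta
        self.cache[addr] = new
        self.memory[addr] = new   # write-through: no stale memory copy
        return old                # fetch-and-add returns the old value

mem = {0x10: 5}
unit = AtomicUnit(mem)
print(unit.fetch_add(0x10, 3))  # returns the old value, 5
print(mem[0x10])                # backing memory already updated to 8
```

    The bandwidth-mitigation aspect mentioned in the title (coalescing or rate-limiting the write-through traffic) is omitted here for brevity.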

  1. Reducing Soft-error Vulnerability of Caches using Data Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2016-01-01

    With ongoing chip miniaturization and voltage scaling, particle strike-induced soft errors present an increasingly severe threat to the reliability of on-chip caches. In this paper, we present a technique to reduce the vulnerability of caches to soft errors. Our technique uses data compression to reduce the number of vulnerable data bits in the cache and performs selective duplication of more critical data bits to provide extra protection to them. Microarchitectural simulations have shown that our technique is effective in reducing the architectural vulnerability factor (AVF) of the cache and outperforms another technique. For single and dual-core system configurations, the average reduction in AVF is 5.59X and 8.44X, respectively. Also, the implementation and performance overheads of our technique are minimal and it is useful for a broad range of workloads.

  2. Data Resilience in the dCache Storage System

    Science.gov (United States)

    Rossi, A. L.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Mkrtchyan, T.; Mohiuddin, A.; Sahakyan, M.; Starek, J.; Yasar, S.

    2017-10-01

    In this paper we discuss design, implementation considerations, and performance of a new Resilience Service in the dCache storage system responsible for file availability and durability functionality.

  3. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data result in the requirement of high scalability. Our current priority is to move towards a distributed and properly load balanced set of services based on containers. The aim of this project is to implement the generic caching mechanism applicable to our services and chosen architecture. The project will at first require research about the different aspects of distributed caching (persistence, no gc-caching, cache consistency etc.) and the available technologies followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  4. Caching Strategy Based on Hierarchical Cluster for Named Data Networking

    National Research Council Canada - National Science Library

    Yan, Huan; Gao, Deyun; Su, Wei; Foh, Chuan Heng; Zhang, Hongke; Vasilakos, Athanasios V

    2017-01-01

    The in-network caching strategy in named data networking can not only reduce the unnecessary fetching of content from the original content server deep in the core network and improve the user response...

  5. Cache River National Wildlife Refuge Water Resource Inventory and Assessment

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This Water Resource Inventory and Assessment (WRIA) for Cache River National Wildlife Refuge summarizes available and relevant information for refuge water...

  6. Energy Constraint Node Cache Based Routing Protocol For Adhoc Network

    OpenAIRE

    Dhiraj Nitnaware; Ajay Verma

    2010-01-01

    Mobile Adhoc Networks (MANETs) is a wireless infrastructureless network, where nodes are free to move independently in any direction. The nodes have limited battery power; hence we require energy efficient routing protocols to optimize network performance. This paper aims to develop a new routing algorithm based on the energy status of the node cache. We have named this algorithm as ECNC_AODV (Energy Constraint Node Cache) based routing protocol which is derived from the AODV protocol. The al...

  7. Binary mesh partitioning for cache-efficient visualization.

    Science.gov (United States)

    Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno

    2010-01-01

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take into consideration the memory hierarchy to design cache efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance of previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces less than N/B + O(N/M^{1/d}) cache-misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache oblivious approaches lead to significant performance increases on recent GPU architectures.

  8. Pilfering Eurasian jays use visual and acoustic information to locate caches.

    Science.gov (United States)

    Shaw, Rachael C; Clayton, Nicola S

    2014-11-01

    Pilfering corvids use observational spatial memory to accurately locate caches that they have seen another individual make. Accordingly, many corvid cache-protection strategies limit the transfer of visual information to potential thieves. Eurasian jays (Garrulus glandarius) employ strategies that reduce the amount of visual and auditory information that is available to competitors. Here, we test whether or not the jays recall and use both visual and auditory information when pilfering other birds' caches. When jays had no visual or acoustic information about cache locations, the proportion of available caches that they found did not differ from the proportion expected if jays were searching at random. By contrast, after observing and listening to a conspecific caching in gravel or sand, jays located a greater proportion of caches, searched more frequently in the correct substrate type and searched in fewer empty locations to find the first cache than expected. After only listening to caching in gravel and sand, jays also found a larger proportion of caches and searched in the substrate type where they had heard caching take place more frequently than expected. These experiments demonstrate that Eurasian jays possess observational spatial memory and indicate that pilfering jays may gain information about cache location merely by listening to caching. This is the first evidence that a corvid may use recalled acoustic information to locate and pilfer caches.

  9. Vigi4Med Scraper: A Framework for Web Forum Structured Data Extraction and Semantic Representation.

    Directory of Open Access Journals (Sweden)

    Bissan Audeh

    Full Text Available The extraction of information from social media is an essential yet complicated step for data analysis in multiple domains. In this paper, we present Vigi4Med Scraper, a generic open source framework for extracting structured data from web forums. Our framework is highly configurable; using a configuration file, the user can freely choose the data to extract from any web forum. The extracted data are anonymized and represented in a semantic structure using Resource Description Framework (RDF) graphs. This representation enables efficient manipulation by data analysis algorithms and allows the collected data to be directly linked to any existing semantic resource. To avoid server overload, an integrated proxy with caching functionality imposes a minimal delay between sequential requests. Vigi4Med Scraper represents the first step of Vigi4Med, a project to detect adverse drug reactions (ADRs) from social networks funded by the French drug safety agency Agence Nationale de Sécurité du Médicament (ANSM). Vigi4Med Scraper has successfully extracted greater than 200 gigabytes of data from the web forums of over 20 different websites.

  10. Vigi4Med Scraper: A Framework for Web Forum Structured Data Extraction and Semantic Representation.

    Science.gov (United States)

    Audeh, Bissan; Beigbeder, Michel; Zimmermann, Antoine; Jaillon, Philippe; Bousquet, Cédric

    2017-01-01

    The extraction of information from social media is an essential yet complicated step for data analysis in multiple domains. In this paper, we present Vigi4Med Scraper, a generic open source framework for extracting structured data from web forums. Our framework is highly configurable; using a configuration file, the user can freely choose the data to extract from any web forum. The extracted data are anonymized and represented in a semantic structure using Resource Description Framework (RDF) graphs. This representation enables efficient manipulation by data analysis algorithms and allows the collected data to be directly linked to any existing semantic resource. To avoid server overload, an integrated proxy with caching functionality imposes a minimal delay between sequential requests. Vigi4Med Scraper represents the first step of Vigi4Med, a project to detect adverse drug reactions (ADRs) from social networks funded by the French drug safety agency Agence Nationale de Sécurité du Médicament (ANSM). Vigi4Med Scraper has successfully extracted greater than 200 gigabytes of data from the web forums of over 20 different websites.
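
    The politeness mechanism described in these two records (a caching proxy that imposes a minimal delay between sequential requests) can be sketched as a small fetch wrapper. `download`, `PoliteFetcher` and the delay value are illustrative stand-ins, not the Vigi4Med Scraper API:

```python
import time

# Sketch of a caching fetcher that avoids server overload: cached pages are
# returned without network traffic, and uncached fetches are throttled by a
# minimal delay between sequential requests.

class PoliteFetcher:
    def __init__(self, download, min_delay=1.0):
        self.download = download          # stand-in for the real HTTP fetch
        self.min_delay = min_delay        # minimal delay between requests (s)
        self.cache = {}
        self.last_request = 0.0

    def get(self, url):
        if url in self.cache:
            return self.cache[url]        # cache hit: no request, no delay
        wait = self.min_delay - (time.monotonic() - self.last_request)
        if wait > 0:
            time.sleep(wait)              # throttle sequential requests
        self.last_request = time.monotonic()
        page = self.download(url)
        self.cache[url] = page
        return page

fetcher = PoliteFetcher(download=lambda url: f"<html>{url}</html>", min_delay=0.1)
print(fetcher.get("http://example.org/a"))  # fetched over the "network"
print(fetcher.get("http://example.org/a"))  # served from cache, no delay
```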

  11. [Munchausen by proxy syndrome].

    Science.gov (United States)

    Depauw, A; Loas, G; Delhaye, M

    2015-01-01

    The Munchausen syndrome by proxy (MSBP) was first described in 1977 by the English paediatrician Roy Meadow. The MSBP is an extremely complicated diagnosis because of the difficulty in finding the incriminating evidence of its existence and because of the ethical issue it raises for caregivers. Its implications from a medical, psychological and legal point of view raise difficult questions for any professional confronted with it. In this article we will first present the case of a 16-year-old teenager who had been bedridden in hospital for a year, before an atypical form of MSBP was finally diagnosed, after a stay in a child and adolescent psychiatry unit. We will then discuss this case in light of a literature review on the MSBP.

  12. Ajax and Web Services

    CERN Document Server

    Pruett, Mark

    2006-01-01

    Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.

  13. Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Jordan, Alexander; Brandner, Florian

    2014-01-01

    of the cache content to main memory, if the content was not modified in the meantime. At first sight, this appears to be an average-case optimization. Indeed, measurements show that the number of cache blocks spilled is reduced to about 17% and 30% in the mean, depending on the stack cache size. Furthermore...... this problem. A stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. Likewise, its implementation requires little hardware overhead. This work introduces an optimization of the standard stack cache to avoid redundant spilling...
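
    The lazy-spilling idea, writing a block back to main memory only if it was modified since its last spill, can be illustrated with a toy model (a simplified simulation with FIFO eviction, not the actual stack cache hardware):

```python
class LazyStackCache:
    """Toy model of lazy spilling: on eviction, a block is written back
    to memory only if it is dirty; clean blocks are dropped for free."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}      # block id -> value (insertion order = FIFO)
        self.dirty = set()    # blocks modified since their last spill
        self.memory = {}      # backing store
        self.spill_count = 0  # memory writes actually performed

    def write(self, block, value):
        self.blocks[block] = value
        self.dirty.add(block)
        self._evict_if_needed()

    def load(self, block):
        if block not in self.blocks:
            self.blocks[block] = self.memory[block]  # refill clean from memory
            self._evict_if_needed()
        return self.blocks[block]

    def _evict_if_needed(self):
        while len(self.blocks) > self.capacity:
            victim = next(iter(self.blocks))         # FIFO victim
            value = self.blocks.pop(victim)
            if victim in self.dirty:                 # lazy spill: only if dirty
                self.memory[victim] = value
                self.dirty.discard(victim)
                self.spill_count += 1
            # a clean victim is dropped without a redundant memory write
```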

  14. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  15. 75 FR 9073 - Amendments to Rules Requiring Internet Availability of Proxy Materials

    Science.gov (United States)

    2010-02-26

    ... an Internet Web site; it is not intended to serve as a stand-alone basis for making a voting decision... Exchange Commission 17 CFR Parts 230 and 240 Amendments to Rules Requiring Internet Availability of Proxy... Rules Requiring Internet Availability of Proxy Materials AGENCY: Securities and Exchange Commission...

  16. Proxy consent: moral authority misconceived.

    Science.gov (United States)

    Wrigley, A

    2007-09-01

    The Mental Capacity Act 2005 has provided unified scope in the British medical system for proxy consent with regard to medical decisions, in the form of a lasting power of attorney. While the intentions are to increase the autonomous decision making powers of those unable to consent, the author of this paper argues that the whole notion of proxy consent collapses into a paternalistic judgement regarding the other person's best interests and that the new legislation introduces only an advisor, not a proxy with the moral authority to make treatment decisions on behalf of another. The criticism is threefold. First, there is good empirical evidence that people are poor proxy decision makers as regards accurately representing other people's desires and wishes, and this is therefore a pragmatically inadequate method of gaining consent. Second, philosophical theory explaining how we represent other people's thought processes indicates that we are unlikely ever to achieve accurate simulations of others' wishes in making a proxy decision. Third, even if we could accurately simulate other people's beliefs and wishes, the current construction of proxy consent in the Mental Capacity Act means that it has no significant ethical authority to match that of autonomous decision making. Instead, it is governed by a professional, paternalistic, best-interests judgement that undermines the intended role of a proxy decision maker. The author argues in favour of clearly adopting the paternalistic best-interests option and viewing the proxy as solely an advisor to the professional medical team in helping make best-interests judgements.

  17. Private Computing with Untrustworthy Proxies

    NARCIS (Netherlands)

    Gedrojc, B.

    2011-01-01

    The objective of this thesis is to preserve privacy for the user while untrustworthy proxies are involved in the communication and computation i.e. private computing. A basic example of private computing is an access control system (proxy) which grants access (or not) to users based on fingerprints.

  18. Episodic-like memory during cache recovery by scrub jays.

    Science.gov (United States)

    Clayton, N S; Dickinson, A

    1998-09-17

    The recollection of past experiences allows us to recall what a particular event was, and where and when it occurred, a form of memory that is thought to be unique to humans. It is known, however, that food-storing birds remember the spatial location and contents of their caches. Furthermore, food-storing animals adapt their caching and recovery strategies to the perishability of food stores, which suggests that they are sensitive to temporal factors. Here we show that scrub jays (Aphelocoma coerulescens) remember 'when' food items are stored by allowing them to recover perishable 'wax worms' (wax-moth larvae) and non-perishable peanuts which they had previously cached in visuospatially distinct sites. Jays searched preferentially for fresh wax worms, their favoured food, when allowed to recover them shortly after caching. However, they rapidly learned to avoid searching for worms after a longer interval during which the worms had decayed. The recovery preference of jays demonstrates memory of where and when particular food items were cached, thereby fulfilling the behavioural criteria for episodic-like memory in non-human animals.

  19. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present...... two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks....... In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs....

  20. Lock and Unlock: A Data Management Algorithm for A Security-Aware Cache

    OpenAIRE

    Inoue, Koji

    2006-01-01

    This paper proposes an efficient cache line management algorithm for a security-aware cache architecture (SCache). SCache attempts to detect the corruption of return address values at runtime. When a return address store is executed, the cache generates a replica of the return address. This copied data is treated as read only. Subsequently, when the corresponding return address load is performed, the cache verifies the return address value loaded from the memory stack by means of comparing it...
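
    The replica mechanism can be modeled in a few lines; this is a conceptual simulation of the idea, not the SCache hardware design:

```python
class SCacheSim:
    """Conceptual model of SCache's replica check: when a return address
    is stored, keep a read-only copy; when it is loaded back, compare
    against that copy to detect corruption (e.g. stack smashing)."""

    def __init__(self):
        self.stack = {}     # addr -> value (normal, writable memory)
        self.replicas = {}  # addr -> read-only return-address copy

    def store_return_address(self, addr, value):
        self.stack[addr] = value
        self.replicas[addr] = value     # replica never touched by data writes

    def store_data(self, addr, value):
        self.stack[addr] = value        # e.g. a buffer overflow clobbering addr

    def load_return_address(self, addr):
        value = self.stack[addr]
        if addr in self.replicas and self.replicas[addr] != value:
            raise RuntimeError("return address corruption detected")
        return value
```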

  1. The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications

    Science.gov (United States)

    2015-10-16

    attacks that are relevant to personal computers are cache attacks, which exploit the use of cache memory as a shared resource between different...speed CPUs and a large amount of lower-speed RAM. To bridge the performance gap between these two components, they make use of cache memory: a...type of memory that is smaller but faster than RAM (in terms of access time). Cache memory contains a subset of the RAM’s contents recently accessed by

  2. A Primer on Memory Consistency and Cache Coherence

    CERN Document Server

    Sorin, Daniel; Wood, David

    2011-01-01

    Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached

  3. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

    offers a tremendous functional advantage to a user, the fulltext download delays caused by the network and queuing in servers make the user-perceived interactive performance poor. This paper studies how effective caching of articles at the client level can be achieved as well as at intermediate points...... as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can...
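
    Client-side caching studies of this kind typically replay an access log through a cache model and measure the hit rate. A minimal LRU replay, assuming unit-sized articles, might look like:

```python
from collections import OrderedDict

def lru_hit_rate(accesses, capacity):
    """Replay an access log through an LRU cache of the given capacity
    and return the hit rate (toy analogue of the download-caching study)."""
    cache = OrderedDict()
    hits = 0
    for doc in accesses:
        if doc in cache:
            hits += 1
            cache.move_to_end(doc)          # mark as most recently used
        else:
            cache[doc] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(accesses)
```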

  4. Scatter hoarding and cache pilferage by superior competitors: an experiment with wild boar, Sus scrofa

    NARCIS (Netherlands)

    Suselbeek, L.; Adamczyk, V.M.A.P.; Bongers, F.; Nolet, B.A.; Prins, H.H.T.; Wieren, van S.E.; Jansen, P.A.

    2014-01-01

    Food-hoarding patterns range between larder hoarding (a few large caches) and scatter hoarding (many small caches), and are, in essence, the outcome of a hoard size–number trade-off in pilferage risk. Animals that scatter hoard are believed to do so, despite higher costs, to reduce loss of cached

  5. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    Science.gov (United States)

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
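
    The saving can be illustrated with a toy model in which the result of the previous directory lookup is reused for a back-to-back access to the same address (a heavily simplified sketch of the mechanism, not the patented hardware):

```python
class DirectoryLookupCache:
    """Toy illustration of reusing a cache-directory lookup result for
    sequential accesses to the same memory address."""

    def __init__(self, directory):
        self.directory = directory       # addr -> cache way (the tag store)
        self.last = None                 # (addr, way) of the previous lookup
        self.lookups = 0                 # full directory lookups performed

    def find_way(self, addr):
        if self.last is not None and self.last[0] == addr:
            return self.last[1]          # reuse: no tag-array access needed
        self.lookups += 1                # otherwise pay for a full lookup
        way = self.directory.get(addr)
        self.last = (addr, way)
        return way
```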

  6. Cache-Oblivious Implicit Predecessor Dictionaries with the Working-Set Property

    DEFF Research Database (Denmark)

    Kejlberg-Rasmussen, Casper; Brodal, Gerth Stølting

    2012-01-01

    * additional space. In the cache-oblivious model the log is base B and the cache-obliviousness is due to our black box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound. Previous implicit...

  7. Professional JavaScript for Web Developers

    CERN Document Server

    Zakas, Nicholas C

    2011-01-01

    A significant update to a bestselling JavaScript book As the key scripting language for the web, JavaScript is supported by every modern web browser and allows developers to create client-side scripts that take advantage of features such as animating the canvas tag and enabling client-side storage and application caches. After an in-depth introduction to the JavaScript language, this updated edition of a bestseller progresses to break down how JavaScript is applied for web development using the latest web development technologies. Veteran author and JavaScript guru Nicholas Zakas shows how Jav

  8. Robotic Vehicle Proxy Simulation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies proposes the development of a digital simulation that can replace robotic vehicles in field studies. This proxy simulation will model the...

  9. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

    Modern computer architectures use features which often complicate the WCET analysis of real-time software. Alternative time-predictable designs, and in particular caches, thus are gaining more and more interest. A recently proposed stack cache, for instance, avoids the need for the analysis...... of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk to introduce complexity to the otherwise simple WCET analysis. In this work, we investigate three...... average-case performance and analysis complexity....

  10. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015)

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
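
    The phenomenon is easy to reproduce with a direct-mapped model: the set index is (address / line size) mod number-of-sets, so strides containing a large power of two map every access to the same few sets. The line size and set count below are assumptions for illustration:

```python
def sets_touched(stride, n_accesses, line_size=64, n_sets=256):
    """Count the distinct cache sets hit when striding through memory
    in a direct-mapped cache model (parameters are illustrative)."""
    return len({(i * stride // line_size) % n_sets
                for i in range(n_accesses)})
```

    With 64-byte lines and 256 sets, a small stride of 8 bytes touches all 256 sets, while a stride of 16384 bytes (line size times set count) maps every access to a single set; padding the stride by one cache line restores full coverage.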

  11. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites and provide fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime, storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of harddisks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  12. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2...

  13. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  14. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  15. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    -oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...

  16. Something different - caching applied to calculation of impedance matrix elements

    CSIR Research Space (South Africa)

    Lysko, AA

    2012-09-01

    Full Text Available This paper introduces a new method generally termed memoization, to accelerate filling in the impedance matrix, e.g. in the method of moments (MoM). The memoization stores records for recently computed matrix elements in a cache, and, when...
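
    The memoization scheme can be sketched generically: cache each computed element and reuse it when symmetry makes the same pair reappear. The `interaction` function below is a hypothetical stand-in for an expensive method-of-moments integral, not the paper's code:

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def interaction(m, n):
    """Placeholder for a costly impedance-matrix integral, assumed
    symmetric in its arguments."""
    return 1.0 / (1 + abs(m - n))

def fill_matrix(size):
    """Fill the matrix, exploiting symmetry so each unordered pair is
    computed once and subsequently served from the memoization cache."""
    return [[interaction(min(i, j), max(i, j)) for j in range(size)]
            for i in range(size)]
```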

  17. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview over our analysis of the stream ciphers that were selected for phase 3...

  18. A Cache-Based Hardware Accelerator for Memory Data Movements

    NARCIS (Netherlands)

    Duarte, F.

    2008-01-01

    This dissertation presents a hardware accelerator that is able to accelerate large (including non-parallel) memory data movements, in particular memory copies, performed traditionally by the processors. As todays processors are tied with or have integrated caches with varying sizes (from several

  19. Cache-based memory copy hardware accelerator for multicore systems

    NARCIS (Netherlands)

    Duarte, F.; Wong, S.

    2010-01-01

    In this paper, we present a new architecture of the cache-based memory copy hardware accelerator in a multicore system supporting message passing. The accelerator is able to accelerate memory data movements, in particular memory copies. We perform an analytical analysis based on open-queuing theory

  20. A Cache Architecture for Counting Bloom Filters: Theory and Application

    Directory of Open Access Journals (Sweden)

    Mahmood Ahmadi

    2011-01-01

    Full Text Available Within packet processing systems, lengthy memory accesses greatly reduce performance. To overcome this limitation, network processors utilize many different techniques, for example, multilevel memory hierarchies, special hardware architectures, and hardware threading. In this paper, we introduce a multilevel memory architecture for counting Bloom filters. Based on the probabilities of incrementing the counters in the counting Bloom filter, a multi-level cache architecture called the cached counting Bloom filter (CCBF) is presented, where each cache level stores the items with the same counters. To test the CCBF architecture, we implement a software packet classifier that utilizes basic tuple space search using a 3-level CCBF. The results of mathematical analysis and implementation of the CCBF for packet classification show that the proposed cache architecture decreases the number of memory accesses when compared to a standard Bloom filter. Based on the mathematical analysis of the CCBF, the number of accesses is decreased by at least 53%. The implementation results of the software packet classifier are at most 7.8% (3.5% on average) less than the corresponding mathematical analysis results. This difference is due to some parameters in the packet classification application, such as the number of tuples, the distribution of rules through the tuples, and the hashing functions utilized.
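
    A counting Bloom filter itself is compact to implement; the sketch below shows the counter array that the CCBF's cache levels would sit in front of (sizes and hash construction are illustrative, not the paper's configuration):

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: each slot holds a counter rather
    than a bit, so items can be removed as well as added."""

    def __init__(self, size=1024, n_hashes=3):
        self.counters = [0] * size
        self.size = size
        self.n_hashes = n_hashes

    def _slots(self, item):
        # derive k independent slot indices from salted SHA-256 digests
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for s in self._slots(item):
            self.counters[s] += 1

    def remove(self, item):
        for s in self._slots(item):
            self.counters[s] -= 1

    def __contains__(self, item):
        # no false negatives; false positives possible with low probability
        return all(self.counters[s] > 0 for s in self._slots(item))
```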

  1. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between a fast processor and slow memory, and prefetching mechanisms are proposed to further improve cache performance. In real-time systems, however, caches complicate Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimates on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  2. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

    Full Text Available Caches play an important role in embedded systems, bridging the performance gap between a fast processor and slow memory, and prefetching mechanisms are proposed to further improve cache performance. In real-time systems, however, caches complicate Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimates on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  3. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    Science.gov (United States)

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  4. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    Science.gov (United States)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has positive impacts on the 3D real-time rendering engine. As the amount of visualization data gets larger, the effects become more obvious. Caches are the basis on which the 3D real-time rendering engine smoothly browses through data that is out of core memory or comes from the internet. In this article, a new kind of cache, based on multiple threads and large files, is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered in the engine; the data that is dispatched according to the position of the view point in the horizontal and vertical directions is stored in the pre-rendering cache; the data that is eliminated from the previous cache is stored in the elimination cache and is going to be written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the size limit (128 MB in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading, and the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of the large file is limited in order to map it to core memory and save loading time. Multiple threads are used to update the cache data: to load data to the rendering cache as soon as possible for rendering, to load data to the pre-rendering cache for rendering the next few frames, and to load data to the elimination cache which is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the view point and maintains two lists, the adding list and the deleting list; the adding list indexes the data that should be

  5. Qualitative and Quantitative Sentiment Proxies

    DEFF Research Database (Denmark)

    Zhao, Zeyan; Ahmad, Khurshid

    2015-01-01

    Sentiment analysis is a content-analytic investigative framework for researchers, traders and the general public involved in financial markets. This analysis is based on carefully sourced and elaborately constructed proxies for market sentiment and has emerged as a basis for analysing movements...... and trading volumes. The case study we use is a small market index (the Danish Stock Exchange Index, OMXC 20), together with prevailing sentiment in Denmark, to evaluate the impact of sentiment on OMXC 20. Furthermore, we introduce a rather novel and quantitative sentiment proxy, that is the use of the index

  6. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks

    Directory of Open Access Journals (Sweden)

    Min Chen

    2016-06-01

    Full Text Available Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of the user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of a hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user’s quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.

  7. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks.

    Science.gov (United States)

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-06-25

    Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of the user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of a hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user's quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.

  8. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. Cache is responsible for a major part of a processor's energy consumption (approximately 50%). This paper presents a high-level implementation of a micropipelined asynchronous architecture for an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instruction. The architecture is implemented on a Field Programmable Gate Array (FPGA) chip using the Very High Speed Integrated Circuit Hardware Description Language (VHDL) along with advanced synthesis and place-and-route tools.
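    The cache policy described (direct-mapped, write-through) can be sketched functionally. This minimal Python model illustrates only the policy, not the paper's micropipelined VHDL implementation; the line count and backing-store interface are arbitrary choices:

```python
# Minimal functional model of a direct-mapped, write-through cache.
# One word per line for brevity; a real design caches whole lines.

class DirectMappedWriteThrough:
    def __init__(self, n_lines, memory):
        self.n_lines = n_lines
        self.memory = memory              # backing store: dict addr -> value
        self.tags = [None] * n_lines      # tag per line, None = invalid
        self.data = [None] * n_lines
        self.hits = 0
        self.misses = 0

    def _index_tag(self, addr):
        return addr % self.n_lines, addr // self.n_lines

    def read(self, addr):
        index, tag = self._index_tag(addr)
        if self.tags[index] == tag:
            self.hits += 1
        else:                             # miss: fill the line from memory
            self.misses += 1
            self.tags[index] = tag
            self.data[index] = self.memory.get(addr, 0)
        return self.data[index]

    def write(self, addr, value):
        index, tag = self._index_tag(addr)
        self.tags[index] = tag            # allocate on write
        self.data[index] = value
        self.memory[addr] = value         # write-through: memory updated now
```

    Because the cache is write-through, memory is always consistent with the cache, so eviction never needs a write-back step.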

  9. Data Cache-Energy and Throughput Models: Design Exploration for Embedded Processors

    Directory of Open Access Journals (Sweden)

    Qadri MuhammadYasir

    2009-01-01

    Full Text Available Most modern 16-bit and 32-bit embedded processors contain cache memories to further increase the instruction throughput of the device. Embedded processors that contain cache memories open an opportunity for the low-power research community to model the impact of cache energy consumption and throughput gains. Mathematical models for optimal cache memory configuration have been proposed in the past, but most are too complex to be adapted for modern applications like run-time cache reconfiguration. This paper improves and validates previously proposed energy and throughput models for a data cache, which can be used for overhead analysis of various cache types with a relatively small number of inputs. These models analyze the energy and throughput of a data cache on a per-application basis, providing the hardware and software designer with the feedback vital to tune the cache or application for a given energy budget. The models are suitable for use at design time in the cache optimization process for embedded processors, considering time and energy overhead, or at runtime for reconfigurable architectures.

  10. Data Cache-Energy and Throughput Models: Design Exploration for Embedded Processors

    Directory of Open Access Journals (Sweden)

    Muhammad Yasir Qadri

    2009-01-01

    Full Text Available Most modern 16-bit and 32-bit embedded processors contain cache memories to further increase the instruction throughput of the device. Embedded processors that contain cache memories open an opportunity for the low-power research community to model the impact of cache energy consumption and throughput gains. Mathematical models for optimal cache memory configuration have been proposed in the past, but most are too complex to be adapted for modern applications like run-time cache reconfiguration. This paper improves and validates previously proposed energy and throughput models for a data cache, which can be used for overhead analysis of various cache types with a relatively small number of inputs. These models analyze the energy and throughput of a data cache on a per-application basis, providing the hardware and software designer with the feedback vital to tune the cache or application for a given energy budget. The models are suitable for use at design time in the cache optimization process for embedded processors, considering time and energy overhead, or at runtime for reconfigurable architectures.
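    A minimal sketch of the kind of application-level energy/throughput model the abstract describes, assuming a simple split into hit, line-fill and write-back components. All per-event energies and cycle counts below are invented placeholders, not the paper's validated coefficients:

```python
# Back-of-envelope data-cache energy and timing model. The e_* and t_*
# defaults are placeholders; a real model would calibrate them per
# technology node and cache configuration.

def cache_energy_nj(accesses, miss_rate, writeback_rate,
                    e_hit=0.05, e_fill=2.0, e_writeback=2.5):
    """Estimated data-cache energy (nJ) for one application run."""
    hits = accesses * (1.0 - miss_rate)
    misses = accesses * miss_rate
    writebacks = accesses * writeback_rate
    # A miss still pays the tag lookup (e_hit) before the line fill.
    return hits * e_hit + misses * (e_hit + e_fill) + writebacks * e_writeback

def cache_time_cycles(accesses, miss_rate, t_hit=1, t_miss_penalty=20):
    """Estimated access time in cycles under the same simple model."""
    return accesses * t_hit + accesses * miss_rate * t_miss_penalty
```

    Feeding per-application access counts and miss rates into functions of this shape is what lets a designer trade cache configuration against an energy budget without full simulation.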

  11. Food availability and animal space use both determine cache density of Eurasian red squirrels.

    Directory of Open Access Journals (Sweden)

    Ke Rong

    Full Text Available Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increases the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and cache losses. We conducted systematic cache-sampling investigations to estimate the effects of food availability on the cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache patterns. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not a response to decreased food availability. Cache density declined with hoarding distance and was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. Pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that (1) animal space use is an important factor that affects food hoarding distance and associated cache densities, (2) animals employ different hoarding strategies based on food availability, and (3) seed dispersal outside the original stand is stimulated in poor seed years.

  12. Seed perishability determines the caching behaviour of a food-hoarding bird.

    Science.gov (United States)

    Neuschulz, Eike Lena; Mueller, Thomas; Bollmann, Kurt; Gugerli, Felix; Böhning-Gaese, Katrin

    2015-01-01

    Many animals hoard seeds for later consumption and establish seed caches that are often located at sites with specific environmental characteristics. One explanation for the selection of non-random caching locations is the avoidance of pilferage by other animals. Another possible hypothesis is that animals choose locations that reduce the perishability of stored food, allowing the consumption of unspoiled food items over long time periods. We examined seed perishability and pilferage avoidance as potential drivers of the caching behaviour of spotted nutcrackers (Nucifraga caryocatactes) in the Swiss Alps, where the birds specialize in caching seeds of Swiss stone pine (Pinus cembra). We used seedling establishment as an inverse measure of seed perishability, as established seedlings can no longer be consumed by nutcrackers. We recorded the environmental conditions (i.e., canopy openness and soil moisture) of seed-caching, seedling-establishment and pilferage sites. Our results show that sites of seed caching and seedling establishment had opposing microenvironmental conditions. Canopy openness and soil moisture were negatively related to seed caching but positively related to seedling establishment, i.e., nutcrackers cached seeds preferentially at sites where seed perishability was low. We found no effects of environmental factors on cache pilferage, i.e., neither canopy openness nor soil moisture had significant effects on pilferage rates, so we could not relate caching behaviour to pilferage avoidance. Our study highlights the importance of seed perishability as a mechanism for seed-caching behaviour, which should be considered in future studies. Our findings could have important implications for the regeneration of plants whose seeds are dispersed by seed-caching animals, as the potential of seedlings to establish may strongly decrease if animals cache seeds at sites that favour seed perishability rather than seedling establishment. © 2014 The Authors.

  13. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings, and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems, including C-SPARQL, use a sliding window and evict data based on arrival time. For data streams that include expiration times, a simple arrival-time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source and utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance", which addresses the relevance of data to the expected reasoning, enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
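    The expiration-based half of such an eviction policy can be sketched with a priority queue keyed on expiration timestamps. The class and method names here are invented; the paper's prototypes are C-SPARQL-based and also weigh arrival time and semantic importance, which this fragment omits:

```python
import heapq

# Sketch of expiration-timestamp eviction: items leave the window when
# their source-assigned lifetime ends, independent of arrival order.

class ExpirationCache:
    def __init__(self):
        self._heap = []     # (expires_at, triple), soonest expiry first
        self._live = {}     # triple -> current expires_at

    def insert(self, triple, expires_at):
        self._live[triple] = expires_at
        heapq.heappush(self._heap, (expires_at, triple))

    def evict_expired(self, now):
        # Pop every entry whose expiration timestamp has passed; heap
        # entries superseded by a fresher insert of the same triple are
        # skipped (lazy deletion).
        while self._heap and self._heap[0][0] <= now:
            expires_at, triple = heapq.heappop(self._heap)
            if self._live.get(triple) == expires_at:
                del self._live[triple]

    def window(self):
        return set(self._live)
```

    Lazy deletion keeps both insert and evict at O(log n) without having to search the heap when a triple is re-asserted with a new lifetime.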

  14. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  15. Hybrid update / invalidate schemes for cache coherence protocols

    Directory of Open Access Journals (Sweden)

    R. V. Dovgopol

    2015-01-01

    Full Text Available When considering cache coherence, write-invalidate schemes are generally the default: they invalidate all other copies of a data block during a write. In this paper we propose several hybrid schemes that switch between updating and invalidating on processor writes at runtime, depending on program conditions. Such approaches tend to improve the overall performance of systems in numerous fields, ranging from information security to civil aviation. We created our own cache simulator on which we could implement our schemes, and generated data sets both from commercial benchmarks and through artificial methods to run on the simulator. We analyze the results of running the benchmarks with the various schemes, and suggest further research that can be done in this area.
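    A toy rendering of the hybrid idea, assuming one simple switching heuristic (flip a block to update mode after repeated remote refetches). The threshold and all bookkeeping are invented for this sketch and do not reproduce the paper's schemes:

```python
# Toy hybrid update/invalidate protocol: a block starts in invalidate
# mode; if cores keep refilling it (a sign that invalidations are
# wasteful), writes to that block switch to broadcasting updates.

class HybridCoherence:
    SWITCH_AFTER = 2   # cache fills after which a block flips to update mode

    def __init__(self):
        self.copies = {}     # block -> {core: value}
        self.mode = {}       # block -> "invalidate" or "update"
        self.refetches = {}  # block -> fills observed for this block

    def read(self, core, block, memory):
        holders = self.copies.setdefault(block, {})
        if core not in holders:                      # fill from memory
            holders[core] = memory.get(block, 0)
            self.refetches[block] = self.refetches.get(block, 0) + 1
            if self.refetches[block] >= self.SWITCH_AFTER:
                self.mode[block] = "update"
        return holders[core]

    def write(self, core, block, value, memory):
        memory[block] = value
        holders = self.copies.setdefault(block, {})
        if self.mode.get(block, "invalidate") == "update":
            for sharer in holders:                   # push the new value out
                holders[sharer] = value
        else:
            holders.clear()                          # classic invalidation
            self.refetches[block] = 0
        holders[core] = value
```

    The point of the sketch is only the runtime mode switch per block; any realistic scheme would also bound the update traffic and track sharers in hardware.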

  16. A novel cause of chronic viral meningoencephalitis: Cache Valley virus.

    Science.gov (United States)

    Wilson, Michael R; Suan, Dan; Duggins, Andrew; Schubert, Ryan D; Khan, Lillian M; Sample, Hannah A; Zorn, Kelsey C; Rodrigues Hoffman, Aline; Blick, Anna; Shingde, Meena; DeRisi, Joseph L

    2017-07-01

    Immunodeficient patients are particularly vulnerable to neuroinvasive infections that can be challenging to diagnose. Metagenomic next generation sequencing can identify unusual or novel microbes and is therefore well suited for investigating the etiology of chronic meningoencephalitis in immunodeficient patients. We present the case of a 34-year-old man with X-linked agammaglobulinemia from Australia suffering from 3 years of meningoencephalitis that defied an etiologic diagnosis despite extensive conventional testing, including a brain biopsy. Metagenomic next generation sequencing of his cerebrospinal fluid and brain biopsy tissue was performed to identify a causative pathogen. Sequences aligning to multiple Cache Valley virus genes were identified via metagenomic next generation sequencing. Reverse transcription polymerase chain reaction and immunohistochemistry subsequently confirmed the presence of Cache Valley virus in the brain biopsy tissue. Cache Valley virus, a mosquito-borne orthobunyavirus, has only been identified in 3 immunocompetent North American patients with acute neuroinvasive disease. The reported severity ranges from a self-limiting meningitis to a rapidly fatal meningoencephalitis with multiorgan failure. The virus has never been known to cause a chronic systemic or neurologic infection in humans. Cache Valley virus has also never previously been detected on the Australian continent. Our research subject traveled to North and South Carolina and Michigan in the weeks prior to the onset of his illness. This report demonstrates that metagenomic next generation sequencing allows for unbiased pathogen identification, the early detection of emerging viruses as they spread to new locales, and the discovery of novel disease phenotypes. Ann Neurol 2017;82:105-114. © 2017 The Authors Annals of Neurology published by Wiley Periodicals, Inc. on behalf of American Neurological Association.

  17. The Design and Evaluation of In-Cache Address Translation

    Science.gov (United States)

    1990-02-01


  18. Cache-enabled small cell networks: modeling and tradeoffs.

    Science.gov (United States)

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
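    A drastically simplified version of this kind of analysis can be written down directly, assuming most-popular-content placement and ignoring SINR, backhaul rate and file length, all of which the paper models in closed form:

```python
import math

# With SBSs dropped as a PPP of density lam, the chance that at least
# one SBS lies within range R of a user is 1 - exp(-lam*pi*R^2). If
# every SBS caches the top_c most popular of n_files (Zipf skew s), a
# request is a local cache hit when such an SBS exists AND the file is
# among the cached top_c. The "within range R" service rule is an
# illustrative simplification of the paper's SINR-based coverage.

def zipf_mass(top_c, n_files, skew):
    weights = [1 / r ** skew for r in range(1, n_files + 1)]
    return sum(weights[:top_c]) / sum(weights)

def coverage_probability(lam, radius):
    return 1.0 - math.exp(-lam * math.pi * radius ** 2)

def cache_hit_probability(lam, radius, top_c, n_files, skew):
    return coverage_probability(lam, radius) * zipf_mass(top_c, n_files, skew)
```

    Even this crude product form exhibits the tradeoff the paper quantifies: the hit probability can be raised either by densifying (larger lam) or by enlarging per-station storage (larger top_c).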

  19. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, the throughput of a User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop; it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node B (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of the relays. Each relay allocates RBs to relay UEs based on the size of the relay UE's Transport Block. We also design a relay-UE ACK feedback mechanism to update the data in the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.

  20. Caching Eliminates the Wireless Bottleneck in Video Aware Wireless Networks

    Directory of Open Access Journals (Sweden)

    Andreas F. Molisch

    2014-01-01

    Full Text Available Wireless video is the main driver of rapid growth in cellular data traffic. Traditional methods for increasing network capacity are very costly and do not exploit the unique features of video, especially asynchronous content reuse. In this paper we give an overview of our work that proposed and detailed a new transmission paradigm exploiting content reuse and the widespread availability of low-cost storage. Our network structure uses caching in helper stations (femtocaching) and/or devices, combined with highly spectrally efficient short-range communications, to deliver video files. For femtocaching, we develop optimum storage schemes and dynamic streaming policies that optimize video quality. For caching on devices, combined with device-to-device (D2D) communications, we show that communications within clusters of mobile stations should be used; the cluster size can be adjusted to optimize the tradeoff between frequency reuse and the probability that a device finds a desired file cached by another device in the same cluster. In many situations the network throughput increases linearly with the number of users, and the tradeoff between throughput and outage is better than in traditional base-station-centric systems. Simulation results with realistic numbers of users and channel conditions show that network throughput can be increased by two orders of magnitude compared to conventional schemes.
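    The cluster-size tradeoff can be illustrated numerically under a much cruder caching rule (each device stores one file drawn from a Zipf distribution) than the optimized placement the paper develops; the 1/cluster_size reuse penalty below is likewise only a stand-in for spatial frequency reuse:

```python
# Bigger D2D clusters make it likelier that some device caches the
# requested file, but shrink spatial frequency reuse. Both the random
# caching rule and the reuse penalty are illustrative simplifications.

def zipf(n, s):
    w = [1 / r ** s for r in range(1, n + 1)]
    t = sum(w)
    return [x / t for x in w]

def in_cluster_hit_probability(cluster_size, n_files, skew):
    """P(at least one of cluster_size devices caches the requested file),
    with requests and cached files both Zipf-distributed."""
    p = zipf(n_files, skew)
    miss = sum(pf * (1 - pf) ** cluster_size for pf in p)
    return 1 - miss

def d2d_throughput_proxy(cluster_size, n_files, skew):
    """Illustrative objective: hit probability scaled by 1/cluster_size
    as a stand-in for per-cluster spectrum sharing."""
    return in_cluster_hit_probability(cluster_size, n_files, skew) / cluster_size

# Sweep cluster sizes to locate the best tradeoff under this toy model.
best = max(range(1, 41), key=lambda k: d2d_throughput_proxy(k, 100, 1.2))
```

    The sweep at the end is the toy counterpart of "the cluster size can be adjusted to optimize the tradeoff" in the abstract.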

  1. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this paper presents the results of testing storage models for T2Cs, where the “thin” nature is expressed by the site having either no local data storage, or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA are significantly different from those in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC's caching layer.

  2. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    Full Text Available Thanks to the reusability afforded by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine to customize their application environment. To prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining write-operation capture with a driver's private access-control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture an isolated driver's write operations through page faults, which adversely affects the driver's performance. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, with the shadow page cache, the performance of isolated drivers can be greatly improved without overly impacting Chariot's reliability.

  3. The NOAO Data Cache Initiative - Building a Distributed Online Datastore

    Science.gov (United States)

    Seaman, R.; Barg, I.; Zárate, N.; Smith, C.; Saavedra, N.

    2005-12-01

    The Data Cache Initiative (DCI) of the NOAO Data Products Program is a prototype Data Transport System for NOAO and affiliate facilities. DCI provides pre-tested solutions for conveying data from our large suite of instrumentation to a central mountain data cache. The heart of DCI is an extension of the Save-the-Bits safestore, running for more than a decade (more than 4 million images saved, comprising more than 40 Tbytes). The iSTB server has been simplified by the removal of STB's media handling functionality, and iSTB has been enhanced to remediate each incoming header with information from a database of NOAO instrumentation and an interface to the NOAO proposal database. Each mountain data cache has been implemented on commodity hardware running Redhat 9.0. Software RAID 1 runs over hardware RAID 5 to provide maximum storage reliability for each copy of the data. Each image is transferred from Kitt Peak or Cerro Tololo to the corresponding datastore at the Tucson or La Serena data centers using an rsync-based queue adopted from NCSA. From each data center, the files are transported to the other NOAO data center and also to NCSA for off-site storage using the Storage Resource Broker (SRB) of the San Diego Supercomputer Center. Thus we have three copies of each file on spinning disks or near-online. Major institutional users will be given access to the datastores.

  4. The impact of using combinatorial optimisation for static caching of posting lists

    DEFF Research Database (Denmark)

    Petersen, Casper; Simonsen, Jakob Grue; Lioma, Christina

    2015-01-01

    Caching posting lists can reduce the amount of disk I/O required to evaluate a query. Current methods use optimisation procedures for maximising the cache hit ratio. A recent method selects posting lists for static caching in a greedy manner and obtains higher hit rates than standard cache eviction policies such as LRU and LFU. However, a greedy method does not formally guarantee an optimal solution. We investigate whether the use of methods guaranteed, in theory, to find an approximately optimal solution would yield higher hit rates. Thus, we cast the selection of posting lists for caching as an integer linear programming problem and perform a series of experiments using heuristics from combinatorial optimisation (CCO) to find optimal solutions. Using simulated query logs we find that CCO yields comparable results to a greedy baseline using cache sizes between 200 and 1000 MB, with modest...
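    The greedy baseline such work compares against can be shown in miniature as a knapsack-style selection by frequency-to-size ratio; the toy query log and budget below are invented, and the ILP formulation would replace this loop with an integer program:

```python
# Greedy static selection of posting lists: rank terms by
# (query frequency / posting-list size) and cache until the budget
# is exhausted. Classic knapsack heuristic, no optimality guarantee.

def greedy_static_cache(lists, budget):
    """lists: {term: (frequency, size)}; returns the set of cached terms."""
    order = sorted(lists, key=lambda t: lists[t][0] / lists[t][1], reverse=True)
    cached, used = set(), 0
    for term in order:
        size = lists[term][1]
        if used + size <= budget:
            cached.add(term)
            used += size
    return cached

# Invented toy "query log": term -> (frequency, posting-list size).
toy = {"dog": (90, 30), "cat": (80, 10), "the": (100, 100), "rna": (5, 5)}
```

    Note how "the" is frequent but so large that the ratio ranking skips it, which is exactly the behavior a formal optimizer would have to weigh globally rather than greedily.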

  5. WCET-based comparison of an instruction scratchpad and a method cache

    DEFF Research Database (Denmark)

    Whitham, Jack; Schoeberl, Martin

    2014-01-01

    This paper compares two proposed alternatives to conventional instruction caches: a scratchpad memory (SPM) and a method cache. The comparison considers the true worst-case execution time (WCET) and the estimated WCET bound of programs using either an SPM or a method cache, using large numbers of randomly generated programs. For these programs, we find that a method cache is preferable to an SPM if the true WCET is used, because it leads to execution times that are no greater than those for SPM, and are often lower. However, we also find that analytical pessimism is a significant problem for a method cache. If WCET bounds are derived by analysis, the WCET bounds for an instruction SPM are often lower than the bounds for a method cache. This means that an SPM may be preferable in practical systems.

  6. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel i7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
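    For reference, the Classical recurrence that all the variants traverse is Nussinov's O(n^3) dynamic program; the cache-efficient ByRow/ByBox algorithms reorder this computation without changing it. A plain Python sketch, assuming a minimum loop length of 0 and standard Watson-Crick plus wobble pairs:

```python
# Nussinov's dynamic program: dp[i][j] = max base pairs in seq[i..j].
# The cache-efficient variants in the paper change only the order in
# which these dp cells are visited, not the recurrence itself.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq):
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # interval length minus one
        for i in range(n - span):
            j = i + span
            # Case 1: i and j pair with each other.
            best = dp[i + 1][j - 1] + (1 if (seq[i], seq[j]) in PAIRS else 0)
            # Case 2: i or j is unpaired.
            best = max(best, dp[i + 1][j], dp[i][j - 1])
            # Case 3: bifurcation into two independent substructures.
            for k in range(i + 1, j):
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]
```

    The triangular dp table and the inner bifurcation loop are what make the memory access pattern cache-hostile in the Classical order, which is the problem the ByRow/ByRowSegment/ByBox traversals address.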

  7. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  8. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their large size and by interference accesses, which lead to both performance degradation and wasted energy. In this paper, we propose a behavior-aware cache hierarchy (BACH) that optimally allocates multi-level cache resources among many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. BACH takes full advantage of explored application behaviors and runtime cache resource demands as the bases for cache allocation, so that the cache hierarchy can be optimally configured to meet runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even shows a slight improvement after accounting for hardware overhead.

  9. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of a cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory-based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA, referred to as single-length-cycle 2-attractor cellular automata (TACA), has been planted to detect inconsistencies in the cache line states of processors' private caches. The TACA module captures the coherence status of the CMPs' cache system and memorizes any inconsistent recording of cache line states during the processors' references to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then settle to an attractor state, indicating a quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs' processor pool ensures better efficiency in determining inconsistencies by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of the proposed coherence verification module is much less than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs' cache system.

  10. Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Meyer, U.

    2004-01-01

    We present improved cache-oblivious data structures and algorithms for breadth-first search and the single-source shortest path problem on undirected graphs with non-negative edge weights. Our results remove the performance gap between the currently best cache-aware algorithms for these problems and their cache-oblivious counterparts. Our shortest-path algorithm relies on a new data structure, called bucket heap, which is the first cache-oblivious priority queue to efficiently support a weak DecreaseKey operation.

  11. Munchausen syndrome by proxy: a case report.

    Science.gov (United States)

    Lieder, Holly S; Irving, Sharon Y; Mauricio, Rizalina; Graf, Jeanine M

    2005-01-01

    Munchausen syndrome by proxy is difficult to diagnose unless healthcare providers are alert to its clinical features and management. A case is presented to educate nurses and advanced practice nurses about the nursing, medical, legal, and social complexities associated with Munchausen syndrome by proxy. This article also provides a brief review of the definition of Munchausen syndrome by proxy, its epidemiology, common features of the perpetrator, implications for healthcare personnel, and the legal and international ramifications of Munchausen syndrome by proxy.

  12. Munchausen syndrome by adult proxy: a review of the literature.

    Science.gov (United States)

    Burton, M Caroline; Warren, Mark B; Lapid, Maria I; Bostwick, J Michael

    2015-01-01

    Munchausen syndrome by proxy (MSBP), more formally known as factitious disorder imposed on another, is a form of abuse in which a caregiver deliberately produces or feigns illness in a person under his or her care so that the proxy will receive medical care that gratifies the caregiver. Although well documented in the pediatric literature, few cases of MSBP with adult proxies (MSB-AP) have been reported. This study reviews existing literature on MSB-AP to provide a framework for clinicians to recognize this disorder. We searched Ovid MEDLINE, Ovid EMBASE, PubMed, Web of Knowledge, and PsychINFO, supplemented by bibliographic examination. We identified 13 cases of MSB-AP. Perpetrators were caregivers, most (62%) were women, and many worked in healthcare. The age range of the victims was 21 to 82 years. Most were unaware of the abuse, although in 2 cases the victim may have colluded with the perpetrator. Disease fabrication most often resulted from poisoning. MSB-AP should be included in the differential diagnosis of patients presenting with a complex constellation of symptoms without a unifying etiology and an overly involved caregiver with suspected psychological gain. Early identification is necessary so that healthcare providers do not unknowingly perpetuate harm through treatments that satisfy the perpetrator's psychological needs at the proxy's expense. © 2014 Society of Hospital Medicine.

  13. Shareholder Activism Through the Proxy Process

    NARCIS (Netherlands)

    Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper provides evidence on the corporate governance role of shareholder-initiated proxy proposals. Previous studies debate over whether activists use proxy proposals to discipline firms or to simply advance their self-serving agendas, and whether proxy proposals are effective at all in

  14. Shareholder Activism through the Proxy Process

    NARCIS (Netherlands)

    Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper provides evidence on the corporate governance role of shareholder-initiated proxy proposals. Previous studies debate over whether activists use proxy proposals to discipline firms or to simply advance their self-serving agendas, and whether proxy proposals are effective at all in

  15. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\frac{N}{B}\log_{M/B}\frac{N}{B}+T/B)$ memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.

  16. Static probabilistic timing analysis for real-time systems using random replacement caches

    NARCIS (Netherlands)

    Altmeyer, S.; Cucu-Grosjean, L.; Davis, R.I.

    2015-01-01

    In this paper, we investigate static probabilistic timing analysis (SPTA) for single processor real-time systems that use a cache with an evict-on-miss random replacement policy. We show that previously published formulae for the probability of a cache hit can produce results that are optimistic and

  17. Selfish-LRU: Preemption-Aware Caching for Predictability and Performance

    NARCIS (Netherlands)

    Reineke, J.; Altmeyer, S.; Grund, D.; Hahn, S.; Maiza, C.

    2014-01-01

    We introduce Selfish-LRU, a variant of the LRU (least recently used) cache replacement policy that improves performance and predictability in preemptive scheduling scenarios. In multitasking systems with conventional caches, a single memory access by a preempting task can trigger a chain reaction
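    The victim-selection idea behind Selfish-LRU can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: on a miss, the running task prefers to evict the least-recently-used block owned by *another* task, falling back to plain LRU when every cached block is its own. All class and variable names are invented.

```python
from collections import OrderedDict

class SelfishLRU:
    """Toy Selfish-LRU: per-block ownership plus LRU ordering."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # block -> owning task, in LRU order

    def access(self, task, block):
        """Simulate one access; return True on hit, False on miss."""
        if block in self.cache:
            self.cache.move_to_end(block)  # refresh recency
            self.cache[block] = task       # accessor becomes the owner
            return True
        if len(self.cache) >= self.capacity:
            # Selfish choice: LRU block of a *different* task if one
            # exists; otherwise the global LRU block (classic LRU).
            victim = next((b for b, t in self.cache.items() if t != task),
                          next(iter(self.cache)))
            del self.cache[victim]
        self.cache[block] = task
        return False
```

Under preemption, this means the preempting task consumes other tasks' dead blocks first, so the preempted task finds more of its working set intact when it resumes.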

  18. An Efficient Schema for Cloud Systems Based on SSD Cache Technology

    Directory of Open Access Journals (Sweden)

    Jinjiang Liu

    2013-01-01

    Full Text Available Traditional caching strategies are mainly based on the memory cache and take read-write speed as their ultimate goal. However, with the emergence of SSDs, the design ideas behind traditional caches are no longer directly applicable: the read-write characteristics of SSDs and their limited number of erase cycles must be taken into account when designing a caching strategy. In this paper, a flexible and adaptive cache strategy based on SSDs, called FAC, is proposed, which gives full consideration to the characteristics of the SSD itself, combines traditional caching design ideas, and maximizes the role the SSD plays. Its core mechanisms are the dynamic adjustment of capacity to access patterns and an efficient selection algorithm for hot data. We have developed a dynamic adjustment algorithm for hot-data sections, DASH for short, to adjust the read-write area capacity to suit the current usage scenario dynamically. The experimental results show that both the read and write performance of the SSD-based caching strategy improve considerably, especially read performance. Compared with a traditional caching strategy, the technique can be used in engineering practice to reduce the number of writes to the SSD and prolong its service life without lowering read-write performance.
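    The "adjust the read-write area capacity dynamically" idea could, under loose assumptions, look like the sketch below. This is not the paper's DASH algorithm; it merely illustrates resizing a read area in proportion to the recently observed read share, with a clamp so neither area disappears. All names and the minimum-share rule are invented.

```python
class AdaptiveSSDCache:
    """Toy read/write area balancer for an SSD cache."""

    def __init__(self, total_blocks, min_share=0.1):
        self.total = total_blocks
        self.min_share = min_share          # floor for either area
        self.read_capacity = total_blocks // 2
        self.reads = 0
        self.writes = 0

    def record(self, is_read):
        """Count one request toward the current observation window."""
        if is_read:
            self.reads += 1
        else:
            self.writes += 1

    def rebalance(self):
        """Resize the read area to match the recent read share,
        clamped to [min_share, 1 - min_share]; reset the window."""
        total = self.reads + self.writes
        if total == 0:
            return self.read_capacity
        share = self.reads / total
        share = min(max(share, self.min_share), 1 - self.min_share)
        self.read_capacity = int(self.total * share)
        self.reads = self.writes = 0
        return self.read_capacity
```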

  19. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching": relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii), we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  20. Integrating Cache Related Pre-emption Delay Analysis into EDF Scheduling

    NARCIS (Netherlands)

    Lunniss, W.; Altmeyer, S.; Maiza, C.; Davis, R.I.

    2013-01-01

    Cache memories have been introduced into embedded systems to prevent memory access times from becoming an unacceptable performance bottleneck. Memory and cache are split into blocks containing instructions and data. During a pre-emption, blocks from the pre-empting task can evict those of the

  1. Measuring SIP proxy server performance

    CERN Document Server

    Subramanian, Sureshkumar V

    2013-01-01

    Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Networks (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network. SIP Pr

  2. Adaptive Neuro-fuzzy Inference System as Cache Memory Replacement Policy

    Directory of Open Access Journals (Sweden)

    CHUNG, Y. M.

    2014-02-01

    Full Text Available To date, no cache memory replacement policy is available that performs efficiently for all types of workloads; replacement policies suitable in a level 1 cache may not be suitable in level 2. In this study, we focused on developing an adaptive neuro-fuzzy inference system (ANFIS) as a replacement policy for improving level 2 cache performance in terms of miss ratio. The recency and frequency of referenced blocks were used as input data for ANFIS to make replacement decisions. MATLAB was employed as a training tool to obtain the trained ANFIS model, which was then implemented on SimpleScalar. Simulations on SimpleScalar showed that the miss ratio improved by as much as 99.95419% and 99.95419% for instruction level 2 cache, and by up to 98.04699% and 98.03467% for data level 2 cache, compared with least recently used and least frequently used, respectively.
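    At replacement time, a trained inference model of this kind reduces to a function that maps a block's recency and frequency to a keep-priority; the block with the lowest priority is evicted. The sketch below illustrates only that decision step: `toy_score` is a stand-in for the trained ANFIS model, with invented weights, not the paper's network.

```python
def pick_victim(blocks, score):
    """blocks: {block_id: (recency, frequency)}, both normalized to [0, 1].
    Evict the block the model ranks lowest. `score` stands in for the
    trained inference model described in the abstract."""
    return min(blocks, key=lambda b: score(*blocks[b]))

def toy_score(recency, frequency):
    """Illustrative stand-in model: keep blocks that are both recently
    and frequently used (equal weights are an assumption)."""
    return 0.5 * recency + 0.5 * frequency
```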

  3. LoColms: an innovative approach of enhancing traditional classroom form of education by promoting web-based distance learning in the poorer countries.

    Science.gov (United States)

    Ngarambe, Donart; Pan, Yun-he; Chen, De-ren

    2003-01-01

    There have been numerous recent attempts to promote technology-based education (Shrestha, 1997) in the poorer third-world countries, but so far none has provided a sustainable solution: they are either centered and controlled from abroad, relying solely on foreign donors for their sustenance, or they are not web-based, which makes distribution problematic, and some are not affordable by most of the local population. In this paper we discuss an application we are developing, the Local College Learning Management System (LoColms), which is both sustainable and economical enough to suit the situation in these countries. The application is a web-based system that aims at improving the traditional form of education by empowering the local universities. Its economy comes from the fact that it is supported by traditional communication technology, the public switched telephone network (PSTN), which eliminates the need for the packet-switched or dedicated private virtual networks (PVN) usually required in similar situations. At a later stage, we shall incorporate ontology and paging tools to improve resource sharing and storage optimization in the Proxy Caches (ProCa) and LoColms servers. The system is based on the client/server paradigm, and its infrastructure consists of the PSTN and ProCa, with the learning centers accessing the universities by means of the point-to-point protocol (PPP).

  4. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    Full Text Available There has been increasing interest in exploiting advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. Several wireless communication methods are currently available for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and a DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available; a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (an available area of location-dependent data) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
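    The scope/mobility-specification idea can be illustrated with a minimal sketch: prefetch a broadcast item only if its scope intersects the positions the vehicle plans to visit, and evict items whose scope the vehicle has already passed. This is an invented illustration, not the paper's cache system, and the geometry is reduced to 1-D intervals along a road for brevity.

```python
def should_prefetch(scope, planned_positions):
    """Prefetch a broadcast item if its scope (a road interval where
    the data is valid) intersects the mobility specification (the
    positions the vehicle plans to visit)."""
    lo, hi = scope
    return any(lo <= p <= hi for p in planned_positions)

def evict_candidates(cache, position, heading):
    """Return cached items whose scope lies behind the vehicle,
    assuming one-directional movement along the road (heading > 0
    means increasing position)."""
    out = []
    for item, (lo, hi) in cache.items():
        passed = (hi < position) if heading > 0 else (lo > position)
        if passed:
            out.append(item)
    return out
```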

  5. Improved Space Bounds for Cache-Oblivious Range Reporting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Zeh, Norbert

    2011-01-01

    We provide improved bounds on the size of cache-oblivious range reporting data structures that achieve the optimal query bound of O(log_B N + K/B) block transfers. Our first main result is an O(N√(log N log log N))-space data structure that achieves this query bound for 3-d dominance reporting and 2-d three-sided range reporting. No cache-oblivious o(N log N/log log N)-space data structure for these problems was known before, even when allowing a query bound of O(log_2^{O(1)} N + K/B) block transfers. Our result also implies improved space bounds for general 2-d and 3-d orthogonal range reporting. Our second main result shows that any cache-oblivious 2-d three-sided range reporting data structure with the optimal query bound has to use Ω(N log^ε N) space, thereby improving on a recent lower bound for the same problem. Using known transformations, the lower bound extends to 3-d dominance reporting and 3…

  6. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

    Full Text Available There has been increasing interest in exploiting advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. Several wireless communication methods are currently available for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and a DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available; a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (an available area of location-dependent data) and “mobility specification” (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  7. Cache-Oblivious Planar Orthogonal Range Searching and Counting

    DEFF Research Database (Denmark)

    Arge, Lars; Brodal, Gerth Stølting; Fagerberg, Rolf

    2005-01-01

    We present the first cache-oblivious data structure for planar orthogonal range counting, and improve on previous results for cache-oblivious planar orthogonal range searching. Our range counting structure uses O(N log_2 N) space and answers queries using O(log_B N) memory transfers, where B is the block size of any memory level in a multilevel memory hierarchy. Using bit manipulation techniques, the space can be further reduced to O(N). The structure can also be modified to support more general semigroup range sum queries in O(log_B N) memory transfers, using O(N log_2 N) space for three-sided queries and O(N log_2^2 N/log_2 log_2 N) space for four-sided queries. Based on the O(N log N) space range counting structure, we develop a data structure that uses O(N log_2 N) space and answers three-sided range queries in O(log_B N + T/B) memory transfers, where T is the number of reported points. Based…

  8. Evidence against observational spatial memory for cache locations of conspecifics in marsh tits Poecile palustris.

    Science.gov (United States)

    Urhan, A Utku; Emilsson, Ellen; Brodin, Anders

    2017-01-01

    Many species in the family Paridae, such as marsh tits Poecile palustris, are large-scale scatter hoarders of food that make cryptic caches and disperse these in large year-round territories. The perhaps most well-known species in the family, the great tit Parus major, does not store food itself but is skilled in stealing caches from the other species. We have previously demonstrated that great tits are able to memorise positions of caches they have observed marsh tits make and later return and steal the food. As great tits are explorative in nature and unusually good learners, it is possible that such "memorisation of caches from a distance" is a unique ability of theirs. The other possibility is that this ability is general in the parid family. Here, we tested marsh tits in the same experimental set-up in which we previously tested great tits. We allowed caged marsh tits to observe a caching conspecific in a specially designed indoor arena. After a retention interval of 1 or 24 h, we allowed the observer to enter the arena and search for the caches. The marsh tits showed no evidence of such observational memorisation ability, and we believe that such ability is more useful for a non-hoarding species. Why should a marsh tit that memorises hundreds of its own caches in the field bother with the difficult task of memorising other individuals' caches? We argue that the close-up memorisation procedure that marsh tits use at their own caches may be a different type of observational learning than memorisation of caches made by others. For example, the latter must be done from a distance and hence may require the ability to adopt an allocentric perspective, i.e. the ability to visualise the cache from the hoarder's perspective. Members of the Paridae family are known to possess foraging techniques that are cognitively advanced. Previously, we have demonstrated that a non-hoarding parid species, the great tit P. major, is able to memorise positions of caches that

  9. Novel Quantum Proxy Signature without Entanglement

    Science.gov (United States)

    Xu, Guang-bao

    2015-08-01

    Proxy signature is an important research topic in classical cryptography since it has many applications in real life, but only a few quantum proxy signature schemes have been proposed up to now. In this paper, we propose a quantum proxy signature scheme designed based on the quantum one-time pad. Our scheme can be realized easily since it only uses single-particle states. Security analysis shows that it is secure and meets all the properties of a proxy signature, such as verifiability, distinguishability, unforgeability and undeniability.

  10. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As the segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace the unused segments under the interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield higher hit ratio than previous work under various environmental parameters.
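    The two-layer partitioning with an adjustable split can be sketched as follows. This is a simplified illustration of the idea, not the paper's design: eviction within each tier is plain FIFO, the share bounds are invented, and all names are hypothetical.

```python
class TwoTierSegmentCache:
    """Toy two-tier video-segment cache with an adjustable split."""

    def __init__(self, capacity, tier1_share=0.5):
        self.capacity = capacity
        self.tier1_share = tier1_share
        self.tier1 = []  # to-be-played segments (FIFO by playback order)
        self.tier2 = []  # possibly-played segments

    def tier1_capacity(self):
        return int(self.capacity * self.tier1_share)

    def admit(self, segment, to_be_played):
        """Admission control: to-be-played segments go to tier 1,
        others to tier 2; evict the oldest when a tier is full."""
        tier = self.tier1 if to_be_played else self.tier2
        cap = (self.tier1_capacity() if to_be_played
               else self.capacity - self.tier1_capacity())
        if len(tier) >= cap:
            tier.pop(0)
        tier.append(segment)

    def grow_tier1(self, step=0.1):
        """As segment accesses become frequent, enlarge tier 1 at the
        expense of tier 2 (pass a negative step for the reverse)."""
        self.tier1_share = min(0.9, max(0.1, self.tier1_share + step))
```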

  11. Caching for where and what: evidence for a mnemonic strategy in a scatter-hoarder.

    Science.gov (United States)

    Delgado, Mikel M; Jacobs, Lucia F

    2017-09-01

    Scatter-hoarding animals face the task of maximizing retrieval of their scattered food caches while minimizing loss to pilferers. This demand should select for mnemonics, such as chunking, i.e. a hierarchical cognitive representation that is known to improve recall. Spatial chunking, where caches with the same type of content are related to each other in physical location and memory, would be one such mechanism. Here we tested the hypothesis that scatter-hoarding eastern fox squirrels (Sciurus niger) are organizing their caches in spatial patterns consistent with a chunking strategy. We presented 45 individual wild fox squirrels with a series of 16 nuts of four different species, either in runs of four of the same species or 16 nuts offered in a pseudorandom order. Squirrels either collected each nut from a different location or collected all nuts from a single location; we then mapped their subsequent cache distributions using GPS. The chunking hypothesis predicted that squirrels would spatially organize caches by nut species, regardless of presentation order. Our results instead demonstrated that squirrels spatially chunked their caches by nut species but only when caching food that was foraged from a single location. This first demonstration of spatial chunking in a scatter hoarder underscores the cognitive demand of scatter hoarding.

  12. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  13. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait for more pages to group before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases, with a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for mapping table storage in RAM.
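    The threshold-based placement rule described above can be sketched as follows. This is an invented illustration of the BMR/PMR decision, not CACH-FTL's actual data structures: small page groups are buffered page-mapped until enough pages of the same block accumulate, at which point the block migrates to the block-mapped region.

```python
class CachFTLSketch:
    """Toy BMR/PMR placement for page groups flushed by the cache."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.pmr = {}     # block id -> set of buffered page numbers
        self.bmr = set()  # blocks written via block mapping

    def flush(self, block, pages):
        """Place one group of pages (all of the same block)."""
        if len(pages) >= self.threshold:
            self.bmr.add(block)            # large group: block-mapped write
            self.pmr.pop(block, None)
            return "BMR"
        buf = self.pmr.setdefault(block, set())
        buf.update(pages)                  # small group: page-mapped buffer
        if len(buf) >= self.threshold:     # enough pages accumulated:
            self.bmr.add(block)            # migrate the block to the BMR
            del self.pmr[block]
            return "PMR->BMR"
        return "PMR"
```

The threshold is the tuning knob the abstract mentions: raising it keeps more data in the (memory-hungry but flexible) page-mapped region; lowering it favours cheap block-mapped writes.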

  14. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs): the age of data based on periodic requests, the popularity of on-demand requests, the communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data, so as to retain the most valuable information in the cache for prolonged time periods: the higher the value, the longer the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and most difficult to retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects of WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content over a long period even under severe in-network node failures.
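    A VoI-style replacement decision over the four parameters listed above might be sketched as follows. The linear combination, the equal weights, and the freshness transform are illustrative assumptions, not the paper's formula; only the choice of inputs comes from the abstract.

```python
def voi(age, popularity, interference_cost, active_duration,
        weights=(0.25, 0.25, 0.25, 0.25)):
    """Assign a value to a cached item: higher = retain longer.
    Fresh, popular data that is costly to refetch (high interference,
    long sensor active time) is the most valuable to keep."""
    freshness = 1.0 / (1.0 + age)          # illustrative age transform
    w1, w2, w3, w4 = weights
    return (w1 * freshness + w2 * popularity
            + w3 * interference_cost + w4 * active_duration)

def evict(cache):
    """cache: {item: (age, popularity, cost, duration)}.
    Evict the lowest-value item."""
    return min(cache, key=lambda k: voi(*cache[k]))
```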

  15. Evidence against observational spatial memory for cache locations of conspecifics in marsh tits Poecile palustris

    OpenAIRE

    Urhan, A. Utku; Emilsson, Ellen; Brodin, Anders

    2017-01-01

    Abstract Many species in the family Paridae, such as marsh tits Poecile palustris, are large-scale scatter hoarders of food that make cryptic caches and disperse these in large year-round territories. The perhaps most well-known species in the family, the great tit Parus major, does not store food itself but is skilled in stealing caches from the other species. We have previously demonstrated that great tits are able to memorise positions of caches they have observed marsh tits make and later...

  16. Accurate low-cost methods for performance evaluation of cache memory systems

    Science.gov (United States)

    Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.

    1988-01-01

    Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements while still predicting true program behavior. Sampling techniques are applied while the address trace is collected from a workload, drastically reducing the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, the concept of a primed cache is introduced to simulate large caches by the sampling-based method.
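    The sampling idea can be sketched minimally: simulate only periodic intervals of the trace, let the first part of each interval prime (warm) the cache without being measured, and average the per-interval miss rates. This is an invented illustration with an LRU cache, not the paper's method, and all parameter names are hypothetical.

```python
from collections import OrderedDict

def lru_miss_rate(trace, capacity, warmup):
    """Simulate a fully associative LRU cache over one trace interval,
    measuring the miss rate only after `warmup` priming references."""
    cache, misses, refs = OrderedDict(), 0, 0
    for i, addr in enumerate(trace):
        hit = addr in cache
        if hit:
            cache.move_to_end(addr)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict LRU block
            cache[addr] = True
        if i >= warmup:                     # measure after priming only
            refs += 1
            misses += (not hit)
    return misses / refs if refs else 0.0

def sampled_miss_rate(trace, capacity, interval_len, step, warmup):
    """Estimate the full-trace miss rate from intervals of length
    `interval_len` taken every `step` references."""
    rates = [lru_miss_rate(trace[s:s + interval_len], capacity, warmup)
             for s in range(0, len(trace) - interval_len + 1, step)]
    return sum(rates) / len(rates)
```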

  17. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real-estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss on e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  18. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    Science.gov (United States)

    2015-05-01

    …common practice today to allow hardware components such as last-level caches (LLCs) and memory controllers to be shared across cores; this can be…

  19. A Multi-Layered Image Cache for Scientific Visualization

    Energy Technology Data Exchange (ETDEWEB)

    LaMar, E C

    2003-06-26

    We introduce a multi-layered image cache system that is designed to work with a pool of rendering engines to facilitate an interactive, frameless, asynchronous rendering environment. Our system decouples the rendering from the display of imagery. Therefore, it decouples render frequency and resolution from display frequency and resolution, and allows asynchronous transmission of imagery instead of the compute/send cycle of standard parallel systems. It also allows local, incremental refinement of imagery without requiring all imagery to be re-rendered. Images are placed in fixed position in camera (vs. world) space to eliminate occlusion artifacts. Display quality is improved by increasing the number of images. Interactivity is improved by decreasing the number of images.

  20. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    Science.gov (United States)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.

  1. dCache, towards Federated Identities & Anonymized Delegation

    Science.gov (United States)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect with regard to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a finer-grained authorization mechanism that supports anonymized delegation.
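Macaroons achieve this attenuated, anonymized delegation through chained HMACs: each caveat extends the signature, so any holder can add restrictions but never remove them, and only the minting service can verify. A minimal stdlib Python sketch of the mechanism (the key, identifier and caveat strings are illustrative, not dCache's actual token format):

```python
import hmac, hashlib

def mint(root_key: bytes, identifier: bytes, caveats):
    """Mint a macaroon: an HMAC chain over the identifier and each caveat."""
    sig = hmac.new(root_key, identifier, hashlib.sha256).digest()
    for c in caveats:
        sig = hmac.new(sig, c, hashlib.sha256).digest()
    return identifier, list(caveats), sig

def attenuate(macaroon, caveat: bytes):
    """Anyone holding the macaroon can add (but never remove) a caveat."""
    identifier, caveats, sig = macaroon
    new_sig = hmac.new(sig, caveat, hashlib.sha256).digest()
    return identifier, caveats + [caveat], new_sig

def verify(root_key: bytes, macaroon, satisfied) -> bool:
    """Only the service holding root_key can verify; every caveat must hold."""
    identifier, caveats, sig = macaroon
    expect = hmac.new(root_key, identifier, hashlib.sha256).digest()
    for c in caveats:
        if c not in satisfied:
            return False
        expect = hmac.new(expect, c, hashlib.sha256).digest()
    return hmac.compare_digest(expect, sig)

key = b"service-root-key"                 # hypothetical root secret
m = mint(key, b"user=alice", [b"path=/data/run42"])
m = attenuate(m, b"op=read")              # delegate a read-only capability
print(verify(key, m, {b"path=/data/run42", b"op=read"}))  # True
print(verify(key, m, {b"path=/data/run42"}))              # False: op caveat unmet
```

The verifier never learns who performed the attenuation, which is what makes the delegation anonymized.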

  2. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  3. Munchausen Syndrome by Proxy: Identification and Intervention

    Science.gov (United States)

    Walk, Alexandra; Davies, Susan C.

    2010-01-01

    This article discusses the Munchausen syndrome by proxy (MSBP), also known as "factitious disorder by proxy" (FDBP) and fabricated and/or induced illness, which is a mental illness in which a person lies about the physical or mental well-being of a person he/she is responsible for. Most often the dynamic transpires between a mother and her child.…

  4. Cache River National Wildlife Refuge [Land Status Map: Sheet 5 of 9

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This map was produced by the Division of Realty to depict landownership at Cache River National Wildlife Refuge. It was generated from rectified aerial photography,...

  5. Cache River National Wildlife Refuge: Annual Water Management Program : Calendar Year 1991

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Cache River National Wildlife Refuge's Annual Water Management Plan has been developed to meet the station objectives. The purpose of this plan is to establish a...

  6. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    National Research Council Canada - National Science Library

    Zhaohui Luo; Minghui LiWang; Zhijian Lin; Lianfen Huang; Xiaojiang Du; Mohsen Guizani

    2017-01-01

    Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm to provide caching capabilities in proximity to mobile devices in 5G networks, enables fast, popular content delivery of delay-sensitive...

  7. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing continues to increase. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. The findings of the research show that security in the cloud can be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system; this can be adopted by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.
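As a rough illustration of the suggested future direction, a minimal Apriori pass over hypothetical cache-access logs can surface sets of objects that are frequently requested together (the transactions and the min-support threshold below are invented for the example):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support."""
    k_sets = {frozenset([i]) for t in transactions for i in t}
    freq = {}
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        current = {s: c for s, c in counts.items() if c >= min_support}
        freq.update(current)
        # candidate generation: join frequent k-sets into (k+1)-sets
        survivors = list(current)
        k_sets = {a | b for a, b in combinations(survivors, 2)
                  if len(a | b) == len(a) + 1}
    return freq

# hypothetical cache-access logs: which objects each session requested
logs = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = apriori([frozenset(t) for t in logs], min_support=3)
print(freq[frozenset({"a", "b"})])  # 3 sessions contain both a and b
```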

  8. Mobile Acoustical Bat Monitoring Annual Summary Report CY 2014- Cache River National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — These reports summarize bat calls collected along transects at Cache River National Wildlife Refuge for the CY 2014. Calls were classified using Bat Call ID software...

  9. Routing metrics for cache-based reliable transport in wireless sensor networks: Doc 721

    National Research Council Canada - National Science Library

    António M Grilo; Mike Heidrich

    2013-01-01

    .... The energy and bandwidth constraints of WSNs have motivated the development of new reliable transport protocols in which intermediate nodes are able to cache packets and to retransmit them to the...

  10. Routing metrics for cache-based reliable transport in wireless sensor networks

    National Research Council Canada - National Science Library

    Grilo, António M; Heidrich, Mike

    2013-01-01

    .... The energy and bandwidth constraints of WSNs have motivated the development of new reliable transport protocols in which intermediate nodes are able to cache packets and to retransmit them to the...

  11. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  12. A Birdstone and Phallic Pestle Cache from CA-ORA-365

    OpenAIRE

    Desautels, Nancy A.; Koerper, Henry C.; Couch, Jeffrey S.

    2005-01-01

    This study describes a ceremonial cache containing miniature pestle-like artifacts, a "spike " fragment, an obsidian biface, and a steatite birdstone recovered at Huntington Beach Mesa, Orange County. The phallic naturalism of one "pestle" suggests that more stylized specimens likewise denoted phallic symbols. The direct association of small "pestles" with birdstones, in this and other caches, supports the proposition that birdstones communicated fertility/fecundity symbolism.

  13. The relationship between dominance, corticosterone, memory, and food caching in mountain chickadees (Poecile gambeli).

    Science.gov (United States)

    Pravosudov, Vladimir V; Mendoza, Sally P; Clayton, Nicola S

    2003-08-01

    It has been hypothesized that in avian social groups subordinate individuals should maintain more energy reserves than dominants, as an insurance against increased perceived risk of starvation. Subordinates might also have elevated baseline corticosterone levels because corticosterone is known to facilitate fattening in birds. Recent experiments showed that moderately elevated corticosterone levels resulting from unpredictable food supply are correlated with enhanced cache retrieval efficiency and more accurate performance on a spatial memory task. Given the correlation between corticosterone and memory, a further prediction is that subordinates might be more efficient at cache retrieval and show more accurate performance on spatial memory tasks. We tested these predictions in dominant-subordinate pairs of mountain chickadees (Poecile gambeli). Each pair was housed in the same cage but caching behavior was tested individually in an adjacent aviary to avoid the confounding effects of small spaces in which birds could unnaturally and directly influence each other's behavior. In sharp contrast to our hypothesis, we found that subordinate chickadees cached less food, showed less efficient cache retrieval, and performed significantly worse on the spatial memory task than dominants. Although the behavioral differences could have resulted from social stress of subordination, and dominant birds reached significantly higher levels of corticosterone during their response to acute stress compared to subordinates, there were no significant differences between dominants and subordinates in baseline levels or in the pattern of adrenocortical stress response. We find no evidence, therefore, to support the hypothesis that subordinate mountain chickadees maintain elevated baseline corticosterone levels whereas lower caching rates and inferior cache retrieval efficiency might contribute to reduced survival of subordinates commonly found in food-caching parids.

  14. Milestone Report - Level-2 Milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache

    Energy Technology Data Exchange (ETDEWEB)

    Shoopman, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Description: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015 subsystems with 6PB of disk cache were deployed for production use in LLNL’s unclassified HPSS environment. Following that, on September 23, 2015 subsystems with 9 PB of disk cache were deployed for production use in LLNL’s classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL’s unclassified and classified HPSS environments, and only the newly deployed systems were in use.

  15. A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-01-01

    Full Text Available While non-volatile memories (NVMs) provide high density and low leakage, they also have low write-endurance. This, along with the write-variation introduced by cache management policies, can lead to a very small cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called the HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with smaller latency and energy. This also reduces the number of writes to the NVM cache, which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from the SPEC2006 suite. We observe that ENLIVE provides a higher improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime for two-, four- and eight-core systems, respectively. In addition, it works well for a range of system and algorithm parameters and incurs only small overhead.
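The core mechanism, detecting frequently written blocks and diverting them to a small SRAM store, can be sketched as follows (the 4-entry capacity, 3-write migration threshold and coldest-first eviction rule are illustrative assumptions, not the paper's parameters):

```python
class HotStore:
    """Sketch of ENLIVE-style write diversion: hot blocks migrate to a small
    SRAM HotStore so their writes stop wearing out the NVM cache."""
    def __init__(self, capacity=4, threshold=3):
        self.capacity, self.threshold = capacity, threshold
        self.write_counts = {}      # per-block write counters
        self.hot = set()            # blocks currently resident in SRAM
        self.nvm_writes = 0         # writes that actually reach the NVM cache

    def write(self, block):
        if block in self.hot:
            return                  # absorbed by SRAM: no NVM wear
        self.nvm_writes += 1
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        if self.write_counts[block] >= self.threshold:
            if len(self.hot) >= self.capacity:
                # evict the coldest resident block back to NVM
                coldest = min(self.hot, key=lambda b: self.write_counts[b])
                self.hot.discard(coldest)
            self.hot.add(block)

hs = HotStore()
for blk in ["x", "x", "x", "x", "x", "y"]:
    hs.write(blk)
print(hs.nvm_writes)  # 4: x's first 3 writes plus y's; x's 4th/5th hit SRAM
```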

  16. 78 FR 70987 - Proxy Advisory Firm Roundtable

    Science.gov (United States)

    2013-11-27

    ... of proxy advisory firm use by investment advisers and institutional investors and potential changes... Special Counsel, Division of Investment Management, at 202-551-6700, or Raymond Be, Special Counsel...

  17. A serial Munchausen syndrome by proxy

    National Research Council Canada - National Science Library

    Esra Unal; Volkan Unal; Ali Gul; Mustafa Celtek; Behzat Diken; Ibrahim Balcioglu

    2017-01-01

    Munchausen syndrome by proxy (MSBP) is a form of child abuse that describes children whose parents or caregivers invent illness stories and substantiate the stories by fabricating false physical signs...

  18. Seizures and Munchausen Syndrome by Proxy

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2002-05-01

    Full Text Available The prevalence, morbidity and mortality, diagnosis and management of cases of fabricated seizures and child abuse (Munchausen syndrome by proxy, MSBP) are assessed by pediatricians at the University of Wales College of Medicine, Cardiff, UK.

  19. Munchausen Syndrome by Proxy: A Clinical Vignette

    Science.gov (United States)

    Zylstra, Robert G.; Miller, Karl E.; Stephens, Walter E.

    2000-01-01

    Munchausen syndrome by proxy is the act of one person fabricating or inducing an illness in another to meet his or her own emotional needs through the treatment process. The diagnosis is poorly understood and controversial. We report here the case of a 6-year-old boy who presented with possible pneumonia, nausea, vomiting, and diarrhea and whose mother was suspected of Munchausen syndrome by proxy. PMID:15014581

  20. PENGELOLAAN JARINGAN INTERNET DENGAN PROXY WINGATE

    Directory of Open Access Journals (Sweden)

    Titin Winarti

    2005-01-01

    Full Text Available A proxy is a part of a protocol that functions as a link for a single host, or as a link for several hosts, between one network and another. The WinGate proxy is software used to share an internet connection through a single IP address that is connected to the internet.

  1. gLExec and MyProxy integration in the ATLAS/OSG PanDA workload management system

    Science.gov (United States)

    Caballero, J.; Hover, J.; Litmaath, M.; Maeno, T.; Nilsson, P.; Potekhin, M.; Wenaus, T.; Zhao, X.

    2010-04-01

    Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production and Distributed Analysis), an ATLAS and OSG workload management system, follows this design. However, in the simplest (and most efficient) pilot submission approach of identical pilots carrying the same identifying grid proxy, end-user accounting by the site can only be done with application-level information (PanDA maintains its own end-user accounting), and end-user jobs run with the identity and privileges of the proxy carried by the pilots, which may be seen as a security risk. To address these issues, we have enabled PanDA to use gLExec, a tool provided by EGEE which runs payload jobs under an end-user's identity. End-user proxies are pre-staged in a credential caching service, MyProxy, and the information needed by the pilots to access them is stored in the PanDA DB. gLExec then extracts from the user's proxy the proper identity under which to run. We describe the deployment, installation, and configuration of gLExec, and how PanDA components have been augmented to use it. We describe how difficulties were overcome, and how security risks have been mitigated. Results are presented from OSG and EGEE Grid environments performing ATLAS analysis using PanDA and gLExec.

  2. Innovative Mobile E-Healthcare Systems: A New Rule-Based Cache Replacement Strategy Using Least Profit Values

    Directory of Open Access Journals (Sweden)

    Ramzi A. Haraty

    2016-01-01

    Full Text Available Providing and managing e-health data from heterogeneous and ubiquitous e-health service providers in a content distribution network (CDN) for providing e-health services is a challenging task. A content distribution network is normally utilized to cache e-health media contents such as real-time medical images and videos. Efficient management, storage, and caching of mobile patients' distributed e-health data in a CDN or in a cloud computing environment gives doctors, health care professionals, and other e-health service providers immediate access to e-health information for efficient decision making as well as better treatment. Caching is one of the key methods in distributed computing environments to improve the performance of data retrieval. Cache replacement algorithms determine which item in the cache should be evicted and replaced. Many caching approaches have been proposed, but SACCS—the Scalable Asynchronous Cache Consistency Scheme—has proved to be more scalable than the others. In this work, we propose a new cache replacement algorithm—Profit SACCS—that is based on a rule-based least profit value. It replaces the least-recently-used strategy that SACCS uses. A comparison with different cache replacement strategies is also presented.
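The abstract does not spell out the rule-based profit function, but the general shape of least-profit-value eviction can be sketched with an assumed profit of frequency × fetch-cost / size (the item names, sizes and costs are made up):

```python
class ProfitCache:
    """Sketch of least-profit-value eviction. The profit formula here
    (frequency * fetch_cost / size) is an assumption for illustration;
    the paper's rule-based weights differ."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}   # key -> (size, fetch_cost, freq)
        self.used = 0

    def profit(self, key):
        size, cost, freq = self.items[key]
        return freq * cost / size

    def access(self, key, size, fetch_cost):
        if key in self.items:
            s, c, f = self.items[key]
            self.items[key] = (s, c, f + 1)   # hit: bump frequency
            return "hit"
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=self.profit)  # least profit goes first
            self.used -= self.items.pop(victim)[0]
        self.items[key] = (size, fetch_cost, 1)
        self.used += size
        return "miss"

cache = ProfitCache(capacity=12)
cache.access("mri.dcm", size=6, fetch_cost=9)    # costly medical image
cache.access("mri.dcm", size=6, fetch_cost=9)    # reuse raises its profit
cache.access("note.txt", size=4, fetch_cost=1)   # cheap, rarely reused
cache.access("scan.dcm", size=4, fetch_cost=8)   # needs space: evicts note.txt
print(sorted(cache.items))  # ['mri.dcm', 'scan.dcm']
```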

  3. Accurate modeling of cache replacement policies in a Data-Grid.

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J.; Shoshani, Arie

    2003-01-23

    Caching techniques have been used to improve the performance gap of storage hierarchies in computing systems. In data-intensive applications that access large data files over a wide area network, such as a data grid, caching mechanisms can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and varying transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references (LCB-K)." Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), Greedy DualSize (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
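Of the baselines mentioned, GreedyDual-Size illustrates well how varying sizes and costs enter a replacement decision: each object carries priority H = L + cost/size, the minimum-H object is evicted, and the global inflation value L rises to the victim's priority so long-resident objects age out. A compact sketch (the object sizes and costs are made up):

```python
class GreedyDualSize:
    """GreedyDual-Size, one of the policies LCB-K is compared against:
    priority H = L + cost/size; evict the minimum-H object."""
    def __init__(self, capacity):
        self.capacity, self.used, self.L = capacity, 0, 0.0
        self.objs = {}  # key -> (size, H)

    def access(self, key, size, cost):
        if key in self.objs:
            s, _ = self.objs[key]
            self.objs[key] = (s, self.L + cost / s)   # refresh priority on hit
            return "hit"
        while self.used + size > self.capacity and self.objs:
            victim = min(self.objs, key=lambda k: self.objs[k][1])
            self.L = self.objs[victim][1]             # inflate future priorities
            self.used -= self.objs.pop(victim)[0]
        self.objs[key] = (size, self.L + cost / size)
        self.used += size
        return "miss"

gds = GreedyDualSize(capacity=10)
gds.access("big", size=8, cost=2)     # H = 0.25: large, cheap to refetch
gds.access("small", size=2, cost=4)   # H = 2.0; cache is now full
gds.access("new", size=4, cost=4)     # evicts "big" (lowest H); L becomes 0.25
print(sorted(gds.objs))               # ['new', 'small']
```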

  4. Inferring climate variability from skewed proxy records

    Science.gov (United States)

    Emile-Geay, J.; Tingley, M.

    2013-12-01

    Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and
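The effect of approach (ii) can be illustrated with an idealized exponential proxy response to a normally distributed climate variable (the synthetic record below stands in for a runoff-like proxy; it is not the Laguna Pallcacocha data):

```python
import math, random, statistics

random.seed(0)
# Idealized proxy: a nonlinear (exponential) response to a normally
# distributed climate variable produces a heavily skewed record.
climate = [random.gauss(0, 1) for _ in range(5000)]
proxy = [math.exp(c) for c in climate]

def skewness(xs):
    """Sample skewness: third standardized moment."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# Approach (ii) from the abstract: a power/log transform before analysis.
transformed = [math.log(p) for p in proxy]
print(round(skewness(proxy), 2))        # strongly positive: heavy right tail
print(round(skewness(transformed), 2))  # near zero: symmetry recovered
```

Classic procedures applied to the raw record would mistake the tail for bursts of climate variability; the transform removes that artifact before inference.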

  5. Food-caching in timber wolves, and the question of rules of action syntax.

    Science.gov (United States)

    Phillips, D P; Danilchuk, W; Ryon, J; Fentress, J C

    1990-04-16

    This report presents data on the sequence of motor operations used by captive timber wolves to cache food. Videotapes were obtained of 151 caching episodes by 8 wolves. The vast majority of these episodes contained 3 distinct phases, each composed of movements unique to that phase. The excavation of the cache site was always done with the forefeet, and burying of the food was always done with the snout. Both the identity of the movements, and the serial order of phases were independent of the sex of the animal, the season in which the observations were made, and the nature of the substrate. A comparison of the temporal sequencing of these actions with the temporal stereotypy seen in rodent motor patterns (e.g. grooming) revealed a striking phenomenological similarity. The factors shaping the temporal sequencing in the two behaviors are, however, probably very different. This is because much, though not all, of the temporal stereotypy in the sequence of movements used by the wolf in caching is constrained by the logistics of the cache operation, while this is not the case for the phases of facial grooming in rodents. The implications of our data for the kinds of behavioral evidence required for ascription of such stereotypy to a central pattern generator are discussed.

  6. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. First, the whole program execution is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is applied, which avoids a time-consuming linearization process, to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages that would cause severe Cache conflicts within a time slot to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of system energy profit for different MMU page sizes as well as Time Slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach obtains a 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
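The page-selection step in the paper solves an INP; a greedy approximation conveys the idea, ranking each time slot's pages by cache-conflict energy saved per byte and remapping them to SPM until it is full (the page names, sizes and energies below are hypothetical profile data, not from the paper):

```python
def select_pages(pages, spm_capacity):
    """Greedy stand-in for the paper's integer nonlinear program: within one
    time slot, remap to SPM the pages with the highest conflict energy saved
    per byte, until the SPM is full."""
    ranked = sorted(pages, key=lambda p: p["conflict_energy"] / p["size"],
                    reverse=True)
    chosen, used = [], 0
    for p in ranked:
        if used + p["size"] <= spm_capacity:
            chosen.append(p["name"])
            used += p["size"]
    return chosen

slot_pages = [  # per-time-slot conflict stats, as a TSCCG-like profile yields
    {"name": "heap_0", "size": 4, "conflict_energy": 90},
    {"name": "stack_0", "size": 2, "conflict_energy": 60},
    {"name": "glob_a", "size": 4, "conflict_energy": 20},
    {"name": "glob_b", "size": 2, "conflict_energy": 5},
]
print(select_pages(slot_pages, spm_capacity=6))  # ['stack_0', 'heap_0']
```

Running this per time slot, rather than once for the whole program, is what makes the allocation dynamic.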

  7. Two-Layer Error Control Codes Combining Rectangular and Hamming Product Codes for Cache Error

    Directory of Open Access Journals (Sweden)

    Meilin Zhang

    2014-02-01

    Full Text Available We propose a novel two-layer error control code, combining the error detection capability of rectangular codes and the error correction capability of Hamming product codes in an efficient way, in order to increase cache error resilience for many-core systems while maintaining low power, area and latency overhead. Exploiting the low latency and overhead of rectangular codes and the high error control capability of Hamming product codes, the two-layer error control code employs simple rectangular codes on each cache line to detect cache errors, loading the extra Hamming product code check bits only when an error is detected, thus enabling reliable large-scale cache operations. Analysis and experiments are conducted to evaluate the cache fault-tolerant capability of various existing solutions and the proposed approach. The results show that the proposed approach can significantly increase Mean-Error-To-Failure (METF) and Mean-Time-To-Failure (MTTF) by up to 2.8×, reduce storage overhead by over 57%, and increase instructions per cycle (IPC) by up to 7%, compared to the complex four-way 4EC5ED; and it increases METF and MTTF by up to 133×, reduces storage overhead by over 11%, and achieves a similar IPC compared to simple eight-way single-error-correcting double-error-detecting (SECDED) codes. The cost of the proposed approach is no more than 4% external memory access overhead.
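The first-layer idea, cheap rectangular (row/column) parity used for detection only, can be sketched as follows (an 8-bit line in a 2×4 arrangement is used for brevity; real cache lines and the second-layer Hamming product code are much larger):

```python
def rect_parity(bits, width):
    """Even parity over each row and column of a cache line viewed as a
    width-bit-wide matrix; detection only, as in the scheme's first layer."""
    rows = [bits[i:i + width] for i in range(0, len(bits), width)]
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(col) % 2 for col in zip(*rows)]
    return row_par, col_par

line = [1, 0, 1, 1,
        0, 1, 1, 0]           # 8-bit line as a 2x4 matrix
saved = rect_parity(line, 4)  # check bits stored alongside the line

line[5] ^= 1                  # a soft error flips one bit
row_par, col_par = rect_parity(line, 4)
print(saved == (row_par, col_par))   # False: the flip is detected
# The mismatching row and column parities locate the flipped bit, though in
# the proposed scheme correction is deferred to the second-layer Hamming
# product code, whose check bits are fetched only on detection.
```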

  8. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    Science.gov (United States)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, CPU and GPU processors are integrated on the same chip, which poses a new challenge for last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently, accessing the last-level cache. CPU and GPU have different memory access characteristics, and so differ in their sensitivity to last-level cache (LLC) capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. On the contrary, GPU applications can tolerate an increase in memory access latency when there is sufficient thread-level parallelism. Taking advantage of the GPU's tolerance for memory latency, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving most of the LLC space for CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache-sensitive and the GPU application is cache-insensitive, the overall performance of the system improves significantly.
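The bypass policy can be mocked up with a toy LRU-based LLC in which GPU requests skip the cache entirely (the addresses and the two-line capacity are arbitrary choices for the example):

```python
from collections import OrderedDict

class BypassLLC:
    """Sketch of the paper's idea: GPU requests bypass the shared LLC and go
    straight to memory, leaving the whole capacity to CPU lines."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # LRU order: oldest first
        self.cpu_hits = self.cpu_misses = self.gpu_mem_accesses = 0

    def access(self, addr, source):
        if source == "gpu":
            self.gpu_mem_accesses += 1   # latency hidden by thread parallelism
            return
        if addr in self.lines:
            self.lines.move_to_end(addr)
            self.cpu_hits += 1
        else:
            self.cpu_misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict LRU CPU line
            self.lines[addr] = True

llc = BypassLLC(capacity=2)
for addr, src in [(1, "cpu"), (2, "cpu"), (100, "gpu"), (200, "gpu"), (1, "cpu")]:
    llc.access(addr, src)
print(llc.cpu_hits, llc.gpu_mem_accesses)  # 1 2: the GPU stream never evicts CPU lines
```

With a shared policy, the two GPU streaming accesses would have evicted address 1 and turned the final CPU access into a miss.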

  9. Analysis of power gating in different hierarchical levels of 2MB cache, considering variation

    Science.gov (United States)

    Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza

    2015-09-01

    This article reintroduces the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including the cell, row, bank and entire cache memory, in a 16 nm FinFET technology. Different SRAM cell structures, such as 6T, 8T, 9T and 10T, are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and stability overheads. The power saving of cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy-delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is the best option for optimising the power of the entire cache with the fewest drawbacks. Comparisons of cells show that cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the power variation of all structures by at least 70%.

  10. Will video caching remain energy efficient in future core optical networks?

    Directory of Open Access Journals (Sweden)

    Niemah Izzeldin Osman

    2017-02-01

    Full Text Available Optical networks are expected to cater for the future Internet due to the high speed and capacity that they offer. Caching in the core network has proven to reduce power usage for various video services in current optical networks. This paper investigates whether video caching will remain power efficient in future optical networks. The study compares the power consumption of caching in a current IP over WDM core network to a future network, considering a number of features that exemplify future networks: (1) network devices consume less power, (2) network devices have sleep-mode capabilities, (3) IP over WDM implements lightpath bypass, and (4) the demand for video content significantly increases and high-definition video dominates. Results show that video caching in future optical networks saves up to 42% of power consumption even when the power consumption of transport reduces. These results suggest that video caching is expected to remain a green option for video services in the future Internet.

  11. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data where more than one speculative thread is running in parallel, while the first level cache does not hold any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first level cache.
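
The evict-on-write policy described above can be modeled compactly. The following is a minimal Python sketch, not the patent's actual design; class and method names, and the use of a per-thread version dictionary for the second-level cache, are illustrative assumptions.

```python
class TwoLevelCache:
    """Toy model of evict-on-write: a speculative write is written
    through L1 to L2, then the line is evicted from L1 so later reads
    must fetch the (versioned) data from L2."""

    def __init__(self):
        self.l1 = {}   # address -> value; never holds speculative versions
        self.l2 = {}   # address -> {thread_id: value}; one version per thread

    def speculative_write(self, thread_id, addr, value):
        # Write through to L2, which tracks a version per speculative thread.
        self.l2.setdefault(addr, {})[thread_id] = value
        # Evict on write: drop the line from L1 entirely.
        self.l1.pop(addr, None)

    def read(self, thread_id, addr):
        # An L1 miss falls through to the versioned L2.
        if addr in self.l1:
            return self.l1[addr]
        return self.l2.get(addr, {}).get(thread_id)

cache = TwoLevelCache()
cache.speculative_write(thread_id=1, addr=0x40, value=7)
assert 0x40 not in cache.l1       # line was evicted from L1
assert cache.read(1, 0x40) == 7   # served from the versioned L2
```

The point of the eviction is visible in the assertions: after the speculative write, L1 can never return stale or speculative data, because it simply no longer holds the line.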

  12. A Network-Aware Distributed Storage Cache for Data Intensive Environments

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, B.L.; Lee, J.R.; Johnston, W.E.; Crowley, B.; Holding, M.

    1999-12-23

    Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data at multiple sites around the world. The technologies, the middleware services, and the architectures that are used to build useful high-speed, wide area distributed systems constitute the field of data intensive computing. In this paper the authors describe an architecture for data intensive applications in which they use a high-speed distributed data cache as a common element for all of the sources and sinks of data. This cache-based approach provides standard interfaces to a large, application-oriented, distributed, on-line, transient storage system. They describe their implementation of this cache, how they have made it network aware, and how they perform dynamic load balancing based on the current network conditions. They also show large increases in application throughput enabled by access to knowledge of the network conditions.

  13. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System

    Directory of Open Access Journals (Sweden)

    Chithra Rajagopal

    2015-01-01

    Full Text Available WiMAX networks are the most suitable for E-Learning in rural areas through their Broadcast and Multicast Services. Authentication of users in WiMAX is carried out by an AAA server. In E-Learning systems, users must be forced to perform reauthentication to overcome the session hijacking problem. Reauthentication, however, introduces frequent delays in data access, which is critical for delay-sensitive applications such as E-Learning. To enable fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep user credentials, it reduces the delay occurring during reauthentication by 50%.
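
The fast path that such a credential cache enables can be sketched as follows. This is an illustrative Python model only; the class name, TTL value, and callback interface are assumptions, not the paper's protocol.

```python
import time

class KeyCache:
    """Sketch of cache-based reauthentication: keep verified credentials
    for a short TTL so a repeat authentication can skip the round trip
    to the AAA server."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.entries = {}                 # user -> (key, expiry timestamp)

    def authenticate(self, user, key, aaa_verify):
        cached = self.entries.get(user)
        if cached and cached[0] == key and cached[1] > time.time():
            return True                   # fast path: no AAA server delay
        if aaa_verify(user, key):         # slow path: full AAA exchange
            self.entries[user] = (key, time.time() + self.ttl)
            return True
        return False

calls = []
def aaa(user, key):                       # stand-in for the real AAA server
    calls.append(user)
    return key == "secret"

cache = KeyCache()
assert cache.authenticate("alice", "secret", aaa)
assert cache.authenticate("alice", "secret", aaa)  # served from the cache
assert calls == ["alice"]                          # AAA contacted only once
```

The trade-off stated in the abstract is visible here: the `entries` dictionary is the extra storage, and the skipped second call to `aaa` is the saved reauthentication delay.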

  14. Simplifying and speeding the management of intra-node cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  15. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Full Text Available Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3 and 4 cache sites after retention intervals of 1, 10 and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items across retention intervals of up to one minute without training.

  16. Interacting Cache memories: evidence for flexible memory use by Western Scrub-Jays (Aphelocoma californica).

    Science.gov (United States)

    Clayton, Nicola S; Yu, Kara Shirley; Dickinson, Anthony

    2003-01-01

    When Western Scrub-Jays (Aphelocoma californica) cached and recovered perishable crickets, N. S. Clayton, K. S. Yu, and A. Dickinson (2001) reported that the jays rapidly learned to search for fresh crickets after a 1-day retention interval (RI) between caching and recovery but to avoid searching for perished crickets after a 4-day RI. In the present experiments, the jays generalized their search preference for crickets to intermediate RIs and used novel information about the rate of decay of crickets presented during the RI to reverse these search preferences at recovery. The authors interpret this reversal as evidence that the birds can integrate information about the caching episode with new information presented during the RI.

  17. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  18. Advantages of masting in European beech: timing of granivore satiation and benefits of seed caching support the predator dispersal hypothesis.

    Science.gov (United States)

    Zwolak, Rafał; Bogdziewicz, Michał; Wróbel, Aleksandra; Crone, Elizabeth E

    2016-03-01

    The predator satiation and predator dispersal hypotheses provide alternative explanations for masting. Both assume satiation of seed-eating vertebrates. They differ in whether satiation occurs before or after seed removal and caching by granivores (predator satiation and predator dispersal, respectively). This difference is largely unrecognized, but it is demographically important because cached seeds are dispersed and often have a microsite advantage over nondispersed seeds. We conducted rodent exclosure experiments in two mast and two nonmast years to test predictions of the predator dispersal hypothesis in our study system of yellow-necked mice (Apodemus flavicollis) and European beech (Fagus sylvatica). Specifically, we tested whether the fraction of seeds removed from the forest floor is similar during mast and nonmast years (i.e., lack of satiation before seed caching), whether masting decreases the removal of cached seeds (i.e., satiation after seed storage), and whether seed caching increases the probability of seedling emergence. We found that masting did not result in satiation at the seed removal stage. However, masting decreased the removal of cached seeds, and seed caching dramatically increased the probability of seedling emergence relative to noncached seeds. European beech thus benefits from masting through the satiation of scatterhoarders that occurs only after seeds are removed and cached. Although these findings do not exclude other evolutionary advantages of beech masting, they indicate that fitness benefits of masting extend beyond the most commonly considered advantages of predator satiation and increased pollination efficiency.

  19. Long-term moderate elevation of corticosterone facilitates avian food-caching behaviour and enhances spatial memory.

    Science.gov (United States)

    Pravosudov, Vladimir V

    2003-12-22

    It is widely assumed that chronic stress and corresponding chronic elevations of glucocorticoid levels have deleterious effects on animals' brain functions such as learning and memory. Some animals, however, appear to maintain moderately elevated levels of glucocorticoids over long periods of time under natural energetically demanding conditions, and it is not clear whether such chronic but moderate elevations may be adaptive. I implanted wild-caught food-caching mountain chickadees (Poecile gambeli), which rely at least in part on spatial memory to find their caches, with 90-day continuous time-release corticosterone pellets designed to approximately double the baseline corticosterone levels. Corticosterone-implanted birds cached and consumed significantly more food and showed more efficient cache recovery and superior spatial memory performance compared with placebo-implanted birds. Thus, contrary to prevailing assumptions, long-term moderate elevations of corticosterone appear to enhance spatial memory in food-caching mountain chickadees. These results suggest that moderate chronic elevation of corticosterone may serve as an adaptation to unpredictable environments by facilitating feeding and food-caching behaviour and by improving cache-retrieval efficiency in food-caching birds.

  20. A statistical proxy for sulphuric acid concentration

    Directory of Open Access Journals (Sweden)

    S. Mikkonen

    2011-11-01

    Full Text Available Gaseous sulphuric acid is a key precursor for new particle formation in the atmosphere. Previous experimental studies have confirmed a strong correlation between the number concentrations of freshly formed particles and the ambient concentrations of sulphuric acid. This study evaluates a body of experimental gas phase sulphuric acid concentrations, as measured by Chemical Ionization Mass Spectrometry (CIMS) during six intensive measurement campaigns and one long-term observational period. The campaign datasets were measured in Hyytiälä, Finland, in 2003 and 2007, in San Pietro Capofiume, Italy, in 2009, in Melpitz, Germany, in 2008, in Atlanta, Georgia, USA, in 2002, and in Niwot Ridge, Colorado, USA, in 2007. The long-term data were obtained in Hohenpeissenberg, Germany, during 1998 to 2000. The measured time series were used to construct proximity measures ("proxies") for sulphuric acid concentration by using statistical analysis methods. The objective of this study is to find a proxy for sulphuric acid that is valid in as many different atmospheric environments as possible. Our most accurate and universal formulation of the sulphuric acid concentration proxy uses global solar radiation, SO2 concentration, condensation sink and relative humidity as predictor variables, yielding a correlation measure (R) of 0.87 between observed concentrations and the proxy predictions. Interestingly, the role of the condensation sink in the proxy was only minor, since similarly accurate proxies could be constructed with global solar radiation and SO2 concentration alone. This could be attributed to SO2 being an indicator for anthropogenic pollution, including particulate and gaseous emissions which represent sinks for the OH radical that, in turn, is needed for the formation of sulphuric acid.
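
The construction idea behind such a statistical proxy can be illustrated with a toy fit. The functional form below (radiation × SO2 / condensation sink with a single fitted coefficient) and all the numbers are assumptions for illustration; the paper's actual proxy formulations and fitted coefficients differ.

```python
# Fit a one-coefficient proxy [H2SO4] ≈ k * radiation * SO2 / CS by
# ordinary least squares over synthetic observations.
radiation = [100.0, 200.0, 300.0, 400.0]   # global solar radiation
so2       = [1.0,   0.5,   2.0,   1.5]     # SO2 concentration
cs        = [2.0,   1.0,   4.0,   3.0]     # condensation sink
observed  = [51.0,  99.0,  152.0, 201.0]   # "measured" H2SO4 (synthetic)

# Predictor variable combining the three inputs.
x = [r * s / c for r, s, c in zip(radiation, so2, cs)]

# Least-squares slope through the origin: k = sum(x*y) / sum(x*x).
k = sum(xi * yi for xi, yi in zip(x, observed)) / sum(xi * xi for xi in x)
proxy = [k * xi for xi in x]

# On this synthetic data the proxy tracks the observations within 5%.
assert all(abs(p - o) / o < 0.05 for p, o in zip(proxy, observed))
```

The real study evaluates several such candidate formulations against CIMS measurements and reports their correlation (R) with the observations, rather than a single-site fit like this one.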

  1. Webvise: Browser and Proxy support for open hypermedia structuring mechanisms on the WWW

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennard; Ørbæk, Peter

    1999-01-01

    This paper discusses how to augment the World Wide Web with an open hypermedia service (Webvise) that provides structures such as contexts, links, annotations, and guided tours stored in hypermedia databases external to the Web pages. This includes the ability for users collaboratively to create...... Web pages. Support for providing links to/from parts of non-HTML data, such as sound and movie, will be possible via interfaces to plug-ins and Java-based media players. The hypermedia structures are stored in a hypermedia database, developed from the Devise Hypermedia framework, and the service...... be manipulated and used via special Java applets and a pure proxy server solution is provided for users who only need to browse the structures. A user can create and use the external structures as ‘transparency' layers on top of arbitrary Web pages, the user can switch between viewing pages with one or more...

  2. Consistencia de ejecución: una propuesta no cache coherente

    OpenAIRE

    García, Rafael B.; Ardenghi, Jorge Raúl

    2005-01-01

    The presence of one or more levels of cache memory in modern processors, whose objective is to reduce the effective memory access time, takes on special relevance in a DSM-type multiprocessor environment, given the much higher cost of memory references to remote modules. Clearly, the cache coherence protocol must correspond to the adopted memory consistency model. The sequential consistency model SC, generally accepted as the most natural, together with a series of m...

  3. Implementació d'una Cache per a un processador MIPS d'una FPGA

    OpenAIRE

    Riera Villanueva, Marc

    2013-01-01

    First, the MIPS architecture, the memory hierarchy and the functioning of the cache will be explained briefly. Then, the design and implementation of a memory hierarchy for a MIPS processor implemented in VHDL on an FPGA will be explained.

  4. A Way Memoization Technique for Reducing Power Consumption of Caches in Application Specific Integrated Processors

    OpenAIRE

    Ishihara, Tohru; Fallah, Farzan

    2005-01-01

    This paper presents a technique for eliminating redundant cache-tag and cache-way accesses to reduce power consumption. The basic idea is to keep a small number of Most Recently Used (MRU) addresses in a Memory Address Buffer (MAB) and to omit redundant tag and way accesses when there is a MAB hit. Since the approach keeps only tag and set-index values in the MAB, the energy and area overheads are relatively small eve...
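
The MAB mechanism can be sketched behaviorally as follows. This is an illustrative Python model of the idea only, not the paper's hardware design; the class name, buffer size, and interface are assumptions.

```python
from collections import deque

class MemoryAddressBuffer:
    """Toy sketch of way memoization: remember the last few (tag, set)
    pairs together with the way they hit in, so a MAB hit lets the cache
    skip the energy-costly tag and way-array accesses."""

    def __init__(self, capacity=4):
        # MRU-ordered entries of (tag, set_index, way).
        self.entries = deque(maxlen=capacity)

    def lookup(self, tag, set_index):
        for t, s, way in self.entries:
            if t == tag and s == set_index:
                return way        # MAB hit: tag/way arrays need not be read
        return None               # MAB miss: fall back to a full lookup

    def record(self, tag, set_index, way):
        # Called after a full lookup resolves the way for this address.
        self.entries.appendleft((tag, set_index, way))

mab = MemoryAddressBuffer()
mab.record(tag=0x1A, set_index=3, way=1)
assert mab.lookup(0x1A, 3) == 1   # repeat access: redundant tag check skipped
assert mab.lookup(0x2B, 3) is None
```

Because the buffer stores only tag and set-index values for a handful of addresses, its own lookup is far cheaper than reading every way's tag array, which is where the power saving comes from.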

  5. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this possibility, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also generated. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance turned out to be worse than that of the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server disk speed.

  6. Education for sustainability and environmental education in National Geoparks. EarthCaching - a new method?

    Science.gov (United States)

    Zecha, Stefanie; Regelous, Anette

    2017-04-01

    National Geoparks are restricted areas incorporating educational resources of great importance in promoting education for sustainable development, mobilizing knowledge inherent to the Earth sciences. Different methods can be used to implement education for sustainability. Here we present possibilities for National Geoparks to support sustainability, focusing on new media and EarthCaches, based on the data set of the "EarthCachers International EarthCaching" conference in Goslar in October 2015. Using an empirical study of our own design, we collected current information about the environmental consciousness of EarthCachers. The data set was analyzed using SPSS and statistical methods. Here we present the results and their consequences for National Geoparks.

  7. Reader set encoding for directory of shared cache memory in multiprocessor system

    Science.gov (United States)

    Ahn, Daniel; Ceze, Luis H.; Gara, Alan; Ohmacht, Martin; Xiaotong, Zhuang

    2014-06-10

    In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line.
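
The bitset reader-set encoding lends itself to a compact sketch. The following Python model is illustrative only (class and method names are assumptions, not the patent's design); it shows how a write by one thread is checked against the recorded readers of a line.

```python
class DirectoryLine:
    """Sketch of a directory entry with a bitset reader-set encoding:
    bit i set means speculative thread i has read this line. A write
    conflicts with every *other* thread that has read the line."""

    def __init__(self):
        self.readers = 0          # bitset over speculative thread ids

    def record_read(self, thread_id):
        self.readers |= 1 << thread_id

    def conflicts_on_write(self, writer_id):
        # Clear the writer's own bit; any remaining set bit is a conflict.
        others = self.readers & ~(1 << writer_id)
        return [i for i in range(others.bit_length()) if (others >> i) & 1]

line = DirectoryLine()
line.record_read(0)
line.record_read(2)
assert line.conflicts_on_write(0) == [2]      # thread 2 also read the line
assert line.conflicts_on_write(1) == [0, 2]   # writer 1 conflicts with both
```

A single OR on reads and a mask-and-test on writes is what makes this encoding cheap enough to sit in the directory lookup path of a shared cache.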

  8. Triangular Energy-Saving Cache-Based Routing Protocol by Energy Sieving

    OpenAIRE

    Chiu-Ching Tuan; Yi-Chao Wu

    2012-01-01

    In wireless ad hoc networks, designing an energy-efficient routing protocol is a major issue since nodes are energy limited. To address energy issue, we proposed a triangular energy-saving cached-based routing protocol by energy sieving (TESCES). TESCES offered a grid leader election by energy sieving (GLEES), a cache-based grid leader maintenance (CGLM), and a triangular energy-saving routing discovery (TESRD). In GLEES, only few nodes join in grid leader election to be elected as a grid lea...

  9. Enabling web services to consume and produce large datasets

    NARCIS (Netherlands)

    Koulouzis, S.; Cushing, R.; Karasavvas, K.A.; Belloum, A.; Bubak, M.

    2012-01-01

    Service-oriented architectures and Web services are well-established paradigms for developing distributed applications. However, Web services face problems when accessing, moving, and processing large datasets. To address this problem, the authors present ProxyWS, which uses myriad protocols to

  10. Web Engineering

    OpenAIRE

    Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo

    2003-01-01

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...

  11. Munchausen Syndrome by Proxy: Medical Diagnostic Criteria.

    Science.gov (United States)

    Rosenberg, Donna Andrea

    2003-01-01

    Medical diagnostic criteria for Munchausen Syndrome by Proxy (a persistent fabrication by one individual of illness in another) are presented. Since the strength of the known facts may vary from case to case, diagnostic criteria are given for a definitive diagnosis, a possible diagnosis, an inconclusive determination, and the definitely excluded…

  12. Munchausen Syndrome by Proxy: Evaluation and Treatment.

    Science.gov (United States)

    Parnell, Teresa F.; Day, Deborah O.

    Munchausen Syndrome by Proxy (MSBP) is characterized by a significant caretaker, usually a mother, deliberately inducing and/or falsely reporting illness in a child. The potentially fatal outcome of undetected MSBP makes the understanding of this syndrome gravely important. Early detection and effective intervention can be accomplished through the…

  13. Munchausen Syndrome by Proxy: Social Work's Role.

    Science.gov (United States)

    Mercer, Susan O.; Perdue, Jeanette D.

    1993-01-01

    Describes Munchausen syndrome by proxy, diagnosis used to describe variation of child abuse whereby parent or adult caregiver fabricates medical history or induces symptoms in child, or both, resulting in unnecessary examinations, treatments, hospitalizations, and even death. Reviews assessment procedures, provides case studies, and describes…

  14. Munchausen Syndrome by Proxy: A Family Affair.

    Science.gov (United States)

    Mehl, Albert L.; And Others

    1990-01-01

    The article reports on a case of Munchausen syndrome by proxy in which chronic illicit insulin was administered to a one-year-old child by her mother. Factitious illnesses continued despite psychiatric intervention. Retrospective review of medical records suggested 30 previous episodes of factitious illness within the family. (DB)

  15. The Syndrome of Munchausen by Proxy.

    Science.gov (United States)

    Jones, David P. H.

    1994-01-01

    This editorial introduces two articles on Munchausen by Proxy syndrome (the induction of an appearance or state of physical ill health in a child, by the caretaker, and the child's subsequent presentation to health professionals for diagnosis and/or treatment). The severity of the caretaker's psychological disturbance and the serious effects on…

  16. Analisis Perancangan PC (Personal Computer Router Proxy Untuk Menggabungkan Tiga Jalur Koneksi Di Indospeed

    Directory of Open Access Journals (Sweden)

    Bangun Harizal

    2012-05-01

    Full Text Available Routers are very important for computer networks. One can buy a commercial router product, but one can also design one's own router using a personal computer. Mikrotik is one router manufacturer that provides products in the form of hardware or software. To design a router at lower cost, the Ubuntu operating system can be used; it is open source, provided free of charge by its maker, and runs on the personal computers commonly used in homes and offices. Combining the performance of both types of routers can also compensate for the shortcomings of each. With a proxy on a local network, bandwidth usage can be reduced, because several websites are cached and stored on the proxy; if the same website is accessed by another user, the router serves it directly from the proxy to the requesting computer.
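
The bandwidth-saving mechanism described above reduces to a fetch-once, serve-many pattern. A minimal Python sketch of the idea follows; function and variable names are illustrative, and a real proxy such as Squid would additionally handle expiry, validation, and cacheability headers.

```python
# Minimal caching-proxy sketch: fetch a URL from the origin once,
# store the response, and serve repeat requests from the local cache.
store = {}

def fetch_origin(url):
    # Stand-in for a real upstream HTTP request over the WAN link.
    return f"<html>content of {url}</html>"

def proxy_get(url):
    if url not in store:           # first request: go out to the origin
        store[url] = fetch_origin(url)
    return store[url]              # repeat requests never use the WAN link

first = proxy_get("http://example.com/index.html")
second = proxy_get("http://example.com/index.html")
assert first == second             # same content served both times
assert len(store) == 1             # only one origin fetch occurred
```

Every request after the first for the same URL is answered from `store`, which is exactly why a LAN proxy saves upstream bandwidth.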

  17. Exploitation of pocket gophers and their food caches by grizzly bears

    Science.gov (United States)

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perideridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  18. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  19. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  20. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  1. Use of the sun as a heading indicator when caching and recovering in a wild rodent

    Science.gov (United States)

    Samson, Jamie; Manser, Marta B.

    2016-01-01

    A number of diurnal species have been shown to use directional information from the sun to orientate. The use of the sun in this way has been suggested to occur in either a time-dependent (relying on specific positional information) or a time-compensated manner (a compass that adjusts itself over time with the shifts in the sun’s position). However, some interplay may occur between the two where a species could also use the sun in a time-limited way, whereby animals acquire certain information about the change of position, but do not show full compensational abilities. We tested whether Cape ground squirrels (Xerus inauris) use the sun as an orientation marker to provide information for caching and recovery. This species is a social sciurid that inhabits arid, sparsely vegetated habitats in Southern Africa, where the sun is nearly always visible during the diurnal period. Due to the lack of obvious landmarks, we predicted that they might use positional cues from the sun in the sky as a reference point when caching and recovering food items. We provide evidence that Cape ground squirrels use information from the sun’s position while caching and reuse this information in a time-limited way when recovering these caches. PMID:27580797

  2. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Full Text Available Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and from industrial practitioners. In fact, the very notion of introducing randomness and probabilities in time-critical systems has caused strenuous debates owing to the apparent clash that this idea has with the strictly deterministic view traditionally held for those systems. A paper recently published in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should also be provided. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances in the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, causing them to be, when used in conjunction with PTA, a serious competitor to conventional designs.

  3. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and herewith a decrease of the performance. To enhance the performance often caching mechanisms are adopted. However, the implementations of these

  4. Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park

    Science.gov (United States)

    McEuen, Amy B.; Steele, Michael A.

    2012-01-01

    We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…

  5. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
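
For background, the NDT metric referred to above is commonly defined in the cache-aided network literature as follows; the record itself does not state the formula, so this definition is quoted as an assumption about the authors' setup rather than taken from it:

```latex
\delta \;=\; \lim_{\mathrm{SNR}\to\infty}\; \lim_{L\to\infty}\; \sup\; \frac{T(\mathrm{SNR},\, L)}{L/\log \mathrm{SNR}}
```

where $T(\mathrm{SNR}, L)$ is the worst-case time to deliver a requested file of $L$ bits, and $L/\log\mathrm{SNR}$ is the delivery time of the interference-free reference system mentioned in the abstract, so that $\delta = 1$ corresponds to that ideal baseline.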

  6. CAChe Molecular Modeling: A Visualization Tool Early in the Undergraduate Chemistry Curriculum.

    Science.gov (United States)

    Crouch, R. David; And Others

    1996-01-01

    Describes a "Synthesis and Reactivity" curriculum that focuses on the correlation of laboratory experiments with lecture topics and the extension of laboratory exercises beyond the usual four-hour period. Highlights experiments developed and an out-of-class computational chemistry exercise using CAChe, a versatile molecular modeling…

  7. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

    The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framew...

  8. Two-dimensional cache-oblivious sparse matrix–vector multiplication

    NARCIS (Netherlands)

    Yzelman, A.N.|info:eu-repo/dai/nl/313872643; Bisseling, R.H.|info:eu-repo/dai/nl/304828068

    2011-01-01

    In earlier work, we presented a one-dimensional cache-oblivious sparse matrix–vector (SpMV) multiplication scheme which has its roots in one-dimensional sparse matrix partitioning. Partitioning is often used in distributed-memory parallel computing for the SpMV multiplication, an important kernel in

  9. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Dependability Aspects Regarding the Cache Level of a Memory Hierarchy using Hamming Codes

    Science.gov (United States)

    Novac, O.; Vari-Kakas, St.; Novac, Mihaela; Vladu, Ecaterina; Indrie, Liliana

    In this paper we apply a SEC-DED code to the cache level of a memory hierarchy. From the category of SEC-DED (Single Error Correction, Double Error Detection) codes we select the Hamming code. For correction of a single-bit error we use a syndrome decoder, a syndrome generator and the check-bits generator circuit.
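The SEC-DED construction named in this record is standard. As an illustration only (not the authors' circuit), a minimal extended Hamming (8,4) code, with a syndrome-based corrector and double-error detection, might look like this sketch:

```python
def encode(d):
    """Encode 4 data bits as an extended Hamming (8,4) codeword:
    positions 1..7 hold the Hamming(7,4) code, bit 8 is overall parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # check bit covering positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # check bit covering positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # check bit covering positions 4,5,6,7
    code = [p1, p2, d1, p3, d2, d3, d4]
    overall = 0
    for b in code:
        overall ^= b
    return code + [overall]

def decode(code):
    """Return (data bits, status): correct any single-bit error,
    flag double errors (non-zero syndrome but even overall parity)."""
    c = code[:7]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # equals the erroneous position, 0 if none
    parity_ok = (sum(code) % 2 == 0)
    if syndrome == 0 and parity_ok:
        status = "ok"
    elif not parity_ok:
        if syndrome != 0:             # single error in positions 1..7
            c[syndrome - 1] ^= 1      # else the error was in the parity bit itself
        status = "corrected"
    else:
        status = "double-error"       # detectable but not correctable
    return [c[2], c[4], c[5], c[6]], status

codeword = encode([1, 0, 1, 1])
```

Flipping any one of the eight bits of `codeword` still decodes to `[1, 0, 1, 1]` with status `"corrected"`; flipping two bits yields `"double-error"`.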

  11. CAChe Molecular Modeling: A Visualization Tool Early in the Undergraduate Chemistry Curriculum

    Science.gov (United States)

    Crouch, R. David; Holden, Michael S.; Samet, Cindy

    1996-10-01

    In Dickinson's chemistry curriculum, "Synthesis & Reactivity" replaces the traditional organic chemistry sequence and begins in the second semester of the freshman year. A key aspect of our sequence is the correlation of laboratory experiments with lecture topics and the extension of laboratory exercises beyond the usual 4-hour period. With this goal in mind, a number of "Synthesis & Reactivity" experiments have been developed that include an out-of-class computational chemistry exercise using CAChe (1), a versatile molecular modeling software package. Because the first semester of "Synthesis & Reactivity" has a large number of freshmen, emphasis is placed on developing an insight into where nucleophiles and electrophiles might attack a molecule. The Visualizer+ routine in CAChe generates striking graphical images of these sites, and the reaction of NBS/H2O with 3-sulfolene (2) presents an excellent opportunity to introduce CAChe into an experiment. Before the laboratory, students are introduced to CAChe to determine how NBS might interact with a nucleophile such as an alkene. Students then return to the laboratory to perform the bromohydrin synthesis but are asked to consider what the regiochemistry would be were the alkene not symmetric. Specifically, students are instructed to visit the computer laboratory during the week and perform calculations on the bromonium ion formed from 2-methylpropene to determine where a nucleophilic H2O molecule might attack. The MOPAC routine in CAChe provides data that are converted to a graphical depiction of the frontier density of the intermediate, indicating potential reactive sites based on electron distribution of orbitals near the HOMO and LUMO. When these data are manipulated by Visualizer+, the obvious conclusion is that the nucleophilic water molecule should attack the more highly substituted carbon of the bromonium ion (Fig. 1) and generate one regioisomer. Figure 1. Relative nucleophilic susceptibilities of the

  12. Where is meaning when form is gone? Knowledge representation on the Web

    Directory of Open Access Journals (Sweden)

    Terrence A. Brooks

    2001-01-01

    Full Text Available This essay argues that legacy methods of knowledge representation do not transfer well to a Web environment. Legacy methods assume discrete documents that persist through time. Web documents are often products of dynamic scripts, database manipulations and caching or distributed processing. The size and rate of growth of the Web prohibits labor-intensive methods such as manual cataloging. This essay suggests that an appropriate future home of content-bearing metadata is extensible markup technologies. Meaning can be incorporated in Extensible Markup Language (XML) in various ways, such as semantically rich markup tags, attributes and links among XML sources.

  13. Quantifying animal movement for caching foragers: the path identification index (PII) and cougars, Puma concolor

    Science.gov (United States)

    Ironside, Kirsten E.; Mattson, David J.; Theimer, Tad; Jansen, Brian; Holton, Brandon; Arundel, Terry; Peters, Michael; Sexton, Joseph O.; Edwards, Thomas C.

    2017-01-01

    Relocation studies of animal movement have focused on directed versus area restricted movement, which rely on correlations between step-length and turn angles, along with a degree of stationarity through time to define behavioral states. Although these approaches may work well for grazing foraging strategies in a patchy landscape, species that do not spend a significant amount of time searching out and gathering small dispersed food items, but instead feed for short periods on large, concentrated sources or cache food, produce movements that may be difficult to analyze using turning and velocity alone. We use GPS telemetry collected from a prey-caching predator, the cougar (Puma concolor), to test whether adding movement metrics capturing site recursion, alongside the more traditional velocity and turning, improves the ability to identify behaviors. We evaluated our movement index's ability to identify behaviors using field investigations. We further tested for statistical stationarity across behaviors for use of topographic view-sheds. We found little correlation between turn angle, velocity, tortuosity, and site fidelity and combined them into a movement index used to identify movement paths (temporally autocorrelated movements) related to fast directed movements (taxis), area restricted movements (search), and prey caching (foraging). Changes in the frequency and duration of these movements were helpful for identifying seasonal activities such as migration and denning in females. Comparing field investigations of cougar activities to behavioral classes defined using the movement index, we found an overall classification accuracy of 81%. Changes in behaviors resulted in changes in how cougars used topographic view-sheds, showing statistical non-stationarity over time. The movement index shows promise for identifying behaviors in species that frequently return to specific locations such as food caches, watering holes, or dens, and highlights the role

  14. A Content Standard for Computational Models; Digital Rights Management (DRM) Architectures; A Digital Object Approach to Interoperable Rights Management: Finely-Grained Policy Enforcement Enabled by a Digital Object Infrastructure; LOCKSS: A Permanent Web Publishing and Access System; Tapestry of Time and Terrain.

    Science.gov (United States)

    Hill, Linda L.; Crosier, Scott J.; Smith, Terrence R.; Goodchild, Michael; Iannella, Renato; Erickson, John S.; Reich, Vicky; Rosenthal, David S. H.

    2001-01-01

    Includes five articles. Topics include requirements for a content standard to describe computational models; architectures for digital rights management systems; access control for digital information objects; LOCKSS (Lots of Copies Keep Stuff Safe) that allows libraries to run Web caches for specific journals; and a Web site from the U.S.…

  15. Legal requirements governing proxy voting in Denmark

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2008-01-01

    The requirements in Danish company law concerning proxy voting in companies whose shares have been accepted for listing on a regulated market have been successively tightened in recent years, and corporate governance principles have also led to the introduction of several requirements concerning proxy holders. A thorough knowledge of these requirements is important not only for the listed companies but also for their advisers and investors in Denmark and abroad. This article considers these requirements as well as the additional requirements which will derive from Directive 2007/36 on the exercise of shareholders' rights in listed companies, which must be implemented by 3 August 2009. It is pointed out that companies may usefully provide in their articles of association for both the existing and the forthcoming requirements at this early stage.

  16. Observable Proxies For 26 Al Enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Fryer, Christopher L [Los Alamos National Laboratory; Young, Patrick A [Los Alamos National Laboratory; Ellinger, Carola I [ASU; Arnett, William D [UNIV ARIZONA

    2008-01-01

    We consider the cospatial production of elements in supernova explosions to find observationally detectable proxies for enhancement of {sup 26}Al in supernova ejecta and stellar systems. Using four progenitors we explore a range of 1D explosions at different energies and an asymmetric 3D explosion. We find that the most reliable indicator of the presence of {sup 26}Al in unmixed ejecta is a very low S/Si ratio ({approx} 0.05). Production of N in O/S/Si-rich regions is also indicative. The biologically important element P is produced at its highest abundance in the same regions. Proxies should be detectable in supernova ejecta with high spatial resolution multi wavelength observations, but the small absolute abundance of material injected into a proto-planetary disk makes detection unlikely in existing or forming stellar/planetary systems.

  17. Non-destructive foraminiferal paleoclimatic proxies: A brief insight

    Digital Repository Service at National Institute of Oceanography (India)

    Saraswat, R.

    Non-Destructive Foraminiferal Paleoclimatic Proxies: A Brief Insight The knowledge of past climate can help us to understand imminent climatic changes. Oceans are the vast archives of past climate. Various indirect techniques termed as proxies...

  18. 12 CFR 7.2002 - Director or attorney as proxy.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Director or attorney as proxy. 7.2002 Section 7... OPERATIONS Corporate Practices § 7.2002 Director or attorney as proxy. Any person or group of persons, except the bank's officers, clerks, tellers, or bookkeepers, may be designated to act as proxy. The bank's...

  19. Physicians' Involvement with the New York State Health Care Proxy

    Science.gov (United States)

    Heyman, Janna C.; Sealy, Yvette M.

    2011-01-01

    This study examined physicians' attitude, involvement, and perceived barriers with the health care proxy. A cross sectional, correlational design was used to survey practicing physicians (N = 70). Physicians had positive attitudes toward the health care proxy and indicated that the most significant barriers to health care proxy completion were…

  20. Earth Science Mining Web Services

    Science.gov (United States)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  1. Dark Web

    CERN Document Server

    Chen, Hsinchun

    2012-01-01

    The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical

  2. Web 25

    DEFF Research Database (Denmark)

    Web 25: Histories from the First 25 Years of the World Wide Web celebrates the 25th anniversary of the Web. Since the beginning of the 1990s, the Web has played an important role in the development of the Internet as well as in the development of most societies at large, from its early grey and blue webpages introducing the hyperlink for a wider public, to today's multifaceted uses of the Web as an integrated part of our daily lives. This is the first book to look back at 25 years of Web evolution, and it tells some of the histories about how the Web was born and has developed. It takes ... are presented alongside methodological reflections on how the past Web can be studied, as well as accounts of how one of the most important source types of our time is provided, namely the archived Web. Web 25: Histories from the First 25 Years of the World Wide Web is a must-read for anyone interested in how...

  3. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  4. Operational Use of OGC Web Services at the Met Office

    Science.gov (United States)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase Met Office future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS), to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: - WMS requests are made for 256x256 tiles for fixed areas and zoom levels; - a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for the new tiles; - Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; - the Invent Weather Map Viewer uses the Google Maps API to request tiles from Edge Servers. (We would expect to make use of the Web Map Tiling Service, when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data.
These are locally rendered as maps or
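The Tile Cache tier described in the record above can be illustrated with a minimal in-process LRU cache keyed by tile coordinates; `render_tile` is a hypothetical stand-in for a WMS GetMap request, not the Met Office's actual service:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache keyed by (layer, zoom, x, y) tile coordinates."""
    def __init__(self, capacity, fetch):
        self.capacity = capacity      # max number of 256x256 tiles kept
        self.fetch = fetch            # fallback on a miss, e.g. a WMS GetMap call
        self.tiles = OrderedDict()
        self.hits = self.misses = 0

    def get(self, layer, zoom, x, y):
        key = (layer, zoom, x, y)
        if key in self.tiles:
            self.tiles.move_to_end(key)       # mark as most recently used
            self.hits += 1
            return self.tiles[key]
        self.misses += 1
        tile = self.fetch(*key)               # miss: render via the back end
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)    # evict least recently used tile
        return tile

# Hypothetical renderer standing in for the WMS back end.
def render_tile(layer, zoom, x, y):
    return f"tile:{layer}/{zoom}/{x}/{y}"

cache = TileCache(capacity=1000, fetch=render_tile)
cache.get("pressure", 5, 10, 12)   # miss: rendered and cached
cache.get("pressure", 5, 10, 12)   # hit: served from cache
```

An edge tier such as Akamai plays the same role one level further out: requests for popular fixed tiles never reach the origin at all.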

  5. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  6. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.

  7. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high-leakage and low-density, researchers have explored use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from SPEC CPU2006 suite and HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
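The core idea of the record above, periodically relocating a write-intensive block to a less-worn way of its set, can be sketched in a toy simulation. The fixed-period, move-to-least-written policy below is a simplification for illustration, not the paper's exact algorithm:

```python
def simulate(writes, ways=4, swap_period=None):
    """Write one hot block `writes` times into a `ways`-way cache set and
    return the per-way write counts. With swap_period set, the block is
    relocated to the least-written way every swap_period writes (a
    simplified stand-in for EqualChance's in-set wear-leveling)."""
    counts = [0] * ways
    slot = 0                                  # physical way holding the hot block
    for i in range(1, writes + 1):
        counts[slot] += 1
        if swap_period and i % swap_period == 0:
            slot = counts.index(min(counts))  # relocate (swap cost ignored here)
    return counts

baseline = simulate(1000, ways=4)                  # all wear lands on one way
leveled = simulate(1000, ways=4, swap_period=50)   # wear spread across the set
```

Without leveling, one NVM way absorbs all 1000 writes and wears out first; with periodic relocation each way absorbs 250, which is the intra-set variation reduction that extends cache lifetime.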

  8. Ecosystem services from keystone species: diversionary seeding and seed-caching desert rodents can enhance Indian ricegrass seedling establishment

    Science.gov (United States)

    Longland, William; Ostoja, Steven M.

    2013-01-01

    Seeds of Indian ricegrass (Achnatherum hymenoides), a native bunchgrass common to sandy soils on arid western rangelands, are naturally dispersed by seed-caching rodent species, particularly Dipodomys spp. (kangaroo rats). These animals cache large quantities of seeds when mature seeds are available on or beneath plants and recover most of their caches for consumption during the remainder of the year. Unrecovered seeds in caches account for the vast majority of Indian ricegrass seedling recruitment. We applied three different densities of white millet (Panicum miliaceum) seeds as “diversionary foods” to plots at three Great Basin study sites in an attempt to reduce rodents' over-winter cache recovery so that more Indian ricegrass seeds would remain in soil seedbanks and potentially establish new seedlings. One year after diversionary seed application, a moderate level of Indian ricegrass seedling recruitment occurred at two of our study sites in western Nevada, although there was no recruitment at the third site in eastern California. At both Nevada sites, the number of Indian ricegrass seedlings sampled along transects was significantly greater on all plots treated with diversionary seeds than on non-seeded control plots. However, the density of diversionary seeds applied to plots had a marginally non-significant effect on seedling recruitment, and it was not correlated with recruitment patterns among plots. Results suggest that application of a diversionary seed type that is preferred by seed-caching rodents provides a promising passive restoration strategy for target plant species that are dispersed by these rodents.

  9. Adaptability in CORBA: The Mobile Proxy Approach

    DEFF Research Database (Denmark)

    Aziz, B.; Jensen, Christian D.

    2000-01-01

    Adaptability is one of the most important challenges in modern distributed systems. It may be defined as the ease with which a software application satisfies the different system constraints and the requirements of users and other applications. Adaptability is needed because distributed systems are inherently open, heterogeneous, and dynamic environments integrating a wide range of platforms, operating systems and applications from a number of different sources. In this paper, we propose to use mobile proxies to provide adaptability in distributed applications integrated using the CORBA technology.

  10. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.
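The fragment, allocate, and reassemble steps can be sketched in a toy form that omits the paper's authorization policies and security-aware placement; the round-robin placement below is an assumption for illustration only:

```python
def fragment(data: bytes, n_fragments: int):
    """Split a file into n contiguous fragments, so that no single
    low-security server ever stores the whole file."""
    size = -(-len(data) // n_fragments)   # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n_fragments)]

def allocate(fragments, servers, replicas=2):
    """Place each fragment on `replicas` distinct servers. Round-robin
    placement stands in for the paper's security- and load-aware policy."""
    placement = {}
    for i, frag in enumerate(fragments):
        chosen = [servers[(i + r) % len(servers)] for r in range(replicas)]
        placement[i] = (frag, chosen)
    return placement

def reassemble(placement):
    """A reader with access to one replica of every fragment recovers the file."""
    return b"".join(placement[i][0] for i in sorted(placement))

data = b"confidential-report-contents"
placement = allocate(fragment(data, 4), ["s1", "s2", "s3"], replicas=2)
```

Replication gives availability (any one server per fragment suffices) and spreads read load, while fragmentation keeps each individual server's exposure partial.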

  11. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array of data elements, without the use of pointers. The structure also improves space utilization. For storing n elements, our proposal uses (1 + ε)n times the element size of memory, and performs searches in worst case O(log_B n) memory transfers, updates in amortized O((log² n)/(εB)) memory transfers ... oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory.
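The van Emde Boas layout mentioned in the record can be sketched for an implicit complete binary tree; the bottom-heavy height split used here is one common convention, not necessarily the exact one of the cited work:

```python
def veb(r, h):
    """Return the van Emde Boas layout order of a complete binary tree of
    height h whose root has BFS index r (children of node i are 2i, 2i+1).
    Recursively laying out the top half of the height before the bottom
    subtrees keeps every small subtree contiguous in the array, so a
    root-to-leaf search touches O(log_B n) cache blocks for any block
    size B, without the code ever knowing B."""
    if h == 1:
        return [r]
    t = h // 2            # height of the top recursive subtree
    b = h - t             # height of the bottom subtrees
    order = veb(r, t)     # lay out the top tree first
    # Roots of the bottom trees are the children of the top tree's leaves,
    # which occupy BFS indices r*2^(t-1) .. r*2^(t-1) + 2^(t-1) - 1.
    top_leaves = [r * (1 << (t - 1)) + i for i in range(1 << (t - 1))]
    for leaf in top_leaves:
        order += veb(2 * leaf, b)
        order += veb(2 * leaf + 1, b)
    return order

layout = veb(1, 4)   # 15-node complete tree
# [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]
```

Compare with BFS order `1..15`: in the vEB order, each three-node subtree (e.g. `4, 8, 9`) sits in adjacent array slots, so it typically costs a single memory transfer.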

  12. Real-Time Scheduling in Heterogeneous Systems Considering Cache Reload Time Using Genetic Algorithms

    Science.gov (United States)

    Miryani, Mohammad Reza; Naghibzadeh, Mahmoud

    Since optimal assignment of tasks in a multiprocessor system is, in almost all practical cases, an NP-hard problem, in recent years some algorithms based on genetic algorithms have been proposed. Some of these algorithms have considered real-time applications with multiple objectives, total tardiness, completion time, etc. Here, we propose a suboptimal static scheduler of nonpreemptable tasks in hard real-time heterogeneous multiprocessor systems considering time constraints and cache reload time. The approach makes use of genetic algorithm to minimize total completion time and number of processors used, simultaneously. One important issue which makes this research different from previous ones is cache reload time. The method is implemented and the results are compared against a similar method.

  13. Modifying dementia risk and trajectories of cognitive decline in aging: the Cache County Memory Study.

    Science.gov (United States)

    Welsh-Bohmer, Kathleen A; Breitner, John C S; Hayden, Kathleen M; Lyketsos, Constantine; Zandi, Peter P; Tschanz, Joann T; Norton, Maria C; Munger, Ron

    2006-07-01

    The Cache County Study of Memory, Health, and Aging, more commonly referred to as the "Cache County Memory Study (CCMS)," is a longitudinal investigation of aging and Alzheimer's disease (AD) based in an exceptionally long-lived population residing in northern Utah. The study, begun in 1994, has followed an initial cohort of 5,092 older individuals (many over age 84) and has examined the development of cognitive impairment and dementia in relation to genetic and environmental antecedents. This article summarizes the major contributions of the CCMS towards the understanding of mild cognitive disorders and AD across the lifespan, underscoring the role of common health exposures in modifying dementia risk and trajectories of cognitive change. The study, now in its fourth wave of ascertainment, illustrates the role of population-based approaches in informing testable models of cognitive aging and Alzheimer's disease.

  14. Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs

    OpenAIRE

    Komar, M.; Petrov, V.; K. Borunova; D. Moltchanov; E. Koucheryavy

    2015-01-01

    In this paper, the principal architecture of a general-purpose CPU and its main components are discussed, the evolution of CPUs is considered, and drawbacks that prevent further CPU development are mentioned. Further, solutions proposed so far are addressed and a new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables a reliable interaction between cores in multicore CPUs using the terahertz band, 0.1-10 THz. The presented architecture addresses the scalabili...

  15. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real‐time systems need a time‐predictable computing platform to enable static worst‐case execution time (WCET) analysis. All performance‐enhancing features need to be WCET analyzable. However, standard data caches containing heap‐allocated data are very hard to analyze statically. In this pa...... result in overly pessimistic WCET estimations. We therefore believe that an early architecture exploration by means of static timing analysis techniques helps to identify configurations suitable for hard real‐time systems....

  16. Preliminary pumping strategy analyses for southeastern Cache Valley, Utah and river baseflow impacts

    OpenAIRE

    Chowdhury, Shyamal B.; Peralta, R. C.

    1995-01-01

    US/REMAX, a linear optimization model for groundwater management, is used to compute preliminary optimal sustained groundwater pumping increases for southeastern Cache Valley. US/REMAX employs the response matrix method of representing system response to stimuli as constraint equations within an optimization problem. The management objective is to maximize groundwater extraction at four specified locations subject to constraints on aquifer potentiometric head, aquifer/river interflow, and the

  17. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  18. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Full Text Available Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  19. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available The texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space skipping techniques in scenarios that need to render large dynamic volumes at a low image resolution. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.
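
    The warp-coherence argument in this abstract can be illustrated with a toy model (this is not the paper's implementation; the volume size, cache-line size, and the two thread-to-pixel mappings below are all hypothetical): when the 32 threads of a warp sample adjacent voxels, their samples fall on few cache lines, whereas scattered samples touch one line each.

```python
# Toy model of warp-level texture-cache locality (hypothetical sizes).
VOL_DIM = 256          # cubic volume, side length in voxels
LINE_VOXELS = 16       # voxels per hypothetical cache line

def voxel_address(x, y, z):
    """Flat address of a voxel in a row-major 3D volume."""
    return (z * VOL_DIM + y) * VOL_DIM + x

def lines_touched(coords):
    """Number of distinct cache lines covered by one warp's samples."""
    return len({voxel_address(x, y, z) // LINE_VOXELS for x, y, z in coords})

# Mapping A: 32 consecutive pixels of one image row -> adjacent x coordinates.
row_mapped = [(tid, 100, 50) for tid in range(32)]

# Mapping B: each thread samples a different z slice -> addresses far apart.
strided = [(0, 100, tid) for tid in range(32)]

print(lines_touched(row_mapped))  # -> 2  (coherent: few lines per warp)
print(lines_touched(strided))     # -> 32 (incoherent: one line per thread)
```

    Both mappings fetch the same number of samples; only the number of cache lines a warp touches per step differs, which is the effect the Warp Marching strategy exploits.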

  20. The role of seed mass on the caching decision by agoutis, Dasyprocta leporina (Rodentia: Agoutidae)

    Directory of Open Access Journals (Sweden)

    Mauro Galetti

    2010-06-01

    Full Text Available It has been shown that the local extinction of large-bodied frugivores may cause cascading consequences for plant recruitment and overall plant diversity. However, to what extent the resilient mammals can compensate the role of seed dispersal in defaunated sites is poorly understood. Caviomorph rodents, especially Dasyprocta spp., are usually resilient frugivores in hunted forests and their seed caching behavior may be important for many plant species which lack primary dispersers. We compared the effect of the variation in seed mass of six vertebrate-dispersed plant species on the caching decision by the red-rumped agoutis Dasyprocta leporina Linnaeus, 1758 in a land-bridge island of the Atlantic forest, Brazil. We found a strong positive effect of seed mass on seed fate and dispersal distance, but there was a great variation between species. Agoutis never cached seeds smaller than 0.9 g and larger seeds were dispersed for longer distances. Therefore, agoutis can be important seed dispersers of large-seeded species in defaunated forests.

  1. Memory for multiple cache locations and prey quantities in a food-hoarding songbird.

    Science.gov (United States)

    Armstrong, Nicola; Garland, Alexis; Burns, K C

    2012-01-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above-chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3, and 4 cache sites for 1, 10, and 60 s. Overall results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items over retention intervals of up to 1 min without training.

  2. Improving Image Processing Systems by Using Software Simulated LRU Cache Algorithms

    Directory of Open Access Journals (Sweden)

    Cosmin CIORANU

    2012-01-01

    Full Text Available Today’s scientific progress is closely related to data processing. Processing is implemented using algorithms, but in order to produce a result, algorithms need data, and data are generated by sensors, particularly satellite imagery or collaborative GIS platforms. Progress has made those image-capturing sensors more and more accurate; the generated data are therefore becoming larger and larger. The problem is mostly related to the operating system's, and sometimes the software design's, inability to manage contiguous spaces of memory. In an ironic turn of events, those data sometimes cannot be held all at once in a computer system to be analyzed. A solution needed to be devised to overcome this problem, simple at first sight but complex in implementation. The answer is somewhat hidden, but it has been around since the birth of computer science: a memory cache, which is at its origins basically a fast memory. We can adapt this concept in software programming by identifying the problem and coming up with an implementation. The data cache can be implemented in many ways; here we present one based on the LRU (least recently used) algorithm, designed mostly to handle three-dimensional arrays, called 3dCache, which is widely compatible with software packages that support external tools such as Matlab or a programming environment like C++.
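
    The article does not give the internals of 3dCache; a minimal sketch of the underlying LRU idea, with a hypothetical chunk loader and capacity, looks like this: keep only the most recently used chunks of a large array in memory and evict the least recently used one on overflow.

```python
from collections import OrderedDict

class LRUChunkCache:
    """Minimal LRU cache sketch: hold at most `capacity` chunks of a large
    data set in memory, reloading evicted chunks from slow storage on demand."""

    def __init__(self, capacity, load_chunk):
        self.capacity = capacity
        self.load_chunk = load_chunk          # e.g. reads a chunk from disk
        self._chunks = OrderedDict()          # key -> chunk, oldest first

    def get(self, key):
        if key in self._chunks:
            self._chunks.move_to_end(key)     # hit: mark most recently used
            return self._chunks[key]
        chunk = self.load_chunk(key)          # miss: fetch from slow storage
        self._chunks[key] = chunk
        if len(self._chunks) > self.capacity:
            self._chunks.popitem(last=False)  # evict least recently used
        return chunk

# Hypothetical usage: cache at most 2 chunks of a simulated on-disk volume.
loads = []
cache = LRUChunkCache(2, lambda k: loads.append(k) or f"chunk-{k}")
cache.get(0); cache.get(1); cache.get(0)   # second get(0) is a hit
cache.get(2)                               # evicts chunk 1 (least recent)
cache.get(1)                               # miss again: must be reloaded
print(loads)  # -> [0, 1, 2, 1]
```

    The eviction order (chunk 1, not chunk 0) is what distinguishes LRU from a plain FIFO buffer: the hit on chunk 0 refreshed its recency.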

  3. Use of diuretics is associated with reduced risk of Alzheimer's disease: the Cache County Study.

    Science.gov (United States)

    Chuang, Yi-Fang; Breitner, John C S; Chiu, Yen-Ling; Khachaturian, Ara; Hayden, Kathleen; Corcoran, Chris; Tschanz, JoAnn; Norton, Maria; Munger, Ron; Welsh-Bohmer, Kathleen; Zandi, Peter P

    2014-11-01

    Although the use of antihypertensive medications has been associated with reduced risk of Alzheimer's disease (AD), it remains unclear which class provides the most benefit. The Cache County Study of Memory Health and Aging is a prospective longitudinal cohort study of dementing illnesses among the elderly population of Cache County, Utah. Using waves I to IV data of the Cache County Study, 3417 participants had a mean of 7.1 years of follow-up. Time-varying use of antihypertensive medications, including different classes of diuretics, angiotensin converting enzyme inhibitors, β-blockers, and calcium channel blockers, was used to predict the incidence of AD using Cox proportional hazards analyses. During follow-up, 325 AD cases were ascertained with a total of 23,590 person-years. Use of any antihypertensive medication was associated with lower incidence of AD (adjusted hazard ratio [aHR], 0.77; 95% confidence interval [CI], 0.61-0.97). Among different classes of antihypertensive medications, thiazide (aHR, 0.7; 95% CI, 0.53-0.93) and potassium-sparing diuretics (aHR, 0.69; 95% CI, 0.48-0.99) were associated with the greatest reduction of AD risk. Thiazide and potassium-sparing diuretics were associated with decreased risk of AD. The inverse association of potassium-sparing diuretics confirms an earlier finding in this cohort, now with longer follow-up, and merits further investigation. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Munchausen syndrome by proxy: ongoing clinical challenges.

    Science.gov (United States)

    Squires, Janet E; Squires, Robert H

    2010-09-01

    In 1977, Roy Meadow, a pediatric nephrologist, first described a condition he subsequently coined Munchausen syndrome by proxy. The classic form involves a parent or other caregiver who inflicts injury or induces illness in a child, deceives the treating physician with fictitious or exaggerated information, and perpetuates the deception for months or years. A related form of pathology is more insidious and more common, but also damaging. It involves parents who fabricate or exaggerate symptoms of illness in children, prompting overly aggressive medical evaluations and interventions. The common thread is that the treating physician plays a role in inflicting the abuse upon the child. Failure to recognize the problem is common because the condition is often not included in the differential diagnosis of challenging or confusing clinical problems. We believe that a heightened "self-awareness" of the physician's role in Munchausen syndrome by proxy will prevent or reduce the morbidity and mortality associated with this diagnosis. In addition, we believe contemporary developments within the modern health care system likely facilitate this condition.

  5. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question how web archives may fit into the rapidly expanding, but fragmented landscape of digital repositories taking care of various parts...... of the exponentially growing amounts of still more heterogeneous data materials....

  6. Enhancing the AliEn Web Service Authentication

    Science.gov (United States)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard that enables interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed on each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, but in the grid environment the common credential is the proxy certificate, used for the purpose of providing restricted proxying and delegation. An authentication framework was developed for the AliEn2 web services to add to the Apache web server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.

  7. Workflows for intelligent monitoring using proxy services.

    Science.gov (United States)

    Rüping, Stefan; Wegener, Dennis; Sfakianakis, Stelios; Sengstag, Thierry

    2009-01-01

    Grid technologies have proven to be very successful in the area of eScience, and in particular in healthcare applications. But while the applicability of workflow enacting tools for biomedical research has long since been proven, their practical adoption into regular clinical research faces some additional challenges in a grid context. In this paper, we investigate the case of data monitoring, and how to seamlessly implement the step between a one-time proof-of-concept workflow and high-performance on-line monitoring of data streams, as exemplified by the case of long-running clinical trials. We will present an approach based on proxy services that allows executing single-run workflows repeatedly with little overhead.

  8. Salmon: Robust Proxy Distribution for Censorship Circumvention

    Directory of Open Access Journals (Sweden)

    Douglas Frederick

    2016-10-01

    Full Text Available Many governments block their citizens’ access to much of the Internet. Simple workarounds are unreliable; censors quickly discover and patch them. Previously proposed robust approaches either have non-trivial obstacles to deployment, or rely on low-performance covert channels that cannot support typical Internet usage such as streaming video. We present Salmon, an incrementally deployable system designed to resist a censor with the resources of the “Great Firewall” of China. Salmon relies on a network of volunteers in uncensored countries to run proxy servers. Although any member of the public can become a user, Salmon protects the bulk of its servers from being discovered and blocked by the censor via an algorithm for quickly identifying malicious users. The algorithm entails identifying some users as especially trustworthy or suspicious, based on their actions. We impede Sybil attacks by requiring either an unobtrusive check of a social network account, or a referral from a trustworthy user.

  9. Development of six PROMIS pediatrics proxy-report item banks

    Directory of Open Access Journals (Sweden)

    Irwin Debra E

    2012-02-01

    Full Text Available Abstract Background Pediatric self-report should be considered the standard for measuring patient reported outcomes (PRO) among children. However, circumstances exist when the child is too young, cognitively impaired, or too ill to complete a PRO instrument and a proxy-report is needed. This paper describes the development process, including the proxy cognitive interviews and the large-field-test survey methods and sample characteristics, employed to produce item parameters for the Patient Reported Outcomes Measurement Information System (PROMIS) pediatric proxy-report item banks. Methods The PROMIS pediatric self-report items were converted into proxy-report items before undergoing cognitive interviews. These items covered six domains (physical function, emotional distress, social peer relationships, fatigue, pain interference, and asthma impact). Caregivers (n = 25) of children ages 5 to 17 years provided qualitative feedback on proxy-report items to assess any major issues with these items. From May 2008 to March 2009, the large-scale survey enrolled children ages 8-17 years to complete the self-report version and caregivers to complete the proxy-report version of the survey (n = 1548 dyads). Caregivers of children ages 5 to 7 years completed the proxy-report survey (n = 432). In addition, caregivers completed other proxy instruments: PedsQL™ 4.0 Generic Core Scales Parent Proxy-Report version, PedsQL™ Asthma Module Parent Proxy-Report version, and KIDSCREEN Parent-Proxy-52. Results Item content was well understood by proxies and did not require item revisions, but some proxies clearly noted that determining an answer on behalf of their child was difficult for some items. Dyads and caregivers of children ages 5-17 years old were enrolled in the large-scale testing. The majority were female (85%), married (70%), Caucasian (64%) and had at least a high school education (94%). Approximately 50% had children with a chronic health condition, primarily

  10. Mobile web browsing using the cloud

    CERN Document Server

    Zhao, Bo; Cao, Guohong

    2013-01-01

    This brief surveys existing techniques to address the problem of long delays and high power consumption for web browsing on smartphones, which can be due to the local computational limitations of the smartphone (e.g., when running JavaScript or Flash objects). To address this issue, an architecture called Virtual-Machine based Proxy (VMP) is introduced, shifting the computing from smartphones to the VMP, which may reside in the cloud. Mobile Web Browsing Using the Cloud illustrates the feasibility of deploying the proposed VMP system in 3G networks through a prototype using Xen virtual machines.

  11. Proxy-produced ethnographic work: what are the problems, issues, and dilemmas arising from proxy ethnography?

    DEFF Research Database (Denmark)

    Martinussen, Marie Louise; Højbjerg, Karin; Tamborg, Andreas Lindenskov

    2018-01-01

    This article addresses the implications of research-student cooperation in the production of empirical material. For the student to replace the experienced researcher and work under the researcher’s supervision, we call such work proxy-produced ethnographic work. The specific relations and positions arising from such a setup between the teacher/researcher and the proxy ethnographer/student are found to have implications for the ethnographies produced. This article’s main focus is to show how these relations and positions have not distorted the ethnographic work and the ethnographies but...... the research process. These ethnographic distortions will be generated and described within a framework drawn primarily on the work of sociologist Pierre Bourdieu....

  12. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    Directory of Open Access Journals (Sweden)

    Jose-Ignacio Agulleiro

    2015-06-01

    Full Text Available Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  13. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    Science.gov (United States)

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.
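
    The cache-blocking pattern described in this abstract can be sketched generically (this is not the Tomo3D code; the operation, array size, and block size below are hypothetical): traverse a 2D array in small square blocks so that each block's data stay cache-resident while they are reused, with the block size left as the tunable parameter the article studies.

```python
def transpose_blocked(a, n, block):
    """Transpose an n*n matrix stored as a flat row-major list, block by block.
    Each (block x block) tile is finished before moving to the next, so its
    rows are reused while still (hopefully) resident in cache."""
    out = [0] * (n * n)
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            # Operate on one block as much as possible before proceeding.
            for i in range(bi, min(bi + block, n)):
                for j in range(bj, min(bj + block, n)):
                    out[j * n + i] = a[i * n + j]
    return out

n = 8
a = list(range(n * n))
t = transpose_blocked(a, n, block=4)
assert all(t[j * n + i] == a[i * n + j] for i in range(n) for j in range(n))
```

    Python will not show the cache effect itself, but the loop structure is the point: the result is identical for any block size, so the block size can be tuned purely for cache behavior, which is exactly what the article's measurements quantify.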

  14. Cliff swallows Petrochelidon pyrrhonota as bioindicators of environmental mercury, Cache Creek Watershed, California

    Science.gov (United States)

    Hothem, Roger L.; Trejo, Bonnie S.; Bauer, Marissa L.; Crayon, John J.

    2008-01-01

    To evaluate mercury (Hg) and other element exposure in cliff swallows (Petrochelidon pyrrhonota), eggs were collected from 16 sites within the mining-impacted Cache Creek watershed, Colusa, Lake, and Yolo counties, California, USA, in 1997-1998. Nestlings were collected from seven sites in 1998. Geometric mean total Hg (THg) concentrations ranged from 0.013 to 0.208 µg/g wet weight (ww) in cliff swallow eggs and from 0.047 to 0.347 µg/g ww in nestlings. Mercury detected in eggs generally followed the spatial distribution of Hg in the watershed based on proximity to both anthropogenic and natural sources. Mean Hg concentrations in samples of eggs and nestlings collected from sites near Hg sources were up to five and seven times higher, respectively, than in samples from reference sites within the watershed. Concentrations of other detected elements, including aluminum, beryllium, boron, calcium, manganese, strontium, and vanadium, were more frequently elevated at sites near Hg sources. Overall, Hg concentrations in eggs from Cache Creek were lower than those reported in eggs of tree swallows (Tachycineta bicolor) from highly contaminated locations in North America. Total Hg concentrations were lower in all Cache Creek egg samples than adverse effects levels established for other species. Total Hg concentrations in bullfrogs (Rana catesbeiana) and foothill yellow-legged frogs (Rana boylii) collected from 10 of the study sites were both positively correlated with THg concentrations in cliff swallow eggs. Our data suggest that cliff swallows are reliable bioindicators of environmental Hg. © Springer Science+Business Media, LLC 2007.

  15. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    Science.gov (United States)

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

    Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses in a cache-efficient manner by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
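
    The core filtering idea can be sketched in a simplified form (Turtle itself uses a pattern-blocked Bloom filter and a sort-and-compact scheme; the filter size, hash construction, and single-pass structure here are illustrative assumptions): a Bloom filter absorbs the first occurrence of each k-mer, so only k-mers seen at least twice reach the exact counter, keeping its memory proportional to the frequent k-mers.

```python
import hashlib

M = 1 << 16  # Bloom filter size in bits (hypothetical)
K = 3        # number of hash functions

def bloom_positions(kmer):
    """Derive K bit positions from a single SHA-256 digest of the k-mer."""
    h = hashlib.sha256(kmer.encode()).digest()
    return [int.from_bytes(h[4*i:4*i+4], "big") % M for i in range(K)]

def count_frequent(reads, k):
    bloom = bytearray(M // 8)   # bit array backing the Bloom filter
    counts = {}                 # exact counts, only for repeated k-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i+k]
            pos = bloom_positions(kmer)
            if all(bloom[p >> 3] & (1 << (p & 7)) for p in pos):
                counts[kmer] = counts.get(kmer, 0) + 1   # seen before: count it
            else:
                for p in pos:                            # first sighting:
                    bloom[p >> 3] |= 1 << (p & 7)        # remember in the filter
    # Each reported count is one short (the occurrence absorbed by the filter).
    return {km: c + 1 for km, c in counts.items()}

freq = count_frequent(["ACGTACGT", "TTTT"], k=3)
print(freq["ACG"])  # -> 2 (ACG occurs twice in ACGTACGT)
```

    Singleton k-mers such as GTA never enter the counting dictionary at all; as in any Bloom-filter approach, rare hash collisions can let a singleton through, which is the false-positive behavior the abstract mentions.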

  16. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

    Full Text Available In multitasking real-time systems, the choice of scheduling algorithm is an important factor to ensure that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which affect the schedulability of the task. In this paper we compare the FP and EDF scheduling algorithms in the presence of CRPD using the state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF. This makes the choice of the task layout in memory as important as the choice of scheduling algorithm. It is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.
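
    For FP scheduling, accounting for CRPD amounts to adding a cache-reload penalty to each pre-emption in the standard response-time recurrence. A minimal sketch of that fixed-point iteration follows (the task parameters and the simple per-pre-emption penalty gamma are hypothetical; the paper uses more refined state-of-the-art CRPD analyses):

```python
def response_time(i, tasks):
    """Worst-case response time of task i under fixed-priority scheduling.
    tasks are ordered by priority (index 0 highest); each task is a tuple
    (C, T, gamma): execution time, period (= deadline), CRPD per pre-emption."""
    C_i = tasks[i][0]
    R = C_i
    while True:
        interference = sum(
            -(-R // T_j) * (C_j + g_j)        # ceil(R / T_j) pre-emptions,
            for C_j, T_j, g_j in tasks[:i]    # each costing C_j plus CRPD g_j
        )
        R_new = C_i + interference
        if R_new == R:
            return R                          # fixed point reached
        if R_new > tasks[i][1]:
            return None                       # deadline missed: unschedulable
        R = R_new

tasks = [(1, 4, 0), (2, 8, 1)]   # (C, T, gamma), highest priority first
print(response_time(1, tasks))   # -> 3: C=2 plus one pre-emption of cost 1+0
```

    Increasing gamma for the high-priority task inflates the interference term, which is how CRPD erodes schedulability; the paper's task-layout optimisation reduces exactly these gamma terms.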

  17. Incorporating cache management behavior into seed dispersal: the effect of pericarp removal on acorn germination.

    Directory of Open Access Journals (Sweden)

    Xianfeng Yi

    Full Text Available Selecting seeds for long-term storage is a key factor for food hoarding animals. Siberian chipmunks (Tamias sibiricus) remove the pericarp and scatter hoard sound acorns of Quercus mongolica over those that are insect-infested to maximize returns from caches. We have no knowledge of whether these chipmunks remove the pericarp from acorns of other species of oaks and if this behavior benefits seedling establishment. In this study, we tested whether Siberian chipmunks engage in this behavior with acorns of three other Chinese oak species, Q. variabilis, Q. aliena and Q. serrata var. brevipetiolata, and how the dispersal and germination of these acorns are affected. Our results show that when chipmunks were provided with sound and infested acorns of Quercus variabilis, Q. aliena and Q. serrata var. brevipetiolata, the two types were equally harvested and dispersed. This preference suggests that Siberian chipmunks are incapable of distinguishing between sound and insect-infested acorns. However, Siberian chipmunks removed the pericarp from acorns of these three oak species prior to dispersing and caching them. Consequently, significantly more sound acorns were scatter hoarded and more infested acorns were immediately consumed. Additionally, indoor germination experiments showed that pericarp removal by chipmunks promoted acorn germination while artificial removal showed no significant effect. Our results show that pericarp removal allows Siberian chipmunks to effectively discriminate against insect-infested acorns and may represent an adaptive behavior for cache management. Because of the germination patterns of pericarp-removed acorns, we argue that the foraging behavior of Siberian chipmunks could have potential impacts on the dispersal and germination of acorns from various oak species.

  18. Incorporating cache management behavior into seed dispersal: the effect of pericarp removal on acorn germination.

    Science.gov (United States)

    Yi, Xianfeng; Zhang, Mingming; Bartlow, Andrew W; Dong, Zhong

    2014-01-01

    Selecting seeds for long-term storage is a key factor for food hoarding animals. Siberian chipmunks (Tamias sibiricus) remove the pericarp and scatter hoard sound acorns of Quercus mongolica over those that are insect-infested to maximize returns from caches. We have no knowledge of whether these chipmunks remove the pericarp from acorns of other species of oaks and if this behavior benefits seedling establishment. In this study, we tested whether Siberian chipmunks engage in this behavior with acorns of three other Chinese oak species, Q. variabilis, Q. aliena and Q. serrata var. brevipetiolata, and how the dispersal and germination of these acorns are affected. Our results show that when chipmunks were provided with sound and infested acorns of Quercus variabilis, Q. aliena and Q. serrata var. brevipetiolata, the two types were equally harvested and dispersed. This preference suggests that Siberian chipmunks are incapable of distinguishing between sound and insect-infested acorns. However, Siberian chipmunks removed the pericarp from acorns of these three oak species prior to dispersing and caching them. Consequently, significantly more sound acorns were scatter hoarded and more infested acorns were immediately consumed. Additionally, indoor germination experiments showed that pericarp removal by chipmunks promoted acorn germination while artificial removal showed no significant effect. Our results show that pericarp removal allows Siberian chipmunks to effectively discriminate against insect-infested acorns and may represent an adaptive behavior for cache management. Because of the germination patterns of pericarp-removed acorns, we argue that the foraging behavior of Siberian chipmunks could have potential impacts on the dispersal and germination of acorns from various oak species.

  19. Attacks on One Designated Verifier Proxy Signature Scheme

    Directory of Open Access Journals (Sweden)

    Baoyuan Kang

    2012-01-01

    Full Text Available In a designated verifier proxy signature scheme, there are three participants, namely, the original signer, the proxy signer, and the designated verifier. The original signer delegates his or her signing right to the proxy signer, then the proxy signer can generate valid signatures on behalf of the original signer. But only the designated verifier can verify the proxy signature. Several designated verifier proxy signature schemes have been proposed. However, most of them were proven secure in the random oracle model, which has received a lot of criticism since the security proofs in the random oracle model are not sound with respect to the standard model. Recently, by employing Waters' hashing technique, Yu et al. proposed a new construction of designated verifier proxy signature. They claimed that the new construction is the first designated verifier proxy signature whose security does not rely on random oracles. But, in this paper, we will show some attacks on Yu et al.'s scheme. So, their scheme is not secure.

  20. Factitious Disorder by Proxy in Educational Settings: A Review

    Science.gov (United States)

    Frye, Ellen M.; Feldman, Marc D.

    2012-01-01

    Factitious disorder by proxy (FDP), historically known as Munchausen syndrome by proxy, is a diagnosis applied to parents and other caregivers who intentionally feign, exaggerate, and/or induce illness or injury in a child to get attention from health professionals and others. A review of the recent literature and our experience as consultants…

  1. Free Factories: Unified Infrastructure for Data Intensive Web Services.

    Science.gov (United States)

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M

    2008-05-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center.
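The abstract describes the batch service only as "a variation of the MapReduce model". As a minimal single-process sketch of that model in general (plain Python; not the Free Factory implementation, whose API is not given here), a word count can be phrased as a map phase emitting key-value pairs and a reduce phase aggregating per key:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, map_fn):
    # Apply the user map function to every input record,
    # yielding a flat stream of (key, value) pairs.
    return chain.from_iterable(map_fn(r) for r in records)

def reduce_phase(pairs, reduce_fn):
    # Group intermediate values by key, then reduce each group.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Classic word count: one (word, 1) pair per occurrence, summed per word.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)

lines = ["free factory free software", "commodity hardware"]
result = reduce_phase(map_phase(lines, wc_map), wc_reduce)
print(result["free"])  # prints 2
```

In a clustered setting like the one described, each node would run the map phase over its own shard of the input and the grouped pairs would be shuffled between nodes before reduction.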

  2. Behavior characterization of the shared last-level cache in a chip multiprocessor

    OpenAIRE

    Benedicte Illescas, Pedro

    2014-01-01

    This project consists in analyzing different aspects of the memory hierarchy and understanding its influence on overall system performance. The aspects analyzed are cache replacement algorithms, memory mapping schemes, and memory page policies.

  3. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy varies with the application's data sharing pattern, and is around 65% in the average case and 1% in the best case when considering the traffic pattern as a whole. For individual connections, our method produces tight worst-case bandwidths.

  4. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  5. Cache la Poudre River Basin, Larimer - Weld Counties, Colorado. Volume 3. Flood Plain Analysis Sheep Draw.

    Science.gov (United States)

    1981-10-01

    Plate 24 (special study), Cache la Poudre River Basin, Larimer-Weld Counties, Colorado: flood plain map for Sheep Draw showing flooded areas under existing conditions and under projected urbanization, with stream distance in feet upstream from the mouth.

  6. Cache Domains That are Homologous to, but Different from PAS Domains Comprise the Largest Superfamily of Extracellular Sensors in Prokaryotes.

    Science.gov (United States)

    Upadhyay, Amit A; Fleetwood, Aaron D; Adebali, Ogun; Finn, Robert D; Zhulin, Igor B

    2016-04-01

    Cellular receptors usually contain a designated sensory domain that recognizes the signal. Per/Arnt/Sim (PAS) domains are ubiquitous sensors in thousands of species ranging from bacteria to humans. Although PAS domains were described as intracellular sensors, recent structural studies revealed PAS-like domains in extracytoplasmic regions in several transmembrane receptors. However, these structurally defined extracellular PAS-like domains do not match sequence-derived PAS domain models, and thus their distribution across the genomic landscape remains largely unknown. Here we show that structurally defined extracellular PAS-like domains belong to the Cache superfamily, which is homologous to, but distinct from the PAS superfamily. Our newly built computational models enabled identification of Cache domains in tens of thousands of signal transduction proteins including those from important pathogens and model organisms. Furthermore, we show that Cache domains comprise the dominant mode of extracellular sensing in prokaryotes.

  7. Physics development of web-based tools for use in hardware clusters doing lattice physics

    Science.gov (United States)

    Dreher, P.; Akers, W.; Chen, J.; Chen, Y.; Watson, C.

    2002-03-01

    Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities.

  8. Proxy SDN Controller for Wireless Networks

    Directory of Open Access Journals (Sweden)

    Won-Suk Kim

    2016-01-01

    Full Text Available Management of wireless networks as well as wired networks by using software-defined networking (SDN has been highlighted continually. However, control features of a wireless network differ from those of a wired network in several aspects. In this study, we identify the various inefficient points when controlling and managing wireless networks by using SDN and propose SDN-based control architecture called Proxcon to resolve these problems. Proxcon introduces the concept of a proxy SDN controller (PSC for the wireless network control, and the PSC entrusted with the role of a main controller performs control operations and provides the latest network state for a network administrator. To address the control inefficiency, Proxcon supports offloaded SDN operations for controlling wireless networks by utilizing the PSC, such as local control by each PSC, hybrid control utilizing the PSC and the main controller, and locally cooperative control utilizing the PSCs. The proposed architecture and the newly supported control operations can enhance scalability and response time when the logically centralized control plane responds to the various wireless network events. Through actual experiments, we verified that the proposed architecture could address the various control issues such as scalability, response time, and control overhead.

  9. Lithium in Brachiopods - proxy for seawater evolution?

    Science.gov (United States)

    Gaspers, Natalie; Magna, Tomas; Tomasovych, Adam; Henkel, Daniela

    2017-04-01

    Marine biogenic carbonates have the potential to serve as a proxy for evolution of seawater chemistry. In order to compile a record of the past and recent δ7Li in the oceans, foraminifera shells, scleractinian corals and belemnites have been used. However, only a foraminifera-based record appears to more accurately reflect the Li isotope composition of ocean water. At present, this record is available for the Cenozoic with implications for major events during this period of time, including K/T event [1]. A record for the entire Phanerozoic has not yet been obtained. In order to extend this record to the more distant past, Li elemental/isotope systematics of brachiopods were investigated because these marine animals were already present in Early Cambrian oceans and because they are less sensitive to diagenesis-induced modifications due to their shell mineralogy (low-Mg calcite). The preliminary data indicates a species-, temperature- and salinity-independent behavior of Li isotopes in brachiopod shells. Also, no vital effects have been observed for different shell parts. The consistent offset of -4‰ relative to modern seawater is in accordance with experimental data [2]. Further data are now being collected for Cenozoic specimens to more rigorously test brachiopods as possible archives of past seawater in comparison to the existing foraminiferal records. [1] Misra & Froelich (2012) Science 335, 818-823 [2] Marriott et al. (2004) Chem Geol 212, 5-15

  10. Deciphering dynamical proxy responses from lake sediments

    Science.gov (United States)

    Ramisch, Arne; Tjallingii, Rik; Hartmann, Kai; Brauer, Achim; Diekmann, Bernhard; Haberzettl, Torsten; Kasper, Thomas; Ahlborn, Marieke

    2017-04-01

    Lakes form a reliable archive of paleoenvironmental change in the terrestrial realm. Non-destructive XRF scans provide high-resolution records of element concentrations that are commonly related to past environmental change. However, XRF records of lake sediments enclose paleoenvironmental information that originates from multiple lake-external and lake-internal forcings. The variety of environmental forcing factors can complicate a direct identification of single mechanisms, like climatic change, from XRF or other proxy records. Here we present XRF records from several Asian lake archives, which indicate asynchronous variations of similar geochemical records since the late glacial/early Holocene. All XRF time series are characterized by damped harmonic oscillations of relative element concentrations through time. The asynchronous variations can be expressed by the frequency and the rate of damping of these oscillations, which differ between the lakes. We argue that the oscillatory behavior is a result of a feedback between the physical removal and dissolution of mineral phases in catchment soils and their subsequent enrichment and deposition within the lake. We present a numerical model, which accurately simulates major Holocene variations in the element concentration of lake records, and discuss implications for the reconstruction of environmental signals from lake sediments.

  11. The Cache County Study on Memory in Aging: factors affecting risk of Alzheimer's disease and its progression after onset.

    Science.gov (United States)

    Tschanz, Joann T; Norton, Maria C; Zandi, Peter P; Lyketsos, Constantine G

    2013-12-01

    The Cache County Study on Memory in Aging is a longitudinal, population-based study of Alzheimer's disease (AD) and other dementias. Initiated in 1995 and extending to 2013, the study has followed over 5,000 elderly residents of Cache County, Utah (USA) for over twelve years. Achieving a 90% participation rate at enrolment, and spawning two ancillary projects, the study has contributed to the literature on genetic, psychosocial and environmental risk factors for AD, late-life cognitive decline, and the clinical progression of dementia after its onset. This paper describes the major study contributions to the literature on AD and dementia.

  12. Web-building spiders attract prey by storing decaying matter

    Science.gov (United States)

    Bjorkman-Chiswell, Bojun T.; Kulinski, Melissa M.; Muscat, Robert L.; Nguyen, Kim A.; Norton, Briony A.; Symonds, Matthew R. E.; Westhorpe, Gina E.; Elgar, Mark A.

    The orb-weaving spider Nephila edulis incorporates into its web a band of decaying animal and plant matter. While earlier studies demonstrate that larger spiders utilise these debris bands as caches of food, the presence of plant matter suggests additional functions. When organic and plastic items were placed in the webs of N. edulis, some of the former but none of the latter were incorporated into the debris band. Using a Y-maze olfactometer, we show that sheep blowflies Lucilia cuprina are attracted to recently collected debris bands, but that this attraction does not persist over time. These data reveal an entirely novel foraging strategy, in which a sit-and-wait predator attracts insect prey by utilising the odours of decaying organic material. The spider's habit of replenishing the debris band may be necessary to maintain its efficacy for attracting prey.

  13. Deep web

    OpenAIRE

    Bago, Neven

    2016-01-01

    The aim of the thesis "Deep Web" is to understand what the Deep Web fundamentally is and how widespread it is. Using the TOR program, the "hidden" part of the internet known as the Deep Web is accessed. The thesis describes the process of accessing the Deep Web with this program and covers its capabilities and advantages over other web browsers. The BitCoin currency, used in online transactions because of the anonymity it provides, is also examined. The aim of this work is to show to what extent …

  14. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present a general approach for obtaining cache-oblivious data structures for range reporting and a class of approximate range counting queries. This class includes three-sided range counting in the plane, 3-d dominance counting, and 3-d halfspace range counting. The constructed data structures use linear space and answer queries in the optimal query bound of O(logB(N/K)) block transfers in the worst case, where K is the number of points in the query range. As a corollary, we also obtain the first approximate 3-d halfspace range counting and 3-d dominance counting data structures with a worst-case query time of O(log(N/K)) in internal memory. An easy but important consequence of our main result is the existence of …-space cache-oblivious data structures with an optimal query bound of O(logBN + K/B) block transfers for the reporting versions of the above problems. Using standard reductions, these data structures allow us to obtain the first cache-oblivious data structures that use almost linear space and achieve the optimal query bound …

  15. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

    Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis allowing for more robust system characterization, including indices such as the 100 year flood return period. These results are significant for water quality management as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport affecting the downstream water bodies of the Sacramento River and Sacramento-San Joaquin Delta which provide drinking water to over 25 million people.
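Indices such as the 100-year flood return period come from a frequency analysis of annual peak flows. A minimal sketch of the empirical first step, using Weibull plotting positions and made-up peak values (not Cache Creek data):

```python
def return_periods(annual_maxima):
    # Rank annual peak flows from largest to smallest and assign each
    # an empirical return period T = (n + 1) / rank (Weibull plotting position).
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(flow, (n + 1) / rank) for rank, flow in enumerate(ranked, start=1)]

# Hypothetical annual peak flows (m^3/s) for a 9-year record.
peaks = [120, 310, 95, 480, 210, 150, 620, 180, 260]
for flow, T in return_periods(peaks):
    print(f"{flow:5.0f} m^3/s  ~ {T:4.1f}-yr event")
# The largest observed peak (620) is assigned a 10-year return period;
# estimating a 100-year flood from so short a record requires fitting a
# distribution to the sample rather than reading off plotting positions.
```

This is why long simulated flow series are valuable: with only a short record, the 100-year quantile must be extrapolated from a fitted distribution.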

  16. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.
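The store/reload round trip that TVC targets is easy to see in any stack-based IR. CPython's bytecode is also stack-based, so its dis module can illustrate the pattern (an analogy only; TVC itself operates on Java bytecode):

```python
import dis

def area(w, h):
    t = w * h          # transient: stored to a local slot, then reloaded
    return t + 1

ops = [ins.opname for ins in dis.get_instructions(area)]
# The stack IR spells out an explicit store/load round trip for t
# (STORE_FAST ... LOAD_FAST, possibly fused on newer CPython versions);
# a register-based target could simply keep w*h in a register.
print([op for op in ops if "FAST" in op])
```

A TVC-style optimizer would notice that `t` is consumed exactly once, immediately after being produced, and elide the store/load pair, leaving the product on the evaluation stack.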

  17. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  18. Secure Mobile Agent from Leakage-Resilient Proxy Signatures

    Directory of Open Access Journals (Sweden)

    Fei Tang

    2015-01-01

    Full Text Available A mobile agent can sign a message in a remote server on behalf of a customer without exposing its secret key; it can be used not only to search for special products or services, but also to make a contract with a remote server. Hence a mobile agent system can be used for electronic commerce as an important key technology. In order to realize such a system, Lee et al. showed that a secure mobile agent can be constructed using proxy signatures. Intuitively, a proxy signature permits an entity (the delegator) to delegate its signing right to another entity (the proxy), which signs some specified messages on behalf of the delegator. However, proxy signatures are often used in scenarios where the signing is done in an insecure environment, for example, the remote server of a mobile agent system. In such a setting, an adversary could launch side-channel attacks to exploit leakage information about the proxy key or even other secret states. Proxy signatures which are secure in the traditional security models obviously cannot provide such security. Based on this consideration, in this paper, we design a leakage-resilient proxy signature scheme for secure mobile agent systems.

  19. Proxies and consent discussions for dementia research.

    Science.gov (United States)

    Sugarman, Jeremy; Roter, Debra; Cain, Carole; Wallace, Roberta; Schmechel, Don; Welsh-Bohmer, Kathleen A

    2007-04-01

    Objectives: To better understand the nature of informed consent encounters for research involving patients with dementia that requires proxy consent. Design: Audiotaping of informed-consent encounters for a study of genetic markers for sporadic Alzheimer's disease. Setting: Outpatients at an Alzheimer's disease research center. Participants: Patients with dementia and their companions. Measurements: Audiotapes were analyzed to characterize communication style and coverage of the standard elements of informed consent and, using the Roter Interaction Analysis System, to capture the dynamics of three-way interaction between the patient, their companion, and the physician investigator. Results: Of 26 informed consent encounters, all involved a patient, a companion, and a physician. Patients had a mean Mini-Mental State Examination (MMSE) score of 21.8. For patients, 49% of their interactions involved agreement and approval (positive statements), 16% psychosocial information, 7% biomedical information, 7% asking questions, and 7% expressing emotion. Companion interactions involved 37% positive statements and 19% biomedical information. Physician interactions involved emotional expressiveness (30%) and positive statements (19%). Discussion length was positively related to MMSE score (Spearman rho=0.45). Coverage of informed consent was fairly comprehensive and had no relationship to patients' MMSE scores. Conclusion: These data should inform policies regarding the ethically appropriate ways of conducting research with cognitively impaired adults. For example, patients in this study were more silent than their companions and the physician, but when patients spoke, they primarily agreed with what was said. Although this might first seem to signal assent, such an interpretation should be made with caution for persons with dementia. In addition, previous work on informed consent has focused on its cognitive aspects, but these data reveal that the emotional and social dimensions warrant attention.

  20. Dynamically Served Web Pages (Dynamicky zasílané WWW-stránky)

    OpenAIRE

    Kotlín, Jiří

    2009-01-01

    Serving dynamic web pages places a higher load on web servers and the associated technologies. This load can, to some extent, be reduced by setting up a reverse proxy with a cache in front of the web server. The primary goal of this thesis is to implement this technique with the presently most popular web server, Apache. Apache's proxy features were first thoroughly tested and described, and later applied in practice in a real LAMP software bundle environment (Linux, Apache, PHP, MySQL).
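The thesis abstract does not include its configuration; a minimal sketch of the technique it describes, a caching reverse proxy in front of a backend LAMP server, might look like this in Apache httpd 2.4 (module paths, the backend port, and the cache directory are placeholders, not values from the thesis):

```apache
# Load the proxy and disk-cache modules (httpd 2.4).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so

<VirtualHost *:80>
    ServerName www.example.org

    # Forward every request to the backend LAMP server...
    ProxyPass        "/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/" "http://127.0.0.1:8080/"

    # ...but answer repeat requests from the disk cache when the
    # response's caching headers allow it.
    CacheEnable disk "/"
    CacheRoot   "/var/cache/apache2/proxy"
    CacheDefaultExpire 600
</VirtualHost>
```

Whether a dynamic page is actually cacheable depends on the Cache-Control/Expires headers the PHP application emits; the proxy only relieves the backend for responses marked as cacheable.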

  1. Optimizing TCP Performance over UMTS with Split TCP Proxy

    DEFF Research Database (Denmark)

    Hu, Liang; Dittmann, Lars

    2009-01-01

    Abstract: TCP performance over UMTS networks is challenged by the large delay-bandwidth product, mainly caused by the latency of link-layer ARQ retransmissions and the diversity technique at the physical layer, which are used to cope with radio transmission errors … scenario (e.g., 64 kbps). Besides, the split TCP proxy brings more performance gain for downloading large files than small ones. Finally, for the configuration of the split proxy, an aggressive initial TCP congestion window size (e.g., 10 MSS) at the proxy is particularly useful for radio links …

  2. Cryptanalytic Performance Appraisal of Improved CCH2 Proxy Multisignature Scheme

    Directory of Open Access Journals (Sweden)

    Raman Kumar

    2014-01-01

    Full Text Available Many signature schemes deploying t-out-of-n threshold schemes have been proposed, but they still lack security. In this paper, we discuss an implementation of the improved CCH1 and improved CCH2 proxy multisignature schemes based on an elliptic curve cryptosystem. We present the time complexity, space complexity, and computational overhead of the improved CCH1 and CCH2 proxy multisignature schemes. We also present a cryptanalysis of the improved CCH2 proxy multisignature scheme and show that it suffers from various attacks, namely, forgery and framing attacks.

  3. Web Sitings.

    Science.gov (United States)

    Lo, Erika

    2001-01-01

    Presents seven mathematics games, located on the World Wide Web, for elementary students, including: Absurd Math: Pre-Algebra from Another Dimension; The Little Animals Activity Centre; MathDork Game Room (classic video games focusing on algebra); Lemonade Stand (students practice math and business skills); Math Cats (teaches the artistic beauty…

  4. Fiber webs

    Science.gov (United States)

    Roger M. Rowell; James S. Han; Von L. Byrd

    2005-01-01

    Wood fibers can be used to produce a wide variety of low-density three-dimensional webs, mats, and fiber-molded products. Short wood fibers blended with long fibers can be formed into flexible fiber mats, which can be made by physical entanglement, nonwoven needling, or thermoplastic fiber melt matrix technologies. The most common types of flexible mats are carded, air...

  5. Computational Fluid Dynamics (CFD) Modeling And Analysis Delivery Order 0006: Cache-Aware Air Vehicles Unstructured Solver (AVUS)

    Science.gov (United States)

    2005-08-01

    Report AFRL-VA-WP-TM-2006-3009, Computational Fluid Dynamics (CFD) Modeling and Analysis, Delivery Order 0006: Cache-Aware Air Vehicles Unstructured Solver (AVUS), performed under contract F33615-03-D-3307-0006.

  6. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
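The reported difference between the two resource estimates can be checked with a line of arithmetic (figures taken from the abstract above):

```python
computer_tons = 2316  # million short tons, computer-generated isopach estimate
manual_tons = 2160    # million short tons, hand-drawn isopach estimate

diff = computer_tons - manual_tons
print(diff)                                  # 156 million short tons
print(round(100 * diff / computer_tons, 1))  # 6.7 -- the "about 6.7 percent" quoted
```

Note that the quoted 6.7 percent expresses the difference as a fraction of the computer-generated total.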

  7. Assessment of watershed vulnerability to climate change for the Uinta-Wasatch-Cache and Ashley National Forests, Utah

    Science.gov (United States)

    Janine Rice; Tim Bardsley; Pete Gomben; Dustin Bambrough; Stacey Weems; Sarah Leahy; Christopher Plunkett; Charles Condrat; Linda A. Joyce

    2017-01-01

    Watersheds on the Uinta-Wasatch-Cache and Ashley National Forests provide many ecosystem services, and climate change poses a risk to these services. We developed a watershed vulnerability assessment to provide scientific information for land managers facing the challenge of managing these watersheds. Literature-based information and expert elicitation is used to...

  8. Munchausen syndrome by proxy presenting as hearing loss.

    Science.gov (United States)

    Ashraf, N; Thevasagayam, M S

    2014-06-01

    To review the diagnosis of Munchausen syndrome by proxy, a factitious disorder in which symptoms are induced or feigned, usually in a child, by the caregiver. The involved caregiver seeks to gain attention or sympathy and often has a psychological need to maintain the sick role. We highlight the diagnostic difficulties and the factors that may help with diagnosis in an otolaryngology setting. We present a case of Munchausen syndrome by proxy presenting with hearing loss in a five-year-old boy, who was diagnosed eight years after his initial presentation. A literature review of Munchausen syndrome by proxy cases presenting with ENT symptoms is provided. Munchausen syndrome by proxy is a diagnosis that otolaryngologists should be aware of, particularly where recurrent or persistent illnesses in children, especially those involving otological symptoms, are refractory to the usual treatments.

  9. Munchausen Syndrome by Proxy: Unusual Manifestations and Disturbing Sequelae.

    Science.gov (United States)

    Porter, Gerald E.; And Others

    1994-01-01

    This study documents previously unreported findings in cases of Munchausen Syndrome by Proxy (in which a mother fabricates an illness in her child). In the reported case, esophageal perforation, retrograde intussusception, tooth loss, and bradycardia were found. (Author/DB)

  10. Web Components and the Semantic Web

    OpenAIRE

    Casey, Máire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition, we develop description and reasoning techniques that support a component developer in the composition activities …

  11. Applicability of a cognitive questionnaire in the elderly and proxy

    Directory of Open Access Journals (Sweden)

    Renata Areza Fegyveres

    Full Text Available Abstract The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) was developed as a screening tool for cognitive alterations. Objectives: 1) To verify the applicability of the IQCODE in elderly people with limited schooling; 2) to verify the reliability of the responses supplied by the elderly and their proxies. Methods: Individuals from a community group were evaluated using the Mini-Mental State Examination (MMSE), IQCODE and Geriatric Depression Scale (GDS). The IQCODE was applied to informants and proxies. Results: We analyzed 44 individuals, aged between 58 and 82 years (M=66.8, SD=5.97), with a mean schooling level of 3.75 years (SD=2.82), and 44 proxies with a mean age of 44.5 years (SD=13.3) and a mean schooling level of 8.25 years (SD=4.3). The mean GDS score was 8.22 (SD=4.90), and 13 participants presented a score suggestive of depressive symptoms. The mean IQCODE score was 3.26 (SD=0.69) for the elderly and 3.21 (SD=0.65) for proxy responses; there was no statistical difference between these means. On the MMSE, the mean score was 24.20 (SD=4.14), and 18 participants presented scores below the cut-off. The IQCODE answers given by the elderly in this latter group were more congruent with the MMSE than the answers of the proxies. Conclusions: The applicability of the IQCODE in a population with little schooling was verified, in that the proxy report was similar to the elderly report. We can affirm that the elderly answers were more accurate than those of the proxies, as they were closer to the MMSE score. The inclusion of a greater number of participants from community-dwelling settings is necessary to confirm the results obtained in this study.

  12. Proxy-rated quality of life in Alzheimer's disease

    DEFF Research Database (Denmark)

    Vogel, Asmus; Bhattacharya, Suvosree; Waldorff, Frans Boch

    2012-01-01

    The study investigated the change in proxy-rated quality of life (QoL) of a large cohort of home-living patients with Alzheimer's disease (AD) over a period of 36 months.

  13. Web enabled data management with DPM & LFC

    CERN Document Server

    Alvarez Ayllon, A; Fabrizio, F; Hellmich, M; Keeble, O; Brito da Rocha, R

    2012-01-01

    The Disk Pool Manager (DPM) and LCG File Catalog (LFC) are two grid data management components currently used in production with more than 240 endpoints. Together with a set of grid client tools they give the users a unified view of their data, hiding most details concerning data location and access. Recently we've put a lot of effort into developing a reliable and high-performance HTTP/WebDAV frontend to both our grid catalog and storage components, exposing the existing functionality to users accessing the services via standard clients - e.g. web browsers, curl - present in all operating systems, giving users a simple and straightforward way of interaction. In addition, as other relevant grid storage components (like dCache) expose their data using the same protocol, for the first time we had the opportunity of attempting a unified view of all grid storage using HTTP. We describe the mechanism used to integrate the grid catalog(s) with the multiple storage components - HTTP redirection -, including details ...

  14. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web p...... are crucial to be formalized by the semantic web ontologies for adaptive web. We use examples from an eLearning domain to illustrate the principles which are broadly applicable to any information domain on the web.

  15. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    Science.gov (United States)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Based on the high computational demand of CFD such simulations in 3D need a computation time of years so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well established dambreak test case.
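
The shared-memory caching idea can be illustrated outside CUDA. The sketch below is plain Python, not the gpuSPHASE code; the tile size and the all-pairs interaction are illustrative assumptions. It mimics how a thread block first stages a tile of neighbour values in a fast local buffer, then iterates over that buffer instead of re-reading "global" memory in the inner loop:

```python
def tiled_pairwise_sums(values, tile):
    """Accumulate, for every particle, a sum over all other values,
    staging the data tile by tile in a small local buffer (the CPU
    analogue of a cooperative CUDA shared-memory load)."""
    n = len(values)
    totals = [0.0] * n
    for start in range(0, n, tile):
        local = values[start:start + tile]   # one cooperative "tile load"
        for i in range(n):
            for v in local:                  # inner loop reads only the tile
                totals[i] += v
    return totals
```

On a GPU the payoff is that each tile element is read from global memory once per block instead of once per thread.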

  16. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

    Full Text Available We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus, which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were near (within 50 m or far (200 m from the railway. Relative to far ones, near transects had 2.4 times more squirrel sightings, but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  17. The Identification and Treatment of a Unique Cache of Organic Artefacts from Menorca's Bronze Age

    Directory of Open Access Journals (Sweden)

    Howard Wellman

    1996-05-01

    Full Text Available A unique cache of organic artefacts was excavated in March 1995 from Cova d'es Carritx, Menorca, a sealed cave system that was used as a mortuary in the late second or early first millennia BC. This deposit included a set of unique conical tubes made of bovine horn sheath, stuffed with hair or other fibres, and capped with wooden disks. Other materials were found in association with the tubes, including a copper-tin alloy rod. The decision to display some of the tubes required a degree of consolidative strengthening which would conflict with conservation aims of preserving the artefacts essentially unchanged for future study. The two most complete artefacts were treated by localised consolidation (with Paraloid B-72, while the other two were left untreated. The two consolidated tubes were provided with display-ready mounts, while the others were packaged to minimise the effects of handling and long-term storage.

  18. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  19. Data preservation for the HERA experiments at DESY using dCache technology

    Science.gov (United States)

    Krücker, Dirk; Schwank, Karsten; Fuhrmann, Patrick; Lewendel, Birgit; South, David M.

    2015-12-01

    We report on the status of the data preservation project at DESY for the HERA experiments and present the latest design of the storage, which is a central element for bit-preservation. The HEP experiments based at the HERA accelerator at DESY collected large and unique datasets during the period from 1992 to 2007. As part of the ongoing DPHEP data preservation efforts at DESY, these datasets must be transferred into storage systems that keep the data available for ongoing studies and guarantee safe long-term access. To achieve a high level of reliability, we use the dCache distributed storage solution and make use of its replication capabilities and tape interfaces. We also investigate a recently introduced Small File Service that allows for the fully automatic creation of tape-friendly container files.
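
The Small File Service is described only at a high level above. As a rough illustration of the container idea, the sketch below packs many small files into a single tar blob so that the tape sees one large sequential object; tar is an assumption here, not necessarily dCache's actual container format:

```python
import io
import tarfile

def pack_small_files(files):
    """Pack many small files into one tape-friendly container blob.
    `files` maps file name -> bytes; returns the container as bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unpack_small_files(blob):
    """Recover the individual small files from a container blob."""
    out = {}
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out
```

Writing one multi-megabyte container instead of thousands of kilobyte-sized files avoids the per-file overhead that makes tape drives stall.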

  20. Security in the Cache and Forward Architecture for the Next Generation Internet

    Science.gov (United States)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

    The future Internet architecture will be composed predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping stone as we build the future Internet.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
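
Frontier's central trick, caching read-only database query results with an expiry so that many wide-area readers hit caches rather than the central SQL server, can be sketched as a read-through cache. This is a toy model, not Frontier's implementation; the TTL and the `fetch` interface are illustrative assumptions:

```python
import time

class ReadThroughCache:
    """Toy read-through cache: serve repeated queries from cache until a
    time-to-live expires, then re-fetch from the backing database."""
    def __init__(self, fetch, ttl_seconds, clock=time.monotonic):
        self.fetch = fetch            # function: query -> result
        self.ttl = ttl_seconds
        self.clock = clock            # injectable for testing
        self._store = {}              # query -> (expiry, result)

    def get(self, query):
        now = self.clock()
        hit = self._store.get(query)
        if hit is not None and hit[0] > now:
            return hit[1]             # fresh cache hit, backend untouched
        result = self.fetch(query)    # miss or stale: go to the database
        self._store[query] = (now + self.ttl, result)
        return result
```

In the real system the caches are hierarchical HTTP proxies (Squid), but the freshness logic per query is essentially this.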

  2. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  3. EXtending Mobility to Publish/Subscribe Systems Using a Pro-Active Caching Approach

    Directory of Open Access Journals (Sweden)

    Abdulbaset Gaddah

    2010-01-01

    Full Text Available The publish/subscribe communication paradigm has many characteristics that lend themselves well to mobile wireless networks. Our research investigates the extension of current publish/subscribe systems to support subscriber mobility in such networks. We present a novel mobility management scheme based on a pro-active caching approach to overcome the challenges and the performance concerns of disconnected operations in publish/subscribe systems. We discuss the mechanism of our proposed scheme and present a comprehensive experimental evaluation of our approach and alternative state-of-the-art solutions based on reactive approaches and durable subscriptions. The obtained results illustrate significant performance benefits of our proposed scheme across a range of scenarios. We conclude our work by discussing a modeling approach that can be used to extrapolate the performance of our approach in a near-size environment (in terms of broker network and/or subscriber population to our experimental testbed.
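
A pro-active caching scheme of this kind can be sketched as a broker that stores matching events for a disconnected subscriber and replays them on reconnection. This is a toy model, not the authors' system; the topic-set subscription interface is an assumption:

```python
class Broker:
    """Toy publish/subscribe broker with pro-active caching: while a
    subscriber is disconnected, matching events are cached at the broker
    and replayed when the subscriber reconnects."""
    def __init__(self):
        self.subs = {}       # subscriber id -> set of subscribed topics
        self.online = {}     # subscriber id -> delivery list, or None if offline
        self.cache = {}      # subscriber id -> events cached while offline

    def subscribe(self, sub, topic):
        self.subs.setdefault(sub, set()).add(topic)
        self.online.setdefault(sub, [])
        self.cache.setdefault(sub, [])

    def disconnect(self, sub):
        self.online[sub] = None

    def reconnect(self, sub):
        replayed = self.cache[sub]
        self.cache[sub] = []
        self.online[sub] = list(replayed)   # replay cached events in order
        return replayed

    def publish(self, topic, event):
        for sub, topics in self.subs.items():
            if topic in topics:
                if self.online[sub] is None:
                    self.cache[sub].append(event)   # pro-active caching
                else:
                    self.online[sub].append(event)  # normal delivery
```

The reactive alternatives evaluated in the paper instead recover missed events on demand after reconnection, which is where the latency difference comes from.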

  4. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very wide problem that poses many challenges in financial, transport, water and food, health, etc. areas. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The obtained analytical results are related to a practical experiment showing interesting and valuable results.
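
The stretched exponential of the title is, in its classic Kohlrausch form, f(t) = exp(-(t/τ)^β); the paper's specific modification is not reproduced here. A minimal implementation shows how β < 1 produces the heavy tail associated with long-range dependence, while β = 1 recovers the ordinary exponential:

```python
import math

def stretched_exp(t, tau, beta):
    """Kohlrausch stretched exponential exp(-(t/tau)**beta).
    beta = 1 gives ordinary exponential decay; beta < 1 decays faster
    at first but leaves a much heavier tail at large t."""
    return math.exp(-((t / tau) ** beta))
```

For example, at t = 2τ the β = 0.5 curve retains noticeably more mass than the β = 1 curve, which is the qualitative signature fitted to cache-behaviour data.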

  5. An illustration of web survey

    DEFF Research Database (Denmark)

    He, Chen

    A former study in the Danish primary schools has shown that there is an association between organic school food policies and indicators (proxies) for healthy eating among children when school food coordinators' statements on indicators (proxies) for healthy eating are used as variables. This project continues to search for the above signs of associations, but also involves a "bottom" level (pupils) perspective in addition to the "top" level (school food coordinators) of the previous study. The project studies the following hypothesis: organic food service praxis/policy (POP) is associated with praxis/policies for healthier eating in Danish school food service; in other words, whether organic procurement policies and the resulting praxis in schools can help build healthier eating habits among pupils in such schools as compared to schools without organic policies/praxis. The last perspective is to be studied in a comparative study design where the Danish case (existing data from WBQ) will be compared with new data from school food service in Germany, Italy and Finland. These data are going to be collected through a web survey.

  6. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
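
The C-SCAN policy the authors adapt can be sketched in a few lines. This is generic C-SCAN ordering over abstract track positions; the CD-ROM-specific seek-time adaptations from the paper are not modelled:

```python
def cscan_order(pending, head):
    """C-SCAN (circular SCAN) service order: serve requests at or beyond
    the current head position in ascending order, then jump back to the
    lowest outstanding request and continue ascending in one direction."""
    ahead = sorted(r for r in pending if r >= head)
    behind = sorted(r for r in pending if r < head)
    return ahead + behind
```

Servicing in one sweep direction keeps seek times bounded and service intervals even, which is what makes the buffer-size analysis for continuous playback tractable.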

  7. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California.

    Science.gov (United States)

    Domagalski, Joseph L; Alpers, Charles N; Slotton, Darell G; Suchanek, Thomas H; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of ¹⁸O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  8. Fabryq: Using Phones as Smart Proxies to Control Wearable Devices from the Web

    Science.gov (United States)

    2014-06-12

    ...velopment cycle. In future versions of fabryq, some of these points of interaction can be reduced or eliminated, e.g., by providing an integrated workbench

  9. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge which is aimed at accessibility and the sharing of content, facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (socio-semantic Web) on a world level since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).

  10. Proxy comparisons for Paleogene sea water temperature reconstructions

    Science.gov (United States)

    de Bar, Marijke; de Nooijer, Lennart; Schouten, Stefan; Ziegler, Martin; Sluijs, Appy; Reichart, Gert-Jan

    2017-04-01

    Several studies have reconstructed Paleogene seawater temperatures, using single- or multi-proxy approaches (e.g. Hollis et al., 2012 and references therein), particularly comparing TEX86 with foraminiferal δ18O and Mg/Ca. Whereas trends often agree relatively well, absolute temperatures can differ significantly between proxies, possibly because they are often applied to (extreme) climate events/transitions (e.g. Sluijs et al., 2011), where certain assumptions underlying the temperature proxies may not hold true. A more general long-term multi-proxy temperature reconstruction is therefore necessary to validate the different proxies and underlying presumed boundary conditions. Here we apply a multi-proxy approach using foraminiferal calcite and organic proxies to generate a low-resolution, long-term (80 Myr) paleotemperature record for the Bass River core (New Jersey, North Atlantic). Oxygen isotopes (δ18O), clumped isotopes (Δ47) and Mg/Ca of benthic foraminifera, as well as the organic proxies MBT'-CBT, TEX86H, U37K' index and the LDI, were determined on the same sediments. The youngest samples of Miocene age are characterized by a high BIT index (>0.8) and fractional abundance of the C32 1,15-diol (>0.6; de Bar et al., 2016) and the absence of foraminifera, all suggesting high continental input and shallow depths. The older sediment layers (~30 to 90 Ma) display BIT values and C32 1,15-diol fractional abundances global transition from the Cretaceous to Eocene greenhouse world into the icehouse climate. The TEX86H sea surface temperature (SST) record shows a gradual cooling over time of ~35 to 20 °C, whereas the δ18O-derived bottom water temperatures (BWTs) decrease from ~20 to 10 °C, and the Mg/Ca- and Δ47-derived BWTs decrease from ~25 to 15 °C. The absolute temperature difference between the δ18O and Δ47 estimates might be explained by local variations in seawater δ18O composition. Similarly, the difference in Mg/Ca- and δ18O-derived BWTs is likely caused by

  11. Tracking Seed Fates of Tropical Tree Species: Evidence for Seed Caching in a Tropical Forest in North-East India

    Science.gov (United States)

    Sidhu, Swati; Datta, Aparajita

    2015-01-01

    Rodents affect the post-dispersal fate of seeds by acting either as on-site seed predators or as secondary dispersers when they scatter-hoard seeds. The tropical forests of north-east India harbour a high diversity of little-studied terrestrial murid and hystricid rodents. We examined the role played by these rodents in determining the seed fates of tropical evergreen tree species in a forest site in north-east India. We selected ten tree species (3 mammal-dispersed and 7 bird-dispersed) that varied in seed size and followed the fates of 10,777 tagged seeds. We used camera traps to determine the identity of rodent visitors, visitation rates and their seed-handling behavior. Seeds of all tree species were handled by at least one rodent taxon. Overall rates of seed removal (44.5%) were much higher than direct on-site seed predation (9.9%), but seed-handling behavior differed between the terrestrial rodent groups: two species of murid rodents removed and cached seeds, and two species of porcupines were on-site seed predators. In addition, a true cricket, Brachytrupes sp., cached seeds of three species underground. We found 309 caches formed by the rodents and the cricket; most were single-seeded (79%) and seeds were moved up to 19 m. Over 40% of seeds were re-cached from primary cache locations, while about 12% germinated in the primary caches. Seed removal rates varied widely amongst tree species, from 3% in Beilschmiedia assamica to 97% in Actinodaphne obovata. Seed predation was observed in nine species. Chisocheton cumingianus (57%) and Prunus ceylanica (25%) had moderate levels of seed predation while the remaining species had less than 10% seed predation. We hypothesized that seed traits that provide information on resource quantity would influence rodent choice of a seed, while traits that determine resource accessibility would influence whether seeds are removed or eaten. Removal rates significantly decreased (p seed size. Removal rates were significantly

  12. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper. A type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time at which the data was sampled or collected is critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages of certain types sampled within some time bound, instead of the entire subset. Hence, in order to cater for such situations, in this paper we propose a time-and-identity-based proxy reencryption scheme that takes into account the time at which the data was collected, in addition to its type, as a factor when categorizing data. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.
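
The delegation logic, though not the cryptography, can be sketched as a policy check: a re-encryption key scoped to a message type and a time bound lets the proxy transform only matching ciphertexts. This is a toy illustration; the field names are assumptions and no actual encryption is performed:

```python
def proxy_should_reencrypt(msg_type, msg_time, delegation):
    """Decide whether the proxy may transform a ciphertext for the
    delegatee: the message's type must match the delegated types and
    its sampling time must fall inside the delegated time bound."""
    t0, t1 = delegation["time_bound"]
    return msg_type in delegation["types"] and t0 <= msg_time <= t1
```

In the actual scheme this filtering is enforced cryptographically, because the re-encryption key simply fails to transform ciphertexts outside its (type, time) scope, rather than by a trusted check at the proxy.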

  13. A test of the adaptive specialization hypothesis: population differences in caching, memory, and the hippocampus in black-capped chickadees (Poecile atricapilla).

    Science.gov (United States)

    Pravosudov, Vladimir V; Clayton, Nicola S

    2002-08-01

    To test the hypothesis that accurate cache recovery is more critical for birds that live in harsh conditions where the food supply is limited and unpredictable, the authors compared food caching, memory, and the hippocampus of black-capped chickadees (Poecile atricapilla) from Alaska and Colorado. Under identical laboratory conditions, Alaska chickadees (a) cached significantly more food; (b) were more efficient at cache recovery; (c) performed more accurately on one-trial associative learning tasks in which birds had to rely on spatial memory, but did not differ when tested on a nonspatial version of this task; and (d) had significantly larger hippocampal volumes containing more neurons compared with Colorado chickadees. The results support the hypothesis that these population differences may reflect adaptations to a harsh environment.

  14. Development of an Authentication and Authorization Mechanism for Config Management in Linux Container-Based Shared Web Hosting

    Directory of Open Access Journals (Sweden)

    Saifuddin Saifuddin

    2016-08-01

    The results show that the configs of web applications residing in directories on a single server can be read using these methods, but cannot be decoded to reveal the user, password, and dbname, because authorization is granted such that decoding is possible only from the directory already listed. Performance tests for latency, memory, and CPU were then carried out against the previous system. With the cache in use, the response time when accessed simultaneously at 20 clicks per user was 941.4 ms for the old system versus 786.6 ms for the new one.

  15. Development of Grid-like Applications for Public Health Using Web 2.0 Mashup Techniques

    Science.gov (United States)

    Scotch, Matthew; Yip, Kevin Y.; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies including Yahoo! Pipes within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a “systems of systems,” often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics. PMID:18755998
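
The "varied timeouts" failure mode described above suggests a defensive aggregation pattern: treat each source in the mashup as unreliable and degrade gracefully. The sketch below is generic, not the Yahoo! Pipes application itself; the source names and record shapes are illustrative:

```python
def mashup(sources):
    """Toy mashup aggregator for a 'system of systems': pull records
    from several heterogeneous sources and merge them, skipping any
    source that fails (e.g. times out) instead of letting one site
    break the whole application."""
    merged, failed = [], []
    for name, fetch in sources.items():
        try:
            merged.extend(fetch())
        except Exception:
            failed.append(name)   # degrade gracefully, record the failure
    return merged, failed
```

A production version would also bound each fetch with an explicit timeout and cache the last good response per source, which is exactly where the hosted Web 2.0 tools of the time gave developers too little control.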

  16. Development of grid-like applications for public health using Web 2.0 mashup techniques.

    Science.gov (United States)

    Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies including Yahoo! Pipes within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "systems of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.

  17. A new way to proxy levels of infrastructure development

    Directory of Open Access Journals (Sweden)

    Steve Pickering

    2017-01-01

    Full Text Available Researchers in many fields have needed to develop a measure of infrastructure, and many proxies have been used toward this end, such as night light data and the Digital Chart of the World. Yet there are issues in using these methods. This paper presents a new way of proxying infrastructure: analysing the file sizes of map images on the Bing, Google, OpenStreetMap and Sina websites. The paper also demonstrates four ways in which this can be achieved. This approach is by no means perfect and does not solve all of the difficulties presented by other methods. Nevertheless, it does provide a simple and functional alternative proxy for level of infrastructure development.
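
Once tile images have been downloaded, the proxy itself is just arithmetic on byte counts. A minimal sketch, under the assumption that mean tile file size per region is the statistic of interest (the paper describes four variants of the approach):

```python
def infrastructure_proxy(tile_sizes_by_region):
    """Rank regions by the mean file size (in bytes) of their map-tile
    images: more roads, buildings and labels compress worse, so larger
    tiles are taken as a proxy for denser infrastructure."""
    means = {region: sum(sizes) / len(sizes)
             for region, sizes in tile_sizes_by_region.items()}
    return sorted(means, key=means.get, reverse=True)
```

The input would come from saving map images for a grid of coordinates from a provider such as OpenStreetMap and recording each file's size on disk.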

  18. Munchausen Syndrome By Proxy Admitting with Bloody Urine and Stool

    Directory of Open Access Journals (Sweden)

    Tugba Koca

    2014-02-01

    Full Text Available Munchausen syndrome by proxy is a severe form of child abuse in which disease symptoms and signs are fabricated or induced by parents or caregivers, and the child is persistently presented to doctors. A delay in diagnosis may have a severe negative impact on the spiritual, physical, mental and social development of these patients and may even lead to death. Symptoms usually disappear in the absence of the perpetrator, and the diagnosis is extremely difficult. We report a 21-month-old boy who had been taken to many centers over the preceding six months because of bleeding from various parts of the body, and whose symptoms could not be explained by any physical cause after tests were conducted. He was finally admitted to our center with bloody urine and stools and was diagnosed with Munchausen syndrome by proxy. In cases with recurrent hospital admissions in which no apparent disease can be diagnosed, Munchausen syndrome by proxy should be among the differential diagnoses.

  19. Research on implementation of proxy Arp in IP DSLAM

    Science.gov (United States)

    Cheng, Chuanqing; Wang, Li; Huang, Qiugen

    2005-02-01

    As Ethernet is applied more and more in public network environments and xDSL becomes the most common access mode, the IP-kernel DSLAM undertakes functions such as service distribution and convergence, security management, and customer management. Facing the contradiction between the need for port isolation and the shortage of IP addresses, VLAN aggregation technology is applied in the DSLAM. Proxy ARP is what enables communication between two VLANs that share the same IP subnet. This paper introduces how to implement proxy ARP in the DSLAM. The detailed TCP/IP communication procedure between two hosts and the relation between VLANs and network segments are discussed. The proxy ARP model and its implementation in the IP DSLAM are also described, and conformance testing is presented.
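
    The decision rule at the heart of proxy ARP in this setting can be sketched in a few lines. The sub-VLAN table, addresses and MACs below are hypothetical; the point is only the logic: within one aggregated subnet, ARP requests between hosts in the same sub-VLAN get a normal answer, while requests across isolated sub-VLANs are answered with the DSLAM's own MAC so that traffic is relayed through it.

```python
import ipaddress

DSLAM_MAC = "00:aa:bb:cc:dd:ee"                     # hypothetical DSLAM MAC
SUPER_SUBNET = ipaddress.ip_network("10.0.0.0/24")  # aggregated subnet
HOST_VLAN = {"10.0.0.10": 101, "10.0.0.11": 101, "10.0.0.20": 102}

def proxy_arp_reply(sender_ip, target_ip, mac_table):
    """Return the MAC the DSLAM puts in its ARP reply, or None (stay silent)."""
    s, t = ipaddress.ip_address(sender_ip), ipaddress.ip_address(target_ip)
    if s not in SUPER_SUBNET or t not in SUPER_SUBNET:
        return None                                 # not our aggregated subnet
    if HOST_VLAN.get(sender_ip) == HOST_VLAN.get(target_ip):
        return mac_table.get(target_ip)             # same sub-VLAN: normal ARP
    # Isolated sub-VLANs sharing the subnet: answer with the DSLAM's own MAC,
    # so frames go to the DSLAM, which forwards them between sub-VLANs.
    return DSLAM_MAC

macs = {"10.0.0.11": "02:00:00:00:00:11", "10.0.0.20": "02:00:00:00:00:20"}
assert proxy_arp_reply("10.0.0.10", "10.0.0.11", macs) == "02:00:00:00:00:11"
assert proxy_arp_reply("10.0.0.10", "10.0.0.20", macs) == DSLAM_MAC
```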

  20. Seed drops and caches by the harvester ant Messor barbarus: do they contribute to seed dispersal in Mediterranean grasslands?

    Science.gov (United States)

    Detrain, C.; Tasse, Olivier

    To determine whether the harvester ant Messor barbarus acts as a seed disperser in Mediterranean grasslands, the accuracy level of seed processing was assessed in the field by quantifying seed drops by loaded foragers. In the vicinity of exploited seed patches, 3 times as many diaspores were found as in controls due to seed losses by foragers. Over trails, up to 30% of harvested seeds were dropped, singly, by workers, but all were recovered by nestmates within 24 h. Seeds were also dropped within temporary caches, with very few viable diaspores being left per cache when ants no longer used the trail. Globally, ant-dispersed diaspores accounted for only 0.1% of seeds harvested by M. barbarus. We discuss the possible significance for grassland vegetation of harvester-ant-mediated seed dispersal.

  1. A population study of Alzheimer's disease: findings from the Cache County Study on Memory, Health, and Aging.

    Science.gov (United States)

    Tschanz, Joann T; Treiber, Katherine; Norton, Maria C; Welsh-Bohmer, Kathleen A; Toone, Leslie; Zandi, Peter P; Szekely, Christine A; Lyketsos, Constantine; Breitner, John C S

    2005-01-01

    There are several population-based studies of aging, memory, and dementia being conducted worldwide. Of these, the Cache County Study on Memory, Health and Aging is noteworthy for its large number of "oldest-old" members. This study, which has been following an initial cohort of 5,092 seniors since 1995, has reported among its major findings the role of the Apolipoprotein E gene on modifying the risk for Alzheimer's disease (AD) in males and females and identifying pharmacologic compounds that may act to reduce AD risk. This article summarizes the major findings of the Cache County study to date, describes ongoing investigations, and reports preliminary analyses on the outcome of the oldest-old in this population, the subgroup of participants who were over age 84 at the study's inception.

  2. Health anxiety by proxy in women with severe health anxiety

    DEFF Research Database (Denmark)

    Thorgaard, Mette Viller; Frostholm, Lisbeth; Walker, Lynn

    2017-01-01

    Health anxiety (HA) refers to excessive worries and anxiety about harbouring serious illness based on misinterpretation of bodily sensations or changes as signs of serious illness. Severe HA is associated with disability and high health care costs. However, the impact of parental HA on excessive...... concern with their children's health (health anxiety by proxy) is scantly investigated. The aim of this study is to investigate HA by proxy in mothers with severe HA. Fifty mothers with severe HA and two control groups were included, i.e. mothers with rheumatoid arthritis (N = 49) and healthy mothers (N...

  3. TFC - Accesibilidad web

    OpenAIRE

    Aguilar Garzón, Daniel

    2011-01-01

    Study of 10 websites of the uoc.edu portal, based on the W3C web accessibility guidelines.

  4. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents an end-to-end interference minimizing uniquely designed high performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers to deliver a seamless high performance I/O stack. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  5. A Cross-Layer Framework for Designing and Optimizing Deeply-Scaled FinFET-Based Cache Memories

    Directory of Open Access Journals (Sweden)

    Alireza Shafaei

    2015-08-01

    Full Text Available This paper presents a cross-layer framework in order to design and optimize energy-efficient cache memories made of deeply-scaled FinFET devices. The proposed design framework spans the device, circuit and architecture levels and considers both super- and near-threshold modes of operation. Initially, at the device level, seven FinFET devices on a 7-nm process technology are designed, in which only one geometry-related parameter (e.g., fin width, gate length, gate underlap) is changed per device. Next, at the circuit level, standard 6T and 8T SRAM cells made of these 7-nm FinFET devices are characterized and compared in terms of static noise margin, access latency, leakage power consumption, etc. Finally, cache memories with all different combinations of devices and SRAM cells are evaluated at the architecture level using a modified version of the CACTI tool with FinFET support and other considerations for deeply-scaled technologies. Using this design framework, it is observed that an L1 cache memory made of longer-channel FinFET devices operating in the near-threshold regime achieves the minimum energy operation point.

  6. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  7. Slices: A shape-proxy based on planar sections

    KAUST Repository

    McCrae, James

    2011-12-01

    Minimalist object representations or shape-proxies that spark and inspire human perception of shape remain an incompletely understood, yet powerful aspect of visual communication. We explore the use of planar sections, i.e., the contours of intersection of planes with a 3D object, for creating shape abstractions, motivated by their popularity in art and engineering. We first perform a user study to show that humans do define consistent and similar planar section proxies for common objects. Interestingly, we observe a strong correlation between user-defined planes and geometric features of objects. Further we show that the problem of finding the minimum set of planes that capture a set of 3D geometric shape features is both NP-hard and not always the proxy a user would pick. Guided by the principles inferred from our user study, we present an algorithm that progressively selects planes to maximize feature coverage, which in turn influence the selection of subsequent planes. The algorithmic framework easily incorporates various shape features, while their relative importance values are computed and validated from the user study data. We use our algorithm to compute planar slices for various objects, validate their utility towards object abstraction using a second user study, and conclude showing the potential applications of the extracted planar slice shape proxies.

  8. Proxy indicators as measure of local economic dispositions in South ...

    African Journals Online (AJOL)

    Even though South Africa is in a more fortunate position with regard to the availability of such data, it also has data gaps, notably with regard to informal economic activities in the rural areas of the country. This exploratory article engages the use of proxy indicators to provide cues as to the state of a local economy.

  9. Proxy indicators as measure of local economic dispositions in South ...

    African Journals Online (AJOL)

    The growth of spare-part sales mirrors the behaviour of the national economy more accurately than used and new vehicles. BER: Retail Survey. (2005-2010). Used vehicles. 0.53. Spare Parts. 0.80. Banking-related proxy indicators. 13. House bonds. 0.43. Although some similarities exist between the national economy and ...

  10. Munchausen Syndrome by Proxy: A Study of Psychopathology.

    Science.gov (United States)

    Bools, Christopher; And Others

    1994-01-01

    This study evaluated 100 mothers with Munchausen Syndrome by Proxy (the fabrication of illness by a mother in her child). Approximately half of the mothers had either smothered or poisoned their child as part of their fabrications. Lifetime psychiatric histories were reported for 47 of the mothers. The most notable psychopathology was personality…

  11. Munchausen Syndrome by Proxy: Mother Fabricates Infant's Hearing Impairment.

    Science.gov (United States)

    Kahn, Gerri; Goldman, Ellen

    1991-01-01

    Case study reports a case of Munchausen Syndrome by Proxy, a form of child abuse in which the mother presents a child for treatment for a condition she herself has invented or created. This case study describes the ways in which a mother obtained a diagnosis of sensorineural hearing loss as well as amplification for her normally hearing infant.…

  12. Identifying and Responding to Munchausen Syndrome by Proxy.

    Science.gov (United States)

    Pearl, Peggy T.

    1995-01-01

    Defines Munchausen Syndrome by Proxy in children up to eight years, in which the mother falsifies illness in her child by simulating or producing illness, bringing about frequent hospitalizations, painful tests, potentially harmful treatment, and in extreme cases, death. Describes symptoms and suggested professional actions. (DR)

  13. Munchausen by Proxy Victims in Adulthood: A First Look.

    Science.gov (United States)

    Libow, Judith A.

    1995-01-01

    Childhood experiences and long-term psychological outcomes were investigated with 10 adults, ages 33 through 71, who were self-identified victims of illness fabrication by a parent (Munchausen by Proxy). During childhood they felt unloved and unsafe and had emotional and physical problems. As adults, problems included insecurity, reality-testing…

  14. Munchausen Syndrome by Proxy (MSBP): An Intergenerational Perspective.

    Science.gov (United States)

    Rappaport, Sol R.; Hochstadt, Neil J.

    1993-01-01

    Presents new information about Munchausen Syndrome by Proxy (MSBP), factitious disorder in which caretaker may induce or exaggerate medical illness in his or her child that may lead to illness and even death. Provides psychosocial history of caregiver using intergenerational model. Presents case of MSBP involving three siblings and information…

  15. Shareholder Activism through Proxy Proposals : The European Perspective

    NARCIS (Netherlands)

    Cziraki, P.; Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper is the first to investigate the corporate governance role of shareholder-initiated proxy proposals in European firms. While proposals in the US are nonbinding even if they pass the shareholder vote, they are legally binding in the UK and most of Continental Europe. Nonetheless, submissions

  16. A comparison of Solar proxy-magnetometry diagnostics

    NARCIS (Netherlands)

    Leenaarts, J.|info:eu-repo/dai/nl/304837946; Rutten, R.J.|info:eu-repo/dai/nl/074143662; Carlsson, M.; Uitenbroek, H.

    2006-01-01

    Aims. We test various proxy-magnetometry diagnostics, i.e., brightness signatures of small-scale magnetic elements, for studying magnetic field structures in the solar photosphere. Methods. Images are numerically synthesized from a 3D solar magneto-convection simulation for, respectively, the G band

  17. Munchausen by Proxy (MBP) Maltreatment: An International Educational Challenge.

    Science.gov (United States)

    Lasher, Louisa J.

    2003-01-01

    This article is an introduction to a special section on Munchausen Syndrome by Proxy (MSBP) as a form of child maltreatment. In MSBP the perpetrator has deliberately induced, fabricated, or exaggerated a physical and/or psychological-behavioral-mental health problem in another. The article stresses the importance of obtaining an MSBP finding of…

  18. SINOMA - a better tool for proxy based reconstructions?

    Science.gov (United States)

    Buras, Allan; Thees, Barnim; Czymzik, Markus; Dräger, Nadine; Kienel, Ulrike; Neugebauer, Ina; Ott, Florian; Scharnweber, Tobias; Simard, Sonia; Slowinski, Michal; Slowinski, Sandra; Tecklenburg, Christina; Zawiska, Izabela; Wilmking, Martin

    2014-05-01

    Our knowledge on past environmental conditions largely relies on reconstructions that are based on linear regressions between proxy variables (e.g. tree-rings, lake sediments, ice cores) covering a comparably long period (centuries to millennia) and environmental parameters (e.g. climate data) of which only rather short measurement series exist (mostly decades). In general, the corresponding measurements are prone to errors. For instance, air temperature records that are to be prolonged by reconstruction from tree-rings are normally not measured in situ, i.e. where the trees used for reconstruction are growing. In contrast, the variation of the tree-ring properties used as proxies depends not only on temperature variations but also on other environmental variables and biological effects. However, if regressions are based on noisy data, knowledge of the noise intensity of both predictor and predictand is needed, and model parameter estimates (slope and intercept) will be erroneous if information on the noise is not included in their estimation (Kutzbach et al., 2011). Here, we investigate the performance of the new Sequential Iterative Noise Matching Algorithm (SINOMA; Thees et al., 2009; Thees et al., submitted) on a variety of typical proxy data of differing temporal resolution (i.e. hourly (dendrometers, piezometers), seasonal (tree-rings), and annual (tree rings and varved lake sediments)). For each of the investigated proxies a number of pseudo-proxy datasets is generated; that is, to each proxy variable two different noises are added, resulting in two noisy variables that originate from a common signal (the proxy) and for which the respective error noises and the true model parameters (slope and intercept) between both are known. SINOMA is applied to each of these pseudo-proxy datasets and its performance is evaluated against traditional regression techniques. 
The herewith submitted contribution thus focuses on the applicability of SINOMA rather

  19. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

    Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t

  20. Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs

    Directory of Open Access Journals (Sweden)

    M. Komar

    2015-01-01

    Full Text Available In this paper, a principal architecture of a common-purpose CPU and its main components is discussed, the evolution of CPUs is considered, and drawbacks that prevent future CPU development are mentioned. Further, solutions proposed so far are addressed and a new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables reliable interaction between cores in multicore CPUs using the terahertz band, 0.1-10 THz. The presented architecture addresses the scalability problem of existing processors and may potentially allow scaling them to tens of cores. As an in-depth analysis of the applicability of the suggested architecture requires accurate prediction of traffic in current and next generations of processors, we consider a set of approaches for traffic estimation in modern CPUs, discussing their benefits and drawbacks. The authors identify traffic measurement using existing software tools as the most promising approach for traffic estimation, and they use the Intel Performance Counter Monitor for this purpose. Three types of CPU load are considered, including two artificial tests and background system load. For each load type, the amount of data transmitted through the L2-L3 interface is reported for various input parameters, including its dependence on the number of active cores and the operational frequency.
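
    The arithmetic behind such counter-based estimation can be sketched in a few lines. The counter values and sampling interval below are hypothetical; real readings would come from a tool such as the Intel Performance Counter Monitor, which reports transferred cache lines that are then scaled by the (typically 64-byte) line size.

```python
CACHE_LINE_BYTES = 64  # typical x86 cache-line size

def l2_l3_data_rate(lines_before, lines_after, interval_s):
    """Estimate L2-L3 traffic in bytes per second from two reads of a
    hardware counter that counts transferred cache lines."""
    return (lines_after - lines_before) * CACHE_LINE_BYTES / interval_s

# Hypothetical counter readings taken 0.5 s apart:
rate = l2_l3_data_rate(1_000_000, 9_000_000, 0.5)
assert rate == 1.024e9  # about 1 GB/s through the L2-L3 interface
```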

  1. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    Science.gov (United States)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  2. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  3. An optimal and practical cache-oblivious algorithm for computing multiresolution rasters

    DEFF Research Database (Denmark)

    Arge, L.; Brodal, G.S.; Truelsen, J.

    2013-01-01

    where each cell of Gμ stores the average of the values of μ x μ cells of G . Here we consider the case where G is so large that it does not fit in the main memory of the computer. We present a novel algorithm that solves this problem in O(scan(N)) data block transfers from/to the external memory......, and in θ(N) CPU operations; here scan(N) is the number of block transfers that are needed to read the entire dataset from the external memory. Unlike previous results on this problem, our algorithm achieves this optimal performance without making any assumptions on the size of the main memory...... of the computer. Moreover, this algorithm is cache-oblivious; its performance does not depend on the data block size and the main memory size. We have implemented the new algorithm and we evaluate its performance on datasets of various sizes; we show that it clearly outperforms previous approaches on this problem...
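
    The reduction each cell of Gμ performs is easy to state; the paper's contribution lies in ordering the block transfers so that the computation is I/O-optimal and cache-oblivious. The sketch below shows only the in-memory averaging step on a toy grid, not the external-memory algorithm itself.

```python
def coarsen(G, mu):
    """Build G_mu, where each cell is the average of a mu x mu block of G.
    G is a list of rows; len(G) and row lengths must be multiples of mu."""
    n = len(G)
    return [
        [
            sum(G[i * mu + di][j * mu + dj]
                for di in range(mu) for dj in range(mu)) / (mu * mu)
            for j in range(n // mu)
        ]
        for i in range(n // mu)
    ]

G = [[1, 1, 3, 3],
     [1, 1, 3, 3],
     [5, 5, 7, 7],
     [5, 5, 7, 7]]
assert coarsen(G, 2) == [[1.0, 3.0], [5.0, 7.0]]
```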

  4. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Full Text Available Field-programmable gate arrays (FPGAs) and other reconfigurable computing (RC) devices have been widely shown to have numerous advantages, including order-of-magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures). In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
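
    Stripped of the FPGA specifics, the traversal-cache idea amounts to recording one pointer-chasing traversal as a flat stream and replaying that stream for repeated traversals; on the hardware side, it is this flat buffer that would be streamed to the accelerator. A minimal software sketch (all names hypothetical):

```python
class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def traverse(root):
    """Depth-first pointer-chasing traversal (the irregular access pattern)."""
    yield root.value
    for c in root.children:
        yield from traverse(c)

class TraversalCache:
    """Serialize a traversal once; replay it as a flat stream afterwards."""
    def __init__(self):
        self._stream = None
    def stream(self, root):
        if self._stream is None:                 # first visit: chase pointers
            self._stream = list(traverse(root))  # ...and record the flat order
        return self._stream                      # repeated visits: stream it

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
cache = TraversalCache()
assert cache.stream(tree) == [1, 2, 4, 3]
assert cache.stream(tree) == [1, 2, 4, 3]  # served from the flat buffer
```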

  5. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutional reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.
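
    Although the paper's SCC algorithms are more involved, the basic collaborative lookup, local cache first, then neighboring fog nodes, and only then the origin, can be sketched as follows (node names and the key space are hypothetical, not the paper's protocol):

```python
class FogNode:
    """Minimal collaborative-caching node: look up locally, then ask
    neighbor fog nodes, and only then fetch from the origin/cloud."""
    def __init__(self, name, origin):
        self.name, self.origin = name, origin
        self.cache = {}
        self.neighbors = []

    def get(self, key):
        if key in self.cache:                     # 1. local hit
            return self.cache[key], self.name
        for nb in self.neighbors:                 # 2. neighbor hit
            if key in nb.cache:
                self.cache[key] = nb.cache[key]   # cache on the way back
                return self.cache[key], nb.name
        value = self.origin[key]                  # 3. fall back to origin
        self.cache[key] = value
        return value, "origin"

origin = {"sensor/42": "23.5C"}
a, b = FogNode("A", origin), FogNode("B", origin)
a.neighbors, b.neighbors = [b], [a]
assert b.get("sensor/42") == ("23.5C", "origin")  # first fetch: origin
assert a.get("sensor/42") == ("23.5C", "B")       # then: neighbor hit
assert a.get("sensor/42") == ("23.5C", "A")       # then: local hit
```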

  6. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example, which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is produced (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). From this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute with a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  7. Optimization of ETL Process in Data Warehouse Through a Combination of Parallelization and Shared Cache Memory

    Directory of Open Access Journals (Sweden)

    M. Faridi Masouleh

    2016-12-01

    Full Text Available Extraction, Transformation and Loading (ETL) is introduced as one of the notable subjects in the optimization, management, improvement and acceleration of processes and operations in databases and data warehouses. The creation of ETL processes is potentially one of the greatest tasks of data warehouses, and so their production is a time-consuming and complicated procedure. Without optimization of these processes, the implementation of projects in the data warehouse area is costly, complicated and time-consuming. The present paper used a combination of parallelization methods and shared cache memory in distributed systems based on a data warehouse. According to the conducted assessment, the proposed method exhibited a 7.1% speed improvement over the Kettle optimization tool and 7.9% over the Talend tool in terms of ETL process execution time. Therefore, parallelization could notably improve the ETL process. It eventually allowed the management and integration processes of big data to be implemented in a simple way and with acceptable speed.
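
    As an illustration of the combination, parallel transform workers sharing one lookup cache, consider the toy sketch below. It is not the paper's implementation: threads stand in for distributed workers, and the "expensive" dimension lookup is simulated with a hypothetical surrogate value.

```python
from concurrent.futures import ThreadPoolExecutor

# Shared lookup cache: each dimension key is resolved at most once and the
# result is reused by every parallel transform worker.
lookup_cache = {}

def resolve_dimension(key):
    if key not in lookup_cache:            # cache miss: do the costly lookup
        lookup_cache[key] = f"dim::{key}"  # hypothetical surrogate value
    return lookup_cache[key]

def transform(row):
    return {"id": row["id"], "country": resolve_dimension(row["country"])}

extracted = [{"id": i, "country": c}
             for i, c in enumerate(["DE", "FR", "DE", "FR"])]

with ThreadPoolExecutor(max_workers=4) as pool:  # parallelized transform stage
    loaded = list(pool.map(transform, extracted))

assert loaded[0] == {"id": 0, "country": "dim::DE"}
assert len(lookup_cache) == 2  # only two distinct keys were ever resolved
```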

  8. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutional reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.

  9. On the use of human mobility proxies for modeling epidemics.

    Directory of Open Access Journals (Sweden)

    Michele Tizzoni

    2014-07-01

    Full Text Available Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control, but may be hindered by data incompleteness or unavailability. Here we explore the opportunity of using proxies for individual mobility to describe commuting flows and predict the diffusion of an influenza-like-illness epidemic. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from (i) official census surveys, (ii) proxy mobility data extracted from mobile phone call records, and (iii) the radiation model calibrated with census data. Metapopulation models defined on these countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from mobile phones and census sources are similar and highly correlated; however, a systematic overestimation of commuting traffic in the mobile phone data is observed. This leads to epidemics that spread faster than on census commuting networks once the mobile phone commuting network is considered in the epidemic model, while preserving to a high degree the order of infection of newly affected locations. Proxies' calibration affects the arrival times' agreement across different models, and the observed topological and traffic discrepancies among mobility sources alter the resulting epidemic invasion patterns. Results also suggest that proxies perform differently in approximating commuting patterns for disease spread at different resolution scales, with the radiation model showing higher accuracy than mobile phone data when the seed is central in the network, the opposite being observed for peripheral locations. Proxies
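
    How a commuting network enters such a metapopulation model can be illustrated with a toy discrete-time SIR step, where C[i][j] is the fraction of residents of patch i present in patch j (rows sum to 1). This is a deliberately minimal sketch with made-up parameters, not the models compared in the paper:

```python
def sir_step(S, I, R, C, beta=0.3, gamma=0.1):
    """One discrete-time step of a minimal commuting-coupled SIR model."""
    n = len(S)
    N = [S[k] + I[k] + R[k] for k in range(n)]
    # Effective prevalence experienced in each destination patch j:
    prev = [sum(C[i][j] * I[i] for i in range(n)) /
            sum(C[i][j] * N[i] for i in range(n)) for j in range(n)]
    # Force of infection on residents of i, averaged over where they commute:
    lam = [beta * sum(C[i][j] * prev[j] for j in range(n)) for i in range(n)]
    new_inf = [lam[i] * S[i] for i in range(n)]
    new_rec = [gamma * I[i] for i in range(n)]
    S2 = [S[i] - new_inf[i] for i in range(n)]
    I2 = [I[i] + new_inf[i] - new_rec[i] for i in range(n)]
    R2 = [R[i] + new_rec[i] for i in range(n)]
    return S2, I2, R2

# Two patches; patch 0 seeds the epidemic; 20% of each population commutes.
C = [[0.8, 0.2], [0.2, 0.8]]
S, I, R = [990.0, 1000.0], [10.0, 0.0], [0.0, 0.0]
S, I, R = sir_step(S, I, R, C)
assert I[1] > 0  # commuting exported the infection to patch 1
```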

  10. On the use of human mobility proxies for modeling epidemics.

    Science.gov (United States)

    Tizzoni, Michele; Bajardi, Paolo; Decuyper, Adeline; Kon Kam King, Guillaume; Schneider, Christian M; Blondel, Vincent; Smoreda, Zbigniew; González, Marta C; Colizza, Vittoria

    2014-07-01

    Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control, but may be hindered by data incompleteness or unavailability. Here we explore the opportunity of using proxies for individual mobility to describe commuting flows and predict the diffusion of an influenza-like-illness epidemic. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from (i) official census surveys, (ii) proxy mobility data extracted from mobile phone call records, and (iii) the radiation model calibrated with census data. Metapopulation models defined on these countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from mobile phones and census sources are similar and highly correlated, however a systematic overestimation of commuting traffic in the mobile phone data is observed. This leads to epidemics that spread faster than on census commuting networks, once the mobile phone commuting network is considered in the epidemic model, however preserving to a high degree the order of infection of newly affected locations. Proxies' calibration affects the arrival times' agreement across different models, and the observed topological and traffic discrepancies among mobility sources alter the resulting epidemic invasion patterns. Results also suggest that proxies perform differently in approximating commuting patterns for disease spread at different resolution scales, with the radiation model showing higher accuracy than mobile phone data when the seed is central in the network, the opposite being observed for peripheral locations. Proxies should therefore be
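The radiation model used above as a census-free mobility proxy has a closed form, so its commuting fluxes are straightforward to compute. A minimal sketch (the populations, distances and out-commuter counts below are invented for illustration, not data from the study):

```python
import numpy as np

def radiation_flux(pop, dist, commuters_out):
    """Expected commuting flux under the radiation model:
    T_ij = O_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    where m_i, n_j are origin/destination populations, O_i the number of
    commuters leaving i, and s_ij the population strictly closer to i
    than j is (excluding i and j themselves)."""
    n = len(pop)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            closer = dist[i] < dist[i, j]   # includes i itself, excludes j
            s_ij = pop[closer].sum() - pop[i]
            m, nj = pop[i], pop[j]
            T[i, j] = commuters_out[i] * m * nj / ((m + s_ij) * (m + nj + s_ij))
    return T

pop = np.array([1000.0, 500.0, 200.0])   # populations of three locations
dist = np.array([[0.0, 10.0, 20.0],      # symmetric distance matrix
                 [10.0, 0.0, 15.0],
                 [20.0, 15.0, 0.0]])
T = radiation_flux(pop, dist, commuters_out=np.array([300.0, 150.0, 60.0]))
```

With no intervening population between two locations, the model allocates an origin's commuters according to destination size alone; intervening population diverts flux away from more distant destinations.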

  11. An empirical analysis of the relationship between web usage and academic performance in undergraduate students

    OpenAIRE

    Hazelhurst, Scott; Johnson, Yestin; Sanders, Ian

    2011-01-01

    The use of the internet, and in particular web browsing, offers many potential advantages for educational institutions as students have access to a wide range of information previously not available. However, there are potential negative effects due to factors such as time-wasting and asocial behaviour. In this study, we conducted an empirical investigation of the academic performance and the web-usage pattern of 2153 undergraduate students. Data from university proxy logs allows us to examin...

  12. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The Web could be much more useful if computers understood something of the information on Web pages. That is the goal of the "semantic Web", a project in which, amongst others, Tim Berners-Lee, the inventor of the original Web, takes part

  13. Handbook of web surveys

    NARCIS (Netherlands)

    Bethlehem, J.; Biffignandi, S.

    2012-01-01

    Best practices to create and implement highly effective web surveys. Exclusively combining design and sampling issues, Handbook of Web Surveys presents a theoretical yet practical approach to creating and conducting web surveys. From the history of web surveys to various modes of data collection to

  14. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for Geospatial Semantic Web, VGI (Volunteered Geographic Information) and Geospatial Semantic Web; challenges of Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use the Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  15. Mobile Multicast in Hierarchical Proxy Mobile IPV6

    Science.gov (United States)

    Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran

    2013-12-01

    Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and its environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. This includes Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in PMIPv6 domain.

  16. How different proxies record precipitation variability over southeastern South America

    Energy Technology Data Exchange (ETDEWEB)

    Chiessi, Cristiano M; Mulitza, Stefan; Paetzold, Juergen; Wefer, Gerold, E-mail: chiessi@uni-bremen.d [MARUM-Center for Marine Environmental Sciences, University of Bremen, Leobener Strasse, 28359 Bremen (Germany)

    2010-03-15

    Detrending natural and anthropogenic components of climate variability is arguably an issue of utmost importance to society. To address this issue, one must rely on a comprehensive understanding of the natural variability of the climate system at the regional level. Here we explore how different proxies (e.g., stalagmite oxygen isotopic composition, pollen percentages, bulk sediment elemental ratios) record Holocene precipitation variability over southeastern South America. We found generally good agreement between the different records on both orbital and centennial time-scales. A dry mid Holocene, and a wet late Holocene, Younger Dryas and period between ~9.4 and 8.12 cal kyr BP, seem to be pervasive features. Moreover, we show that proxy-specific sensitivity can greatly improve past precipitation reconstructions.

  17. A comparison of proxy performance in coral biodiversity monitoring

    Science.gov (United States)

    Richards, Zoe T.

    2013-03-01

    The productivity and health of coral reef habitat is diminishing worldwide; however, the effect that habitat declines have on coral reef biodiversity is not known. Logistical and financial constraints mean that surveys of hard coral communities rarely collect data at the species level; hence it is important to know if there are proxy metrics that can reliably predict biodiversity. Here, the performances of six proxy metrics are compared using regression analyses on survey data from a location in the northern Great Barrier Reef. Results suggest generic richness is a strong explanatory variable for spatial patterns in species richness (explaining 82 % of the variation when measured on a belt transect). The most commonly used metric of reef health, percentage live coral cover, is not positively or linearly related to hard coral species richness. This result raises doubt as to whether management actions based on such reefscape information will be effective for the conservation of coral biodiversity.
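The proxy evaluation described above amounts to regressing species richness on a candidate metric and reading off the variance explained. A sketch of that calculation with invented belt-transect counts (not the study's data):

```python
import numpy as np

# Hypothetical belt-transect data: hard coral genera vs. species counts
genus_richness = np.array([5, 8, 12, 15, 20, 24, 30], dtype=float)
species_richness = np.array([9, 15, 24, 31, 42, 50, 66], dtype=float)

# Ordinary least-squares fit: species ~ slope * genera + intercept
slope, intercept = np.polyfit(genus_richness, species_richness, 1)
predicted = slope * genus_richness + intercept

# Coefficient of determination: the share of spatial variation in
# species richness that the proxy metric explains
ss_res = np.sum((species_richness - predicted) ** 2)
ss_tot = np.sum((species_richness - species_richness.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

An R² of 0.82, as reported for generic richness on a belt transect, would mean the proxy captures 82 % of the spatial variation in species richness.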

  18. False allegations of abuse and Munchausen syndrome by proxy.

    Science.gov (United States)

    Meadow, R

    1993-01-01

    Fourteen children from seven families are reported for whom false allegations of abuse were made by the mother. Twelve children were alleged to have incurred sexual abuse, one both sexual and physical abuse, and one physical abuse alone. Thirteen of the children had incurred, or were currently victims of, factitious illness abuse invented by the mother. The one child with no history of factitious illness abuse had a sibling who had incurred definite factitious illness abuse. The false allegations of abuse did not occur in the context of parental separation, divorce, or custody disputes concerning the children. They occurred in the context of Munchausen syndrome by proxy abuse. The age of the children, 3 to 9 years, was older than the usual age for Munchausen syndrome by proxy abuse. The mother was the source of the false allegations and was the person who encouraged or taught six of the children to substantiate allegations of sexual abuse. PMID:8503664

  19. Munchausen syndrome and Munchausen syndrome by proxy in dermatology.

    Science.gov (United States)

    Boyd, Alan S; Ritchie, Coleman; Likhari, Sunaina

    2014-08-01

    Patients with Munchausen syndrome purposefully injure themselves, often with the injection of foreign materials, to gain hospital admission and the attention associated with having a difficult-to-identify condition. Munchausen syndrome by proxy occurs when a child's caregiver, typically the mother, injures the child for the same reasons. Cases of Munchausen syndrome and Munchausen syndrome by proxy with primary cutaneous involvement appear to be rarely described in the literature suggesting either that diagnosis is not made readily or that it is, in fact, an uncommon disorder. At the center of both conditions is significant psychological pathology and treatment is difficult as many patients with Munchausen syndrome when confronted with these diagnostic possibilities simply leave the hospital. Little is known about the long-term outcome or prognosis of these patients. Copyright © 2014 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  20. A case report of Factitious Disorder (Munchausen Syndrome by proxy

    Directory of Open Access Journals (Sweden)

    Mehdi Shirzadifar

    2014-09-01

    Full Text Available Background: In Factitious Disorder by proxy, one person (the perpetrator) induces disease in another person, thereby seeking to satisfy emotional needs through the treatment process. Diagnosis of this disorder is very difficult and there is not much consensus over it among experts. Lack of timely diagnosis of this disorder may lead to serious harm to patients. Case presentation: We introduce a 19-year-old boy with mental retardation and a history of multiple admissions to psychiatric, internal medicine, urology and surgery wards. He has a 12-year-old sister and a 4-year-old brother, both with histories of multiple admissions to pediatric and internal medicine wards. The father of the family was 48 years old, with chronic mental disorder, drug dependency and a history of multiple admissions to medical, psychiatry and neurology wards. The mother of this family was diagnosed with Munchausen syndrome by proxy.

  1. Munchausen syndrome by proxy: an alarming face of child abuse.

    Science.gov (United States)

    Gehlawat, Pratibha; Gehlawat, Virender Kumar; Singh, Priti; Gupta, Rajiv

    2015-01-01

    Munchausen syndrome by proxy (MSBP) is emerging as a serious form of child abuse. It is the intentional production of illness in another, usually a child by its mother, in order to assume the sick role by proxy. It is poorly understood and a controversial diagnosis, and treatment is very difficult. We present the case of a 9-year-old boy brought to Pt. B. D. Sharma, PGIMS, Rohtak, a tertiary care hospital in northern India, by his father and paternal uncle with complaints of hematemesis since July 2012. He underwent many invasive procedures until the diagnosis of MSBP was finally considered. The examination of the blood sample confirmed the diagnosis. The child was placed in the custody of his mother. The case was reported to social services, which involved the whole family in the management.

  2. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development, the special characteristics of Web application development require the adaptation of many software engineering approaches or even development of comple...

  3. Trends in Web characteristics

    OpenAIRE

    Miranda, João; Gomes, Daniel

    2009-01-01

    Abstract—The Web is permanently changing, with new technologies and publishing behaviors emerging every day. It is important to track trends in the evolution of the Web to develop efficient tools to process its data. For instance, Web trends influence the design of browsers, crawlers and search engines. This study presents trends in the evolution of the Web derived from the analysis of 3 characterizations performed within an interval of 5 years. The Web portion used as a c...

  4. Proxy-SU(3) symmetry in heavy deformed nuclei

    Science.gov (United States)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.

  5. Law Enforcement Proxies Matter for the Law and Finance Nexus

    OpenAIRE

    Valentin Toci; Iraj Hashi

    2013-01-01

    The paper employs various measures of law enforcement to provide new evidence on the importance of legal institutions for different dimensions of financial development in transition economies. It offers a critical assessment of law enforcement measures employed in recent studies by showing that some proxies for law enforcement in the credit market may not be appropriate. Hence, care should be taken in how the quality of institutions is measured and the context which it represents. An original...

  6. Fingerprinting Reverse Proxies Using Timing Analysis of TCP Flows

    Science.gov (United States)

    2013-09-01

    ...of timing information that can translate into usable intelligence for detecting the use of reverse proxies by a network domain. 1.1 Problem Statement...websites (i.e., Sky News Arabia, Kemalist Gazete, Detroit News), and entertainment industry sites (i.e., HBO GO, LeoVegas Online Casino, FreeRide Games

  7. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaicked into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees and shrubs (excluding Tamarix)) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture was used as an additional feature along with the color, for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km² (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global positioning system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.
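A minimal sketch of the texture idea behind the classification above: one classic texture measure is the contrast of a gray-level co-occurrence matrix (GLCM), which is near zero for uniform patches and large for strongly textured ones. This pure-NumPy version (the quantization to 8 levels and the single horizontal offset are arbitrary illustrative choices, not the paper's exact method):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the gray-level co-occurrence matrix for the
    horizontal neighbor offset (dx=1, dy=0): sum_{i,j} P(i,j) * (i-j)^2."""
    # quantize the 8-bit image to `levels` gray levels
    q = np.minimum((img.astype(float) / 256.0 * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count co-occurring gray-level pairs
    glcm /= glcm.sum()           # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

flat = np.full((8, 8), 128, dtype=np.uint8)          # uniform patch: no texture
checker = np.indices((8, 8)).sum(axis=0) % 2 * 255   # checkerboard: strong texture
```

A feature like this, computed over a sliding window, can separate cover types that look identical in color alone.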

  8. Suspense, culpa y cintas de vídeo. Caché/Escondido de Michael Haneke

    Directory of Open Access Journals (Sweden)

    Miguel Martínez-Cabeza

    2011-12-01

    Full Text Available Within Michael Haneke's filmography, Caché/Escondido (2005) is the most outstanding example of the synthesis of the Austrian filmmaker's formal and ideological approaches. This article analyzes the film as a cinematic manifesto and as an exploitation of genre conventions to construct a model of the reflective spectator. Examining the way the director deploys and then abandons the techniques of suspense provides keys to explaining the nearly unanimous acclaim of critics and the much less homogeneous response of audiences. The trigger of the plot, the videotapes the Laurents receive, is a direct allusion to David Lynch's Lost Highway (1997); nevertheless, the mystery of who is behind the video surveillance loses interest relative to the feeling of guilt it awakens in the protagonist. The childhood episode of jealousy and revenge against an Algerian boy, and the adult Georges's attitude, represent an allegory of France's relationship with its colonial past, one that Haneke's narrative likewise does not close. It is precisely the formal openness with which the film (de)structures current questions, such as the boundary between individual and collective responsibility, that shapes a spectator as distanced from the diegesis as he is conscious of his own role as observer.

  9. Simulations of potential future conditions in the cache critical groundwater area, Arkansas

    Science.gov (United States)

    Rashid, Haveen M.; Clark, Brian R.; Mahdi, Hanan H.; Rifai, Hanadi S.; Al-Shukri, Haydar J.

    2015-01-01

    A three-dimensional finite-difference model for part of the Mississippi River Valley alluvial aquifer in the Cache Critical Groundwater Area of eastern Arkansas was constructed to simulate potential future conditions of groundwater flow. The objectives of this study were to test different pilot point distributions to find reasonable estimates of aquifer properties for the alluvial aquifer, to simulate flux from rivers, and to demonstrate how changes in pumping rates for different scenarios affect areas of long-term water-level declines over time. The model was calibrated using the parameter estimation code. Additional calibration was achieved using pilot points with regularization and singular value decomposition. Pilot point parameter values were estimated at a number of discrete locations in the study area to obtain reasonable estimates of aquifer properties. Nine pumping scenarios for the years 2011 to 2020 were tested and compared to the simulated water-level heads from 2010. Hydraulic conductivity values from pilot point calibration ranged between 42 and 173 m/d. Specific yield values ranged between 0.19 and 0.337. Recharge rates ranged between 0.00009 and 0.0006 m/d. The model was calibrated using 2,322 hydraulic head measurements for the years 2000 to 2010 from 150 observation wells located in the study area. For all scenarios, the volume of water depleted ranged between 5.7 and 23.3 percent, except in Scenario 2 (minimum pumping rates), in which the volume increased by 2.5 percent.
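A heavily simplified, two-dimensional sketch of the finite-difference approach described above (the grid, boundary heads, transmissivity and pumping rate are all invented; the actual study used a calibrated three-dimensional model):

```python
import numpy as np

def steady_heads(nrows, ncols, h_left, h_right, well=None, q=0.0,
                 T=100.0, tol=1e-6, max_iter=20000):
    """Iteratively relax the steady-state 2D groundwater-flow equation
    (Laplace equation with an optional point sink) on a uniform grid.
    h_left/h_right are fixed-head boundary columns; `well` is a (row, col)
    cell pumped at rate q from an aquifer with transmissivity T."""
    h = np.linspace(h_left, h_right, ncols) * np.ones((nrows, 1))
    for _ in range(max_iter):
        h_old = h.copy()
        # average of the four neighbors; mirrored (no-flow) top/bottom edges
        padded = np.pad(h, 1, mode="edge")
        h = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                    + padded[1:-1, :-2] + padded[1:-1, 2:])
        if well is not None:
            h[well] -= q / (4.0 * T)          # sink term of the 5-point stencil
        h[:, 0], h[:, -1] = h_left, h_right   # reimpose fixed heads
        if np.max(np.abs(h - h_old)) < tol:
            break
    return h

heads = steady_heads(11, 21, h_left=50.0, h_right=40.0, well=(5, 10), q=200.0)
```

Without the pumping term the scheme relaxes to the linear head gradient between the fixed boundaries; the well adds a cone of depression around its cell, the simplest analogue of the long-term water-level declines simulated in the study.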

  10. Mental and behavioral disturbances in dementia: findings from the Cache County Study on Memory in Aging.

    Science.gov (United States)

    Lyketsos, C G; Steinberg, M; Tschanz, J T; Norton, M C; Steffens, D C; Breitner, J C

    2000-05-01

    The authors report findings from a study of 5,092 community residents who constituted 90% of the elderly resident population of Cache County, Utah. The 5,092 participants, who were 65 years old or older, were screened for dementia. Based on the results of this screen, 1,002 participants (329 with dementia and 673 without dementia) underwent comprehensive neuropsychiatric examinations and were rated on the Neuropsychiatric Inventory, a widely used method for ascertainment and classification of dementia-associated mental and behavioral disturbances. Of the 329 participants with dementia, 214 (65%) had Alzheimer's disease, 62 (19%) had vascular dementia, and 53 (16%) had another DSM-IV dementia diagnosis; 201 (61%) had exhibited one or more mental or behavioral disturbances in the past month. Apathy (27%), depression (24%), and agitation/aggression (24%) were the most common in participants with dementia. These disturbances were almost four times more common in participants with dementia than in those without. Only modest differences were observed in the prevalence of mental or behavioral disturbances in different types of dementia or at different stages of illness: participants with Alzheimer's disease were more likely to have delusions and less likely to have depression. Agitation/aggression and aberrant motor behavior were more common in participants with advanced dementia. On the basis of their findings in this large community population of elderly people, the authors conclude that a wide range of dementia-associated mental and behavioral disturbances afflict the majority of individuals with dementia. Because of their frequency and their adverse effects on patients and their caregivers, these disturbances should be ascertained and treated in all cases of dementia.

  11. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the continual increase in the amount of data available online. Given the Web's dimension, users easily get lost in its rich hyper structure. Applying data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationship between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further web applications such as web search.
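A classic Web-structure-mining computation over hyperlinks is PageRank; a small power-iteration sketch on an invented three-page link graph:

```python
import numpy as np

def pagerank(links, n, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed link graph.
    `links` is a list of (source, target) hyperlink pairs among n pages."""
    out_degree = np.zeros(n)
    for src, _ in links:
        out_degree[src] += 1
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)   # random-jump baseline
        for src, dst in links:
            new[dst] += damping * rank[src] / out_degree[src]
        # dangling pages (no out-links) spread their rank uniformly
        dangling = rank[out_degree == 0].sum()
        new += damping * dangling / n
        rank = new
    return rank

# toy web: pages 0 and 2 both link to page 1; page 1 links back to 0
ranks = pagerank([(0, 1), (2, 1), (1, 0)], n=3)
```

Page 1, with two in-links, ends up with the highest rank; the hyperlink structure alone, with no page content, determines the ordering.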

  12. Marine proxy evidence linking decadal North Pacific and Atlantic climate

    Energy Technology Data Exchange (ETDEWEB)

    Hetzinger, S. [University of Toronto Mississauga, CPS-Department, Mississauga, ON (Canada); Leibniz Institute of Marine Sciences, IFM-GEOMAR, Kiel (Germany); Halfar, J. [University of Toronto Mississauga, CPS-Department, Mississauga, ON (Canada); Mecking, J.V.; Keenlyside, N.S. [Leibniz Institute of Marine Sciences, IFM-GEOMAR, Kiel (Germany); University of Bergen, Geophysical Institute and Bjerknes Centre for Climate Research, Bergen (Norway); Kronz, A. [University of Goettingen, Geowissenschaftliches Zentrum, Goettingen (Germany); Steneck, R.S. [University of Maine, Darling Marine Center, Walpole, ME (United States); Adey, W.H. [Smithsonian Institution, Department of Botany, Washington, DC (United States); Lebednik, P.A. [ARCADIS U.S. Inc., Walnut Creek, CA (United States)

    2012-09-15

    Decadal- to multidecadal variability in the extra-tropical North Pacific is evident in 20th century instrumental records and has significant impacts on Northern Hemisphere climate and marine ecosystems. Several studies have discussed a potential linkage between North Pacific and Atlantic climate on various time scales. On decadal time scales no relationship could be confirmed, potentially due to sparse instrumental observations before 1950. Proxy data are limited and no multi-centennial high-resolution marine geochemical proxy records are available from the subarctic North Pacific. Here we present an annually-resolved record (1818-1967) of Mg/Ca variations from a North Pacific/Bering Sea coralline alga that extends our knowledge in this region beyond available data. It shows for the first time a statistically significant link between decadal fluctuations in sea-level pressure in the North Pacific and North Atlantic. The record is a lagged proxy for decadal-scale variations of the Aleutian Low. It is significantly related to regional sea surface temperature and the North Atlantic Oscillation (NAO) index in late boreal winter on these time scales. Our data show that on decadal time scales a weaker Aleutian Low precedes a negative NAO by several years. This atmospheric link can explain the coherence of decadal North Pacific and Atlantic Multidecadal Variability, as suggested by earlier studies using climate models and limited instrumental data. (orig.)

  13. MUNCHAUSEN SYNDROME BY PROXY IN PEDIATRIC DENTISTRY: MYTH OR REALITY?

    Directory of Open Access Journals (Sweden)

    Veronica PINTILICIUC-ŞERBAN

    2017-06-01

    Full Text Available Background and aims: Munchausen syndrome by proxy is a condition traditionally comprising physical and mental abuse and medical neglect as a form of psychogenic maltreatment of the child, secondary to fabrication of a pediatric illness by the parent or guardian. The aim of our paper is to assess whether such a condition occurs in current pediatric dental practice and to identify situations in which the pediatric dentist should suspect this form of child abuse. Problem statement: Munchausen syndrome by proxy in pediatric dentistry may lead to serious chronic disabilities of the abused or neglected child, being one of the causes of treatment failure. Discussion: Prompt detection of such a condition should be regarded as one of the duties of the practitioner, who should be trained to report suspected cases to the governmental child protective agencies. This should be regarded as a form of child abuse and neglect, and the responsible caregiver could be held liable when such wrongful actions cause harm or endanger the child's welfare. Conclusion: Munchausen syndrome by proxy should be regarded as a reality in current pediatric dental practice, and dental teams should be trained to properly recognize, assess and manage such complex situations.

  14. Short-term indicators. Intensities as a proxy for savings

    Energy Technology Data Exchange (ETDEWEB)

    Boonekamp, P.G.M.; Gerdes, J. [ECN Policy Studies, Petten (Netherlands); Faberi, S. [Institute of Studies for the Integration of Systems ISIS, Rome (Italy)

    2013-12-15

    The ODYSSEE database on energy efficiency indicators (www.odyssee-indicators.org) has been set up to enable the monitoring and evaluation of realised energy efficiency improvements and related energy savings. The database covers the 27 EU countries as well as Norway and Croatia, and data are available from 1990 on. This work contributes to the growing need for quantitative monitoring and evaluation of the impacts of energy policies and measures, both at the EU and national level, e.g. due to the Energy Services Directive and the proposed Energy Efficiency Directive. Because the underlying data become available only after some time, the savings figures are not always available in a timely manner. This is especially true for the ODEX efficiency indices per sector, which rely on a number of indicators. Therefore, there is a need for so-called short-term indicators that become available shortly after the end of the year for which data are needed. The short-term indicators do not replace the savings indicators but function as a proxy for the savings in the most recent year. This proxy value is available faster, but will be less accurate than the savings indicators themselves. The short-term indicators have to be checked regularly against the ODEX indicators in order to see whether they can still function as a proxy.
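The intensity-as-proxy idea can be made concrete with a little arithmetic: savings in the most recent year can be approximated as the energy that would have been used at the previous year's intensity minus the energy actually used. The figures below are invented for illustration:

```python
def savings_proxy(energy_prev, activity_prev, energy_now, activity_now):
    """Rough savings estimate from energy intensities: how much less
    energy was used this year than if last year's intensity (energy per
    unit of activity) had applied to this year's activity level."""
    intensity_prev = energy_prev / activity_prev
    return intensity_prev * activity_now - energy_now

# a sector used 500 PJ at activity index 100; now 510 PJ at index 108
saved = savings_proxy(500.0, 100.0, 510.0, 108.0)  # 30 PJ saved
```

Activity grew 8 % while consumption grew only 2 %, so the intensity proxy attributes the 30 PJ gap to efficiency improvement; a structural shift between sectors would bias this, which is why the proxy must be checked against the full ODEX indices.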

  15. A unified proxy for ENSO and PDO variability since 1650

    Directory of Open Access Journals (Sweden)

    S. McGregor

    2010-01-01

    Full Text Available In this manuscript we have attempted to consolidate the common signal in previously defined proxy reconstructions of the El Niño-Southern Oscillation into one individual proxy titled the Unified ENSO Proxy (UEP). While correlating well with the majority of input reconstructions, the UEP provides better representation of observed indices of ENSO, discrete ENSO events and documented historical chronologies of ENSO than any of these input ENSO reconstructions. Further to this, the UEP also provides a means to reconstruct the PDO/IPO multi-decadal variability of the Pacific Ocean, as the low-pass filtered UEP displays multi-decadal variability that is consistent with the 20th century variability of the PDO and IPO. The UEP is then used to describe changes in ENSO variability which have occurred since 1650, focusing on changes in ENSO's variance, multi-year ENSO events, PDO-like multi-decadal variability and the effects of volcanic and solar forcing on ENSO. We find that multi-year El Niño events similar to the 1990–1995 event have occurred several times over the last 3 1/2 centuries. Consistent with earlier studies, we find that volcanic forcing can induce a statistically significant change in the mean state of ENSO in the year of the eruption and a doubling of the probability of an El Niño (La Niña) event occurring in the year of (three years after) the eruption.
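A minimal sketch of the low-pass filtering step described above, using a simple centered moving average on a synthetic proxy series (the 60-year and 4-year components and the 13-year window are illustrative choices, not the paper's exact filter):

```python
import numpy as np

def low_pass(series, window=13):
    """Centered moving-average low-pass filter; a simple stand-in for the
    filtering used to isolate multi-decadal variability in a proxy index."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")  # hold end values at the edges
    return np.convolve(padded, kernel, mode="valid")

years = np.arange(1650, 2001)
# synthetic proxy: slow "PDO-like" 60-yr oscillation plus fast ENSO-band noise
slow = np.sin(2 * np.pi * (years - 1650) / 60.0)
fast = 0.5 * np.sin(2 * np.pi * (years - 1650) / 4.0)
smoothed = low_pass(slow + fast, window=13)
```

The 13-year average strongly attenuates the interannual (ENSO-band) component while passing the multi-decadal one, which is how a single annually resolved proxy can also serve as a PDO/IPO reconstruction.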

  16. Evaluating Ground-based Proxies for Solar Irradiance Variation

    Science.gov (United States)

    Oegerle, William (Technical Monitor); Jordan, Stuart

    2003-01-01

    In order to determine which ground-based proxies are best for evaluating solar irradiance variation before the advent of space observations, it is necessary to test these proxies against space observations. We have tested sunspot number, total sunspot area, and sunspot umbral area against the Nimbus-7 measurements of total solar irradiance variation covering the eleven-year period 1980–1990. The umbral area yields the best correlation and the total sunspot area yields the poorest. Reasons for expecting the umbral area to yield the best correlation are given, the statistical procedure followed to obtain the results is described, and the value of determining the best proxy is discussed. The latter is based upon the availability of an excellent database from the Greenwich Observatory obtained over the period 1876-1976, which can be used to estimate the total solar irradiance variation before sensitive space observations were available. The ground-based observations used were obtained at the Coimbra Solar Observatory. The analysis was done at Goddard using these data and data from the Nimbus-7 satellite.
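    The procedure described — correlating each candidate proxy series against the irradiance record and ranking the candidates — can be sketched as follows. The numbers here are synthetic stand-ins, not the Nimbus-7 or Coimbra values:

```python
# Rank candidate ground-based proxies by Pearson correlation with an
# irradiance record.  All data below are synthetic stand-ins.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

irradiance = [1365.2, 1365.0, 1364.8, 1365.1, 1365.4, 1365.3, 1364.9]
proxies = {
    "sunspot_number":  [95, 110, 140, 105, 70, 80, 120],
    "total_spot_area": [900, 1200, 1500, 1100, 600, 700, 1300],
    "umbral_area":     [150, 180, 230, 160, 90, 110, 200],
}

# Best proxy = largest absolute correlation with the irradiance record.
ranking = sorted(proxies, key=lambda k: abs(pearson(irradiance, proxies[k])),
                 reverse=True)
```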

  17. Shell architecture: a novel proxy for paleotemperature reconstructions?

    Science.gov (United States)

    Milano, Stefania; Nehrke, Gernot; Wanamaker, Alan D., Jr.; Witbaard, Rob; Schöne, Bernd R.

    2017-04-01

    Mollusk shells are unique high-resolution paleoenvironmental archives. Their geochemical properties, such as oxygen isotope composition (δ18Oshell) and element-to-calcium ratios, are routinely used to estimate past environmental conditions. However, the existing proxies have certain drawbacks that can affect the robustness of paleoreconstructions. For instance, the estimation of water temperature in brackish and near-shore environments can be biased by the dependence of δ18Oshell on multiple environmental variables (water temperature and δ18Owater). Likewise, the environmental signature can be masked by physiological processes responsible for the incorporation of trace elements into the shell. The present study evaluated the use of shell structural properties as alternative environmental proxies. The sensitivity of shell architecture at the µm and nm scale to the environment was tested. In particular, the relationship between water temperature and microstructure formation was investigated. To enable the detection of potential structural changes, the shells of the marine bivalves Cerastoderma edule and Arctica islandica were analyzed with Scanning Electron Microscopy (SEM), nanoindentation and Confocal Raman Microscopy (CRM). These techniques allow a quantitative approach to microstructural analysis. Our results show that water temperature induces a clear response in shell microstructure. A significant alteration in the morphometric characteristics and crystallographic orientation of the structural units was observed. Our pilot study suggests that shell architecture records environmental information and has the potential to be used as a novel temperature proxy in near-shore and open ocean habitats.

  18. Heinrich event 4 characterized by terrestrial proxies in southwestern Europe

    Directory of Open Access Journals (Sweden)

    J. M. López-García

    2013-05-01

    Full Text Available Heinrich event 4 (H4) is well documented in the North Atlantic Ocean as a cooling event that occurred between 39 and 40 ka. Deep-sea cores around the Iberian Peninsula coastline have been analysed to characterize the H4 event, but there are no data on the terrestrial response to this event. Here we present for the first time an analysis of terrestrial proxies for characterizing the H4 event, using the small-vertebrate assemblage (comprising small mammals, squamates and amphibians) from Terrassa Riera dels Canyars, an archaeo-palaeontological deposit located on the seaboard of the northeastern Iberian Peninsula. This assemblage shows that the H4 event is characterized in northeastern Iberia by harsher and drier terrestrial conditions than today. Our results were compared with other proxies such as pollen, charcoal, phytolith, avifauna and large-mammal data available for this site, as well as with the general H4 event fluctuations and with other sites where H4 and the previous and subsequent Heinrich events (H5 and H3) have been detected in the Mediterranean and Atlantic regions of the Iberian Peninsula. We conclude that the terrestrial proxies follow the same patterns as the climatic and environmental conditions detected by the deep-sea cores at the Iberian margins.

  19. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: a delay-dependent (asynchronous) error, EA, and a delay-independent (synchronous) error, ES. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
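    The synchronous/asynchronous error split can be illustrated with a minimal sketch for a 1-D convection-diffusion equation: an explicit FTCS update on a periodic grid, plus an "asynchronous" variant in which grid points at two artificial processor interfaces read one-step-old neighbor values. The grid, time step, and two-block decomposition are illustrative assumptions, not the authors' setup, and this sketch does not implement the proxy-equation correction itself:

```python
# 1-D convection-diffusion u_t + c u_x = nu u_xx, periodic FTCS scheme.
# "Asynchronous" variant: points adjacent to two artificial processor
# interfaces read the neighbor's value from the previous time step.
import math

N, L = 64, 2 * math.pi
c, nu = 1.0, 0.05
dx = L / N
dt = 0.02            # satisfies dt < dx**2 / (2*nu) and dt < 2*nu / c**2
steps = 100

def step(u, u_prev, delayed):
    """One FTCS update; `delayed` maps i -> neighbor index read from u_prev."""
    new = [0.0] * N
    for i in range(N):
        left, right = (i - 1) % N, (i + 1) % N
        ul = u_prev[left] if delayed.get(i) == left else u[left]
        ur = u_prev[right] if delayed.get(i) == right else u[right]
        adv = -c * dt / (2 * dx) * (ur - ul)
        dif = nu * dt / dx ** 2 * (ur - 2 * u[i] + ul)
        new[i] = u[i] + adv + dif
    return new

def run(delayed):
    u = [math.sin(i * dx) for i in range(N)]   # initial condition sin(x)
    u_prev = u[:]
    for _ in range(steps):
        u, u_prev = step(u, u_prev, delayed), u
    return u

# Exact solution for the sin(x) initial condition after steps*dt time units.
T = steps * dt
exact = [math.exp(-nu * T) * math.sin(i * dx - c * T) for i in range(N)]
err = lambda u: max(abs(a - b) for a, b in zip(u, exact))

e_sync = err(run({}))                                # no delayed reads: ES only
e_async = err(run({0: N - 1, N // 2: N // 2 - 1}))   # two interface points lag
```

    Comparing `e_sync` with `e_async` isolates the extra error introduced by the delayed reads, which is the quantity the proxy-equation approach aims to drive below the synchronous truncation error.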

  20. Efficient Conditional Proxy Re-encryption with Chosen-Ciphertext Security

    NARCIS (Netherlands)

    Weng, Jiang; Yang, Yanjiang; Tang, Qiang; Deng, Robert H.; Bao, Feng

    Recently, a variant of proxy re-encryption, named conditional proxy re-encryption (C-PRE), has been introduced. Compared with traditional proxy re-encryption, C-PRE enables the delegator to implement fine-grained delegation of decryption rights, and thus is more useful in many applications. In this
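    The underlying proxy transformation that C-PRE makes condition-dependent can be illustrated with a toy ElGamal-based (BBS98-style, bidirectional) re-encryption over a tiny prime-order group. This is not the paper's C-PRE construction — C-PRE additionally binds the re-encryption key to a condition so that only ciphertexts tagged with that condition can be transformed — and the parameters below are deliberately insecure toy values:

```python
# Toy BBS98-style (ElGamal) proxy re-encryption over a tiny prime-order group.
# Deliberately insecure parameters, for illustration only.

p, q, g = 23, 11, 4          # g generates the order-q subgroup of Z_p*

a, b = 3, 5                  # Alice's and Bob's secret keys
r, m = 7, 13                 # ephemeral randomness, message in Z_p*

# Encrypt to Alice:  c = (m * g^r, g^(a*r))
c1, c2 = (m * pow(g, r, p)) % p, pow(g, a * r, p)

# Re-encryption key rk = b / a mod q (requires both parties' cooperation
# in this bidirectional scheme); the proxy applies rk without learning m.
rk = (b * pow(a, -1, q)) % q

# Proxy transforms the second component: g^(a*r) -> g^(b*r)
c2_b = pow(c2, rk, p)

# Bob decrypts: recover g^r = (g^(b*r))^(1/b mod q), then m = c1 / g^r
gr = pow(c2_b, pow(b, -1, q), p)
recovered = (c1 * pow(gr, -1, p)) % p   # equals m
```

    Fine-grained delegation in C-PRE amounts to restricting when the proxy's exponentiation step above may be applied, based on a condition embedded in both the ciphertext and the re-encryption key.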