WorldWideScience

Sample records for web proxy cache

  1. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o...

  2. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...

  3. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of Internet clients keeps growing over time, and as it does, the response time of Internet access becomes ever slower. To help speed up access, a cache on the proxy server is required. This research aims to analyze the performance of a proxy server on an Internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on a proxy server was designed using a simulation model of an Internet network consisting of a Web server, proxy ...

  4. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Because cache space is limited and costly compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU) and the Randomized Policy may discard important pages just before they are used. Furthermore, conventional algorithms cannot be well optimized, since intelligently evicting a page before replacement requires some decision making. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifier's prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection embedded with NB (IWSS embedded NB) and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over using NB with all features, using BFS and using IWSS embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over using the J48 classifier with all features, using IWSS embedded NB and using BFS, respectively. IWSS embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO do; it reduces the computation time of NB by 0.1383 and of J48 by 2.998.
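
    As a rough illustration of the first conventional policy named above, a minimal LRU cache sketch in Python (capacity is counted in objects for simplicity; a real proxy cache such as Squid also accounts for object sizes):

    ```python
    from collections import OrderedDict

    class LRUCache:
        """Least Recently Used eviction: discard the entry untouched the longest."""

        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of cached objects
            self.entries = OrderedDict()  # least recently used entry first

        def get(self, url):
            if url not in self.entries:
                return None                    # cache miss
            self.entries.move_to_end(url)      # mark as most recently used
            return self.entries[url]

        def put(self, url, page):
            if url in self.entries:
                self.entries.move_to_end(url)
            self.entries[url] = page
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict the LRU entry
    ```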

  5. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

    Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.

  6. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could load the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories are still a controversial issue: in some studies strong coherency is far too expensive to be used in the Web context, while in others it can be maintained at low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show differences in the outcome of the simulation of a Web cache depending on the workload being used and on the probability distribution used to approximate updates to the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
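
    As a rough illustration of the weak-coherency approach discussed above, a client-side sketch that serves a cached copy while an assumed time-to-live holds, then revalidates with a conditional request once it expires. The TTL value, cache layout and names are assumptions, not taken from the study.

    ```python
    import time
    import urllib.error
    import urllib.request

    TTL = 300   # assumed freshness lifetime in seconds (weak coherency)
    cache = {}  # url -> (body, last_modified_header, fetch_time)

    def fetch(url):
        entry = cache.get(url)
        if entry and time.time() - entry[2] < TTL:
            return entry[0]                    # still "fresh": no origin traffic
        request = urllib.request.Request(url)
        if entry and entry[1]:
            # Expired: revalidate instead of re-downloading the whole document.
            request.add_header("If-Modified-Since", entry[1])
        try:
            with urllib.request.urlopen(request) as response:
                body = response.read()
                cache[url] = (body, response.headers.get("Last-Modified"), time.time())
                return body
        except urllib.error.HTTPError as err:
            if err.code == 304 and entry:      # origin says our copy is coherent
                cache[url] = (entry[0], entry[1], time.time())
                return entry[0]
            raise
    ```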

  7. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  8. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object to a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and for new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications.

  9. Here be web proxies

    DEFF Research Database (Denmark)

    Weaver, Nicholas; Kreibich, Christian; Dam, Martin

    2014-01-01

    …,000 clients that include a novel proxy location technique based on traceroutes of the responses to TCP connection establishment requests, which provides additional clues regarding the purpose of the identified web proxies. Overall, we see 14% of Netalyzr-analyzed clients with results that suggest the presence of web proxies.

  10. Web proxy auto discovery for the WLCG

    CERN Document Server

    Dykstra, D; Blumenfeld, B; De Salvo, A; Dewhurst, A; Verguilov, V

    2017-01-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids regis...

  11. New distributive web-caching technique for VOD services

    Science.gov (United States)

    Kim, Iksoo; Woo, Yoseop; Hwang, Taejune; Choi, Jintak; Kim, Youngjune

    2002-12-01

    At present, some of the most popular services on the Internet are on-demand services, including VOD, EOD and NOD. The main problems for an on-demand service are the excessive load on the server and the insufficiency of network resources: service providers require powerful, expensive servers, while clients face long end-to-end delays and network congestion. This paper presents a new distributive web-caching technique for fluent VOD services using distributed proxies in a Head-end Network (HNET). The HNET consists of a Switching-Agent (SA) as a control node and some Head-end Nodes (HEN) as proxies, with clients connected to a HEN; each HEN forms a LAN. Clients request VOD services from the server through a HEN and the SA. The SA operates as the heart of the HNET: all operations using the proposed distributive caching technique are performed under its control. This technique stores parts of a requested video on the corresponding HENs when clients connected to each HEN request an identical video; the clients then access those HENs (proxies) alternately to acquire the video streams, which leads to equally loaded proxies (HENs). We adopt a cache replacement strategy that combines LRU and LFU, removes streams from other HENs before server streams, and replaces the first block of a video last in order to reduce end-to-end delay.

  12. Web Proxy Auto Discovery for the WLCG

    Science.gov (United States)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses
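
    For illustration, a minimal PAC file of the kind such a WPAD server could return. PAC files are JavaScript by specification, which is why this one example departs from Python; the proxy host names and the network range below are invented placeholders, not actual WLCG configuration.

    ```javascript
    // FindProxyForURL is the entry point mandated by the PAC standard.
    function FindProxyForURL(url, host) {
        // Only proxy plain-http traffic such as CVMFS and Frontier requests.
        if (url.substring(0, 5) === "http:") {
            // Clients inside the (invented) site network use the local squids,
            // with a second squid and a direct connection as fallbacks.
            if (isInNet(myIpAddress(), "192.0.2.0", "255.255.255.0")) {
                return "PROXY squid1.example.org:3128; " +
                       "PROXY squid2.example.org:3128; DIRECT";
            }
        }
        return "DIRECT";  // everything else bypasses the proxies
    }
    ```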

  13. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and showed significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, the operational impact on site services and the applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  14. Servidor proxy caché: comprensión y asimilación tecnológica

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Internet access providers usually include the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. For system administrators it is difficult to choose the configuration of a caching proxy server, since it is necessary to decide the values to be used for different variables. This article presents how the process of understanding and technological assimilation of the caching proxy service, a service of high organizational impact, was approached. This article is also a product of the research project "Análisis de configuraciones de servidores proxy caché", in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  15. Smart caching based on mobile agent of power WebGIS platform.

    Science.gov (United States)

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information systems are developing in an intensive, platform-based, distributed direction as the power grid expands and information technology improves. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then we study its caching technology in detail, which contains the dynamic display cache model, a caching structure based on mobile agents, and the cache data model. We designed experiments with different data volumes to compare the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, in the same hardware environment, the response time of WebGIS both with and without the caching model increased as data volume grew, but the larger the data, the greater the performance improvement of WebGIS with the proposed caching model.

  16. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
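
    A toy sketch of the second, on-demand approach (the real AAA implementation is an XRootd C++ plugin; the block size and the remote_read callback here are assumptions): reads are served from locally cached fixed-size blocks, and only missing blocks are fetched from the remote source.

    ```python
    BLOCK = 1 << 20  # assumed cache block size: 1 MiB

    class BlockCachingProxy:
        """Serve byte ranges of a remote file, caching fixed-size blocks on demand."""

        def __init__(self, remote_read):
            # remote_read(offset, size) -> bytes stands in for the network
            # fetch from the remote storage (e.g. the data federation).
            self.remote_read = remote_read
            self.blocks = {}  # block index -> cached bytes

        def read(self, offset, size):
            data = bytearray()
            first = offset // BLOCK
            last = (offset + size - 1) // BLOCK
            for index in range(first, last + 1):
                if index not in self.blocks:   # miss: download just this block
                    self.blocks[index] = self.remote_read(index * BLOCK, BLOCK)
                data += self.blocks[index]
            skip = offset - first * BLOCK      # trim to the requested range
            return bytes(data[skip:skip + size])
    ```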

  17. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web...

  18. Dynamic web cache publishing for IaaS clouds using Shoal

    International Nuclear Information System (INIS)

    Gable, Ian; Chester, Michael; Berghaus, Frank; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan; Armstrong, Patrick; Charbonneau, Andre

    2014-01-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid cache.
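
    A sketch of how a client might use such a REST interface; the endpoint path and the response fields are assumptions for illustration, not Shoal's documented API.

    ```python
    import json
    import urllib.request

    SHOAL_SERVER = "http://shoal.example.org"  # placeholder server address

    def nearest_squid():
        # Hypothetical endpoint: ask the Shoal server for the Squid caches
        # closest to us, assumed to be ranked by the server.
        with urllib.request.urlopen(SHOAL_SERVER + "/nearest") as response:
            squids = json.load(response)
        # Assumed response shape: a list of {"hostname": ..., "port": ...}.
        best = squids[0]
        return "http://{}:{}".format(best["hostname"], best["port"])
    ```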

  19. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using the different Web APIs of the Google Drive, Dropbox and Facebook Internet services.

  20. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...

  1. Analisa Pemanfaatan Proxy Server Sebagai Media Filtering Dan Caching Pada Jaringan Komputer

    OpenAIRE

    -, Yuisar; Yulianti, Liza; H, Yanolanda Suzantri

    2015-01-01

    This research aims to implement controlled and safe use of the Internet at an agency, at minimal cost, using the Ubuntu 14.04 Linux operating system: dividing bandwidth according to the busy times of servers and Internet clients, speeding up pages opened for the second time and later, blocking pornographic sites that tend to contain spyware, and accelerating audio and video streaming. This study uses experimental research; experiments were conducted on the performance of the proxy server tha...

  2. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo...

  3. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    The Web is the most widely used communication mechanism today, owing to its flexibility and the almost endless range of tools for browsing it. As a result, around a million pages are added to it every day. It is thus the largest library of textual and multimedia resources ever seen, albeit a library distributed across all the servers that hold that information. As a reference source, it is important that data retrieval be efficient. For this there is Web caching, a technique by which some Web data is stored temporarily on local servers so that it does not have to be requested from the remote server every time a user asks for it. However, the amount of memory available on local servers to store that information is limited: one must decide which Web objects to store and which not. This gives rise to several replacement policies, which are explored in this article. Using an experiment with real Web requests, we compare the performance of these techniques.
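
    In the spirit of the experiment described above, a small self-contained simulation comparing the hit ratios of two replacement policies on a synthetic Zipf-like trace (the trace parameters are arbitrary; the article itself uses real Web requests).

    ```python
    import random
    from collections import Counter, OrderedDict

    def zipf_trace(n_requests, n_objects, skew=1.0):
        # Zipf-like popularity: object k is requested with weight 1/k^skew.
        weights = [1.0 / (k ** skew) for k in range(1, n_objects + 1)]
        return random.choices(range(n_objects), weights=weights, k=n_requests)

    def hit_ratio_lru(trace, capacity):
        cache, hits = OrderedDict(), 0
        for obj in trace:
            if obj in cache:
                hits += 1
                cache.move_to_end(obj)          # refresh recency
            else:
                cache[obj] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)   # evict least recently used
        return hits / len(trace)

    def hit_ratio_lfu(trace, capacity):
        cache, freq, hits = set(), Counter(), 0
        for obj in trace:
            freq[obj] += 1
            if obj in cache:
                hits += 1
            elif len(cache) < capacity:
                cache.add(obj)
            else:                               # evict least frequently used
                cache.remove(min(cache, key=freq.__getitem__))
                cache.add(obj)
        return hits / len(trace)

    trace = zipf_trace(100000, 5000)
    print("LRU hit ratio:", hit_ratio_lru(trace, 500))
    print("LFU hit ratio:", hit_ratio_lfu(trace, 500))
    ```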

  4. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things

    Directory of Open Access Journals (Sweden)

    Floris Van den Abeele

    2017-07-01

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

  5. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things.

    Science.gov (United States)

    Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen

    2017-07-11

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

  6. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    This paper presents the architecture of a web site with different techniques used to optimize the performance of loading the web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS and Microsoft SQL Server. Caching data is one technique used by browsers, by the web servers themselves or by proxy servers. Caching is done without the users' knowledge, and the user must still be provided with the most recent information from the server. This means that the caching mechanism has to be aware of any modification of data on the server. Different kinds of information are presented in an e-commerce site in relation to products, such as images, product codes, descriptions, properties or stock.

  7. CacheCard : Caching static and dynamic content on the NIC

    NARCIS (Netherlands)

    Bos, Herbert; Huang, Kaiming

    2009-01-01

    CacheCard is a NIC-based cache for static and dynamic web content, designed in a way that allows for implementation on simple devices like NICs. It requires neither understanding of the way dynamic data is generated, nor execution of scripts on the cache. By monitoring file system activity and potential...

  8. SIDECACHE: Information access, management and dissemination framework for web services.

    Science.gov (United States)

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
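
    A condensed sketch of two of the services described above, local caching plus rate control toward an upstream source; the interval-based throttle and all names are illustrative assumptions, not SideCache's actual interface.

    ```python
    import time
    import urllib.request

    class RateLimitedCache:
        """Cache upstream responses locally and throttle requests upstream."""

        def __init__(self, min_interval=1.0):
            self.min_interval = min_interval  # assumed minimum spacing (seconds)
            self.last_request = 0.0
            self.store = {}                   # url -> cached body

        def get(self, url):
            if url in self.store:
                return self.store[url]        # served locally, upstream untouched
            wait = self.min_interval - (time.time() - self.last_request)
            if wait > 0:
                time.sleep(wait)              # honor the upstream rate restriction
            with urllib.request.urlopen(url) as response:
                body = response.read()
            self.last_request = time.time()
            self.store[url] = body
            return body
    ```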

  9. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    Science.gov (United States)

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging the existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy is comprised of server and client to forward messages for different communication environments using the virtual resources which include the server for the message sender and the client for the message receiver. We design the proxy for the Open Connectivity Foundation network where the virtual resources are discovered by the clients as Open Connectivity Foundation resources. The virtual resources represent the resources which expose services in the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access services using the consistent communication protocol in the Open Connectivity Foundation network. For discovering the resources to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.

  10. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks

    Directory of Open Access Journals (Sweden)

    Wenquan Jin

    2018-05-01

    The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging the existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy is comprised of server and client to forward messages for different communication environments using the virtual resources which include the server for the message sender and the client for the message receiver. We design the proxy for the Open Connectivity Foundation network where the virtual resources are discovered by the clients as Open Connectivity Foundation resources. The virtual resources represent the resources which expose services in the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access services using the consistent communication protocol in the Open Connectivity Foundation network. For discovering the resources to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.

  11. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

    Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application by avoiding multiple read operations for the same data. This paper addresses the caching problem from a pattern perspective. Both caching and caching strategies, like primed and on-demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a way to avoid re-acquiring expensive resources. The Primed Cache pattern is applied in situations in which the set of required resources, or at least a part of it, can be predicted, while the Demand Cache pattern is applied whenever the required resource set cannot be predicted or is unfeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.
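
    A minimal sketch of the two strategies (the names and structure are illustrative, not the paper's code): the demand cache acquires a resource on first use, while the primed cache pre-loads a predicted set up front.

    ```python
    class DemandCache:
        """Acquire a resource the first time it is requested, then reuse it."""

        def __init__(self, acquire):
            self.acquire = acquire  # expensive acquisition, e.g. a database read
            self.store = {}

        def get(self, key):
            if key not in self.store:
                self.store[key] = self.acquire(key)  # pay the cost only once
            return self.store[key]

    class PrimedCache(DemandCache):
        """Pre-load the part of the resource set that can be predicted."""

        def __init__(self, acquire, predicted_keys):
            super().__init__(acquire)
            for key in predicted_keys:  # buffer the predicted resources up front
                self.store[key] = acquire(key)
    ```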

  12. Adaptive proxy map server for efficient vector spatial data rendering

    Science.gov (United States)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching, to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by creating map images enriched with earthquake seismic data records.

  13. Multi Service Proxy: Mobile Web Traffic Entitlement Point in 4G Core Network

    Directory of Open Access Journals (Sweden)

    Dalibor Uhlir

    2015-05-01

    The core part of state-of-the-art mobile networks is composed of several standard elements like the GGSN (Gateway General Packet Radio Service Support Node), SGSN (Serving GPRS Support Node), F5 or MSP (Multi Service Proxy). Each node handles network traffic from a slightly different perspective and with various goals. In this article we focus only on the MSP, its key features and especially the related security issues. The MSP handles all HTTP traffic in the mobile network and is therefore a suitable point for the implementation of different optimization functions, e.g. to reduce the volume of data generated by YouTube or similar HTTP-based services. This article introduces the basic features and functions of the MSP as well as the means of remote access and the security mechanisms of this key element in state-of-the-art mobile networks.

  14. A method cache for Patmos

    DEFF Research Database (Denmark)

    Degasperi, Philipp; Hepp, Stefan; Puffitsch, Wolfgang

    2014-01-01

    For real-time systems we need time-predictable processors. This paper presents a method cache as a time-predictable solution for instruction caching. The method cache caches whole methods (or functions) and simplifies worst-case execution time analysis. We have integrated the method cache in the time-predictable processor Patmos. We evaluate the method cache with a large set of embedded benchmarks. Most benchmarks show a good hit rate for a method cache size in the range between 4 and 16 KB.

  15. Software trace cache

    OpenAIRE

    Ramírez Bellido, Alejandro; Larriba Pey, Josep; Valero Cortés, Mateo

    2005-01-01

    We explore the use of compiler optimizations, which optimize the layout of instructions in memory. The target is to enable the code to make better use of the underlying hardware resources regardless of the specific details of the processor/architecture in order to increase fetch performance. The Software Trace Cache (STC) is a code layout algorithm with a broader target than previous layout optimizations. We target not only an improvement in the instruction cache hit rate, but also an increas...

  16. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  17. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    … it is important to classify memory accesses as either cache hit or cache miss. The addresses of instruction fetches are known statically and static cache hit/miss classification is possible for the instruction cache. The access to data that is cached in the data cache is harder to predict statically. Several...

  18. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorith...

  19. On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu

    Science.gov (United States)

    Krishnappa, Dilip Kumar; Khemmarat, Samamon; Gao, Lixin; Zink, Michael

    Lately, researchers have been looking at ways to reduce the delay of video playback through mechanisms like prefetching and caching for Video-on-Demand (VoD) services. The usage of prefetching and caching also has the potential to reduce the amount of network bandwidth usage, as most popular requests are served from a local cache rather than the server containing the original content. In this paper, we investigate the advantages of having such a prefetching and caching scheme for a free hosting service of professionally created video (movies and TV shows) named "hulu". We look into the advantages of using a prefetching scheme where the most popular videos of the week, as provided by the hulu website, are prefetched, and compare this approach with a conventional LRU caching scheme with limited storage space and with a combined scheme of prefetching and caching. Results from our measurement and analysis show that employing a basic caching scheme at the proxy yields a hit ratio of up to 77.69%, but requires storage of about 236GB. Further analysis shows that a prefetching scheme where the top-100 popular videos of the week are downloaded to the proxy yields a hit ratio of 44% with a storage requirement of 10GB. A LRU caching scheme with a storage limitation of 20GB can achieve a hit ratio of 55% but downloads 4713 videos to achieve such a high hit ratio, compared to 100 videos in the prefetching scheme, whereas a scheme with both prefetching and caching with the same storage yields a hit ratio of 59% with a download requirement of 4439 videos. We find that employing prefetching along with caching, with a trade-off on storage, will yield a better hit ratio and bandwidth savings than individual caching or prefetching schemes.

  20. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks

    OpenAIRE

    Wenquan Jin; DoHyeun Kim

    2018-01-01

    The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging the existing web...

  1. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    … completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more … addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while keeping the time-predictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In the design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses. Furthermore...

  2. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents it is customary to disable client caching causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents...

  3. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache. Practical examples will help you to get set up quickly and easily. This book is aimed at system administrators and web developers who need to scale websites without tossing money on a large and costly infrastructure. It's assumed that you have some knowledge of the HTTP protocol, how browsers and servers communicate with each other, and basic Linux systems.

  4. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permutin...

  5. MESI Cache Coherence Simulator for Teaching Purposes

    OpenAIRE

    Gómez Luna, Juan; Herruzo Gómez, Ezequiel; Benavides Benítez, José Ignacio

    2009-01-01

    Nowadays, computational systems (multi- and uniprocessors) need to deal with the cache coherence problem, and there are several techniques to solve it; the MESI cache coherence protocol is one of them. This paper presents a simulator of the MESI protocol, which is used for teaching cache memory coherence in computer systems with a hierarchical memory system and for explaining the process of cache memory location in multilevel cache memory systems. The paper shows a d...
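
    To make the protocol concrete, a minimal sketch (not the paper's simulator) of the MESI state transitions for a single cache line; the other_copies_exist flag stands in for the shared signal a reading cache observes on the bus.

    ```python
    # MESI states: "M" Modified, "E" Exclusive, "S" Shared, "I" Invalid.

    def local_read(state, other_copies_exist):
        if state == "I":        # miss: load the line from memory or a peer
            return "S" if other_copies_exist else "E"
        return state            # M, E and S all satisfy reads locally

    def local_write(state):
        # From I or S the cache first broadcasts an invalidation to the
        # other caches; in every case the line ends up Modified.
        return "M"

    def snoop_remote_read(state):
        if state in ("M", "E"):  # supply the data (M writes back), then share
            return "S"
        return state             # S stays S, I stays I

    def snoop_remote_write(state):
        return "I"               # another cache takes exclusive ownership

    # Example: one line touched by two caches.
    s = local_read("I", other_copies_exist=False)  # -> "E"
    s = local_write(s)                             # -> "M" (silent upgrade)
    s = snoop_remote_read(s)                       # -> "S"
    ```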

  6. Caching at the Mobile Edge: a Practical Implementation

    DEFF Research Database (Denmark)

    Poderys, Justas; Artuso, Matteo; Lensbøl, Claus Michael Oest

    2018-01-01

    Thanks to recent advances in mobile networks, it is becoming increasingly popular to access heterogeneous content from mobile terminals. There are, however, unique challenges in mobile networks that affect the perceived quality of experience (QoE) at the user end. One such challenge is the higher latency that users typically experience in mobile networks compared to wired ones. Cloud-based radio access networks with content caches at the base stations are seen as a key contributor in reducing the latency required to access content and thus improve the QoE at the mobile user terminal. In this paper … for the mobile user obtained by caching content at the base stations. This is quantified with a comparison to non-cached content by means of ping tests (10–11% shorter times), a higher response rate for web traffic (1.73–3.6 times higher), and an improvement in the jitter (6% reduction).

  7. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.

  8. A Cache Considering Role-Based Access Control and Trust in Privilege Management Infrastructure

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shaomin; WANG Baoyi; ZHOU Lihua

    2006-01-01

    PMI (Privilege Management Infrastructure) is used to perform access control to resources in an E-commerce or E-government system. With the ever-increasing need for secure transactions, the need for systems that offer a wide variety of QoS (quality-of-service) features is also growing. In order to improve the QoS of a PMI system, a cache based on RBAC (Role-Based Access Control) and trust is proposed. Our system is realized based on Web services. How to design the cache based on RBAC and trust in the access control model is described in detail, the algorithm for querying role permissions in the cache and adding records to the cache is dealt with, and the policy for updating the cache is also introduced.

  9. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays...

  10. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications

  11. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.

  12. dCache: Big Data storage for HEP communities and beyond

    International Nuclear Information System (INIS)

    Millar, A P; Bernardt, C; Fuhrmann, P; Mkrtchyan, T; Petersen, A; Schwank, K; Behrmann, G; Litvintsev, D; Rossi, A

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of continually evolving storage technologies, with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  13. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: world wide web; data caching; internet traffic; web page access.

  14. Sincronización de estadísticas entre servidores y proxies

    OpenAIRE

    Catullo, María José

    1999-01-01

    A caching proxy server collects all the pages that pass through it, so when a user asks for a page the proxy server, if it has it, retrieves it from its cache without needing to access the real server. In this way bandwidth is saved and responses are sped up. Every request that arrives at a proxy or at a server is stored in log files. There are several types of log files, which are used for performance auditing, monitoring, control of...

  15. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical location, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of the GPS in smartphones you can go directly into nature, search for information with your smartphone, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about physical geography highlights, are a very good opportunity for this: interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, the focus was on Bavaria and a certain feature of earth caches. At the end the authors show the limits and potentials of the use of earth caches and give some remarks for the future.

  16. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  17. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  18. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  19. Research on Cache Placement in ICN

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-08-01

    Ubiquitous in-network caching is one of the key features of Information Centric Networking (ICN); together with the receiver-driven content retrieval paradigm, ICN better supports content distribution, multicast, mobility, etc. The cache placement strategy is crucial to improving the utilization of cache space and reducing the occupation of link bandwidth. Most of the literature on caching policies considers overall cost and bandwidth but ignores the limits of node cache capacity. This paper proposes a G-FMPH algorithm which takes into account constraints on both the link bandwidth and the cache capacity of nodes. Our algorithm aims at minimizing the overall cost of caching contents afterwards. The simulation results prove that the proposed algorithm performs better.

  20. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows flexible sharing of cached files among unauthenticated users, i.e. unlike most distributed file systems CryptoCache does not require a global authentication framework. Files are encrypted when they are transferred over the network and while stored on untrusted servers. The system uses public key...

  1. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
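
    To give the flavor of the cache-oblivious style, the divide-and-conquer transpose below never mentions a block or cache size; the recursion simply shrinks subproblems until they fit in every level of the hierarchy. This is a textbook illustration of the paradigm, not code from the surveyed paper:

        # Minimal sketch: cache-oblivious transpose B = A^T by divide and conquer.
        def co_transpose(A, B, r0, r1, c0, c1):
            if r1 - r0 <= 16 and c1 - c0 <= 16:   # small base case
                for i in range(r0, r1):
                    for j in range(c0, c1):
                        B[j][i] = A[i][j]
            elif r1 - r0 >= c1 - c0:              # split the longer dimension
                mid = (r0 + r1) // 2
                co_transpose(A, B, r0, mid, c0, c1)
                co_transpose(A, B, mid, r1, c0, c1)
            else:
                mid = (c0 + c1) // 2
                co_transpose(A, B, r0, r1, c0, mid)
                co_transpose(A, B, r0, r1, mid, c1)

        n, m = 40, 70
        A = [[i * m + j for j in range(m)] for i in range(n)]
        B = [[0] * n for _ in range(m)]
        co_transpose(A, B, 0, n, 0, m)
        assert all(B[j][i] == A[i][j] for i in range(n) for j in range(m))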

  2. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  3. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  4. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  5. Test data generation for LRU cache-memory testing

    OpenAIRE

    Evgeni, Kornikhin

    2009-01-01

    System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents that realize the given behavior of an assembly program (its cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes it in detail only for the basic types: fully associative caches and direct-mapped caches.
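
    The notion of "given behavior" can be made concrete with a tiny simulator: a fully associative LRU cache replays an access sequence and reports hits and misses. Test generation in the spirit of the paper would then search for initial contents that realize a desired hit/miss string; the simulator below is a generic illustration, not the paper's constraint solver:

        # Minimal sketch: fully associative LRU cache simulator.
        from collections import OrderedDict

        def lru_trace(accesses, ways, initial=()):
            cache = OrderedDict((tag, None) for tag in initial)  # oldest first
            outcome = []
            for tag in accesses:
                if tag in cache:
                    cache.move_to_end(tag)         # refresh recency on a hit
                    outcome.append("hit")
                else:
                    if len(cache) >= ways:
                        cache.popitem(last=False)  # evict least recently used
                    cache[tag] = None
                    outcome.append("miss")
            return outcome

        # Initial contents chosen so the trace realizes: hit, miss, hit, miss.
        print(lru_trace(["a", "x", "a", "b"], ways=2, initial=("b", "a")))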

  6. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory...

  7. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2010-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...

  8. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  9. The dCache scientific storage cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  10. Efficient Mobile Client Caching Supporting Transaction Semantics

    Directory of Open Access Journals (Sweden)

    IlYoung Chung

    2000-05-01

    In mobile client-server database systems, caching of frequently accessed data is an important technique to reduce contention on the narrow-bandwidth wireless channel. As the server in mobile environments may not have any information about the state of its clients' caches (a stateless server), broadcasting lists of updated data to numerous concurrent mobile clients is an attractive approach. In this paper, a caching policy is proposed to maintain cache consistency for mobile computers. The proposed protocol adopts asynchronous (non-periodic) broadcasting as the cache invalidation scheme, and supports transaction semantics in mobile environments. With the asynchronous broadcasting approach, the proposed protocol improves throughput by reducing transaction aborts at low communication cost. We study the performance of the protocol by means of simulation experiments.
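
    The invalidation side of such a protocol is simple to picture: whenever an invalidation report is broadcast, each client drops the stale copies and aborts any running transaction that has already read them. The sketch below is a generic illustration of that flavor, not the paper's protocol:

        # Minimal sketch: client-side handling of a broadcast invalidation report.
        class MobileClientCache:
            def __init__(self):
                self.data = {}          # item id -> cached value
                self.read_set = set()   # items read by the running transaction
                self.aborted = False

            def on_invalidation_broadcast(self, updated_ids):
                for item in updated_ids:
                    self.data.pop(item, None)  # drop stale copies
                # Abort if the running transaction read anything now stale.
                if self.read_set & set(updated_ids):
                    self.aborted = True

        client = MobileClientCache()
        client.data = {"x": 1, "y": 2}
        client.read_set = {"x"}
        client.on_invalidation_broadcast(["x"])
        print(client.data, client.aborted)  # {'y': 2} True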

  11. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many...

  12. Archeological Excavations at the Wanapum Cache Site

    International Nuclear Information System (INIS)

    T. E. Marceau

    2000-01-01

    This report was prepared to document the actions taken to locate and excavate an abandoned Wanapum cache located east of the 100-H Reactor area. Evidence (i.e., glass, ceramics, metal, and wood) obtained from shovel and backhoe excavations at the Wanapum cache site indicates that the storage caches were found. The highly fragmented condition of these materials argues that the contents of the caches were collected or destroyed prior to the caches being burned and buried by mechanical equipment. While the fiber nets would have been destroyed by fire, the specialized stone weights would have remained behind. The fact that the site might have been gleaned of desirable artifacts prior to its demolition is consistent with the account by Riddell (1948) for a contemporary village site. Unfortunately, fishing equipment, owned by and used on behalf of the village, that might have returned to productive use has been irretrievably lost.

  13. Application and Network-Cognizant Proxies - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Antonio Ortega; Daniel C. Lee

    2003-03-24

    Current networks show increasing heterogeneity both in terms of their bandwidths/delays and the applications they are required to support. This is a trend that is likely to intensify in the future, as real-time services, such as video, become more widely available and networking access over wireless links becomes more widespread. For this reason the authors propose that application-specific proxies, intermediate network nodes that broker the interactions between server and client, will become an increasingly important network element. These proxies will allow adaptation to changes in network characteristics without requiring a direct intervention of either server or client. Moreover, it will be possible to locate these proxies strategically at those points where a mismatch occurs between subdomains (for example, a proxy could be placed so as to act as a bridge between a reliable network domain and an unreliable one). This design philosophy favors scalability in the sense that the basic network infrastructure can remain unchanged while new functionality can be added to proxies, as required by the applications. While proxies can perform numerous generic functions, such as caching or security, the authors concentrate here on media-specific, and in particular video-specific, tasks. The goal of this project was to demonstrate that application- and network-specific knowledge at a proxy can improve overall performance, especially under changing network conditions. The work performed to address these issues is summarized below. Particular effort was spent in studying caching techniques and video classification to enable DiffServ delivery. Other work included analysis of traffic characteristics, optimized media scheduling, coding techniques based on multiple description coding, and the use of proxies to reduce computation costs. This work covered much of what was originally proposed, but with a necessarily reduced scope.

  14. A Distributed Cache Update Deployment Strategy in CDN

    Science.gov (United States)

    E, Xinhua; Zhu, Binjie

    2018-04-01

    The CDN management system distributes content objects to the edge of the internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. A cache strategy was designed in which content diffuses effectively within the cache group, so that more content is stored in the cache, improving the group hit rate.

  15. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response times. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching the sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries, called WATCHMAN, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
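
    The abstract names the ingredients of the profit metric without giving the formula; one plausible reading is profit = reference rate x execution cost / size, so that small, hot, expensive-to-recompute result sets are favored. The sketch below applies such an assumed metric to combined admission and eviction decisions; it is an illustration of the idea, not WATCHMAN's algorithm:

        # Minimal sketch of profit-based admission/eviction for query result sets.
        def profit(entry):
            # Assumed reading of the metric: hotter, costlier, smaller sets win.
            return entry["ref_rate"] * entry["exec_cost"] / entry["size"]

        def admit(cache, candidate, capacity):
            used = sum(e["size"] for e in cache)
            victims = sorted(cache, key=profit)  # least profitable first
            freed, evict = 0, []
            while used + candidate["size"] - freed > capacity and victims:
                v = victims.pop(0)
                freed += v["size"]
                evict.append(v)
            fits = used + candidate["size"] - freed <= capacity
            # Admit only if the newcomer beats everything it would displace.
            if fits and all(profit(candidate) > profit(v) for v in evict):
                for v in evict:
                    cache.remove(v)
                cache.append(candidate)
                return True
            return False

        cache = [{"name": "q1", "ref_rate": 0.1, "exec_cost": 5, "size": 40}]
        new = {"name": "q2", "ref_rate": 2.0, "exec_cost": 8, "size": 30}
        print(admit(cache, new, capacity=60))  # True: q1 is evicted for q2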

  16. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

    We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table – the result of the query execution. These queries have a special form called a "universal relational query", based on three fundamental relational algebra operations: selection, projection and natural join. Such a form is the closest one to natural language, and the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed against the client's local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in the new user query with those of the queries already executed into the cache. Such a comparison can be performed analytically, without additional database queries. The method can also identify the data lacking in the cache and execute the query on the server only for those data, again using the analytical approach, which distinguishes our paper from existing technologies. We propose four theorems for testing the required conditions: the conditions of the first and third theorems let us establish whether the required data exist in the cache, while the second and fourth theorems state conditions for executing queries from the cache only. The problem of cache data actualization is not discussed in this paper; however, it can be solved by cataloging queries on the server and serving them with triggers in background mode. The article is published in the author's wording.
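
    For the common case of conjunctive range restrictions, the truth-space comparison reduces to interval containment, which is decidable analytically with no extra database round trip. The sketch below checks whether a new query's restriction is implied by a cached one; it is a simplified illustration of the idea, while the paper's four theorems cover the general conditions:

        # Minimal sketch: can a range query be answered from the client cache alone?
        def contained(new_pred, cached_pred):
            # Predicates map attribute -> (low, high) bounds.
            # The cached query must not restrict attributes the new one ignores...
            if any(attr not in new_pred for attr in cached_pred):
                return False
            # ...and must be at least as wide on every attribute it shares.
            for attr, (lo, hi) in new_pred.items():
                c_lo, c_hi = cached_pred.get(attr, (float("-inf"), float("inf")))
                if lo < c_lo or hi > c_hi:
                    return False
            return True

        cached = {"age": (18, 65)}
        print(contained({"age": (30, 40)}, cached))  # True: cache suffices
        print(contained({"age": (30, 70)}, cached))  # False: ask the server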

  17. Cache management of tape files in mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Ma Nan; Yu Chuansong; Chen Gang

    2006-01-01

    This paper proposes a group-cooperative caching policy that matches the characteristics of tapes and the requirements of the high energy physics domain. The policy integrates the advantages of traditional local caching and cooperative caching on the basis of a cache model. It divides the cache into independent groups; each group is made up of cooperating disks on the network. The paper also analyzes the directory management, update algorithm and cache consistency of the policy. Experiments show that the policy meets the requirements of data processing and mass storage in the high energy physics domain very well. (authors)

  18. Optimizing Maintenance of Constraint-Based Database Caches

    Science.gov (United States)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., to achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.

  19. Reducing Competitive Cache Misses in Modern Processor Architectures

    OpenAIRE

    Prisagjanec, Milcho; Mitrevski, Pece

    2017-01-01

    The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, become the main reasons for an increased number of competitive cache misses and performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we make an attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This tec...

  20. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e · log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes are limited to be powers of 2. A modified version of the van Emde Boas layout is proposed whose expected number of block transfers between any two levels of the memory hierarchy is arbitrarily close to [lg e + O(lg lg B / lg B)] log_B N + O(1); the expectation is taken over the random placement of the first element of the structure in memory. As searching in the Disk Access Model (DAM) can be performed in log_B N + 1 block transfers, this result shows a separation between the 2-level DAM and cache-oblivious memory-hierarchy models. By extending the DAM model to k levels, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost...

  1. Greatly improved cache update times for conditions data with Frontier/Squid

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave; Lueking, Lee, E-mail: dwd@fnal.go [Computing Division, Fermilab, Batavia, IL (United States)

    2010-04-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
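
    Because the mechanism is plain HTTP, it is easy to demonstrate outside Frontier. The sketch below issues a conditional GET; a 304 Not Modified reply is what lets a Squid cache revalidate an entry cheaply instead of refetching the full payload. The URL and header value are illustrative only:

        # Minimal sketch of an HTTP conditional GET using If-Modified-Since.
        import urllib.error
        import urllib.request

        url = "http://example.org/conditions/payload"  # illustrative URL
        req = urllib.request.Request(url)
        req.add_header("If-Modified-Since", "Wed, 21 Oct 2015 07:28:00 GMT")
        try:
            with urllib.request.urlopen(req) as resp:
                print("changed, new body received:", resp.status)
        except urllib.error.HTTPError as err:
            if err.code == 304:
                print("unchanged, the cached copy is still valid")
            else:
                raise

    Squid forwards the header upstream when revalidating, which is why most revalidations can be answered with a tiny 304 response rather than the full query result.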

  2. Greatly improved cache update times for conditions data with Frontier/Squid

    International Nuclear Information System (INIS)

    Dykstra, Dave; Lueking, Lee

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.

  3. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination.We propose and experimentally evaluate an extension of the state caching method for general state...... exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states and unmet states)....

  4. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  5. Private Web Browsing

    National Research Council Canada - National Science Library

    Syverson, Paul F; Reed, Michael G; Goldschlag, David M

    1997-01-01

    .... These are both kept confidential from network elements as well as external observers. Private Web browsing is achieved by unmodified Web browsers using anonymous connections by means of HTTP proxies...

  6. Cache-aware network-on-chip for chip multiprocessors

    Science.gov (United States)

    Tatas, Konstantinos; Kyriacou, Costas; Dekoulis, George; Demetriou, Demetris; Avraam, Costas; Christou, Anastasia

    2009-05-01

    This paper presents the hardware prototype of a Network-on-Chip (NoC) for a chip multiprocessor that provides support for cache coherence, cache prefetching and cache-aware thread scheduling. A NoC with support to these cache related mechanisms can assist in improving systems performance by reducing the cache miss ratio. The presented multi-core system employs the Data-Driven Multithreading (DDM) model of execution. In DDM thread scheduling is done according to data availability, thus the system is aware of the threads to be executed in the near future. This characteristic of the DDM model allows for cache aware thread scheduling and cache prefetching. The NoC prototype is a crossbar switch with output buffering that can support a cache-aware 4-node chip multiprocessor. The prototype is built on the Xilinx ML506 board equipped with a Xilinx Virtex-5 FPGA.

  7. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al at S&P 2015), which is a cache timing attack against AES or similar algorithms in virtualized environments. This paper applies variants of this cache timing attack to Intel's latest generation of microprocessors. It enables a spy-process to recover cryptographic keys, interacting with the victim processes only over TCP. The threat model is a logically separated but CPU co-located attacker with root privileges. We report successful and practically verified applications of this attack against a wide range of microarchitectures, from a two-core Nehalem processor (i5-650) to two-core Haswell (i7-4600M) and four-core Skylake processors (i7-6700). The attack...

  8. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required from users and a central cache of the datum are required to improve performance.

  9. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it req...

  10. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    ... the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  11. Analysis of preemption costs for the stack cache

    DEFF Research Database (Denmark)

    Naji, Amine; Abbaspour, Sahar; Brandner, Florian

    2018-01-01

    ... the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  12. A detailed GPU cache model based on reuse distance theory

    NARCIS (Netherlands)

    Nugteren, C.; Braak, van den G.J.W.; Corporaal, H.; Bal, H.E.

    2014-01-01

    As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour. On...
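
    Reuse distance, the number of distinct addresses touched between two consecutive accesses to the same address, is the quantity such models build on. A small generic reference implementation follows (quadratic, for clarity; the paper's GPU model adds far more detail):

        # Minimal sketch: reuse distances of a memory access trace.
        def reuse_distances(trace):
            last_seen = {}
            dists = []
            for i, addr in enumerate(trace):
                if addr in last_seen:
                    # Distinct addresses since the previous access to addr.
                    dists.append(len(set(trace[last_seen[addr] + 1 : i])))
                else:
                    dists.append(None)  # cold access: infinite distance
                last_seen[addr] = i
            return dists

        print(reuse_distances(["a", "b", "c", "a", "b"]))  # [None, None, None, 2, 2]

    In a fully associative LRU cache holding C lines, an access hits exactly when its reuse distance is below C, which is what makes the metric predictive.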

  13. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme.
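
    The core of the scheme as described is a single random test at cache-entrance time: hot blocks, being written often, pass it sooner or later, while cold blocks rarely get in. The sketch below encodes that reading; the admission probability and the surrounding structure are assumptions for illustration, not the authors' code:

        # Minimal sketch of probability-based cache admission, ProCache-style.
        import random
        from collections import OrderedDict

        class ProbabilityCache:
            def __init__(self, capacity, p_admit=0.25):
                self.capacity = capacity
                self.p_admit = p_admit
                self.entries = OrderedDict()  # LRU order for eviction

            def write(self, block, data):
                if block in self.entries:
                    self.entries.move_to_end(block)
                    self.entries[block] = data
                    return True                # already cached: update in place
                if random.random() >= self.p_admit:
                    return False               # failed the entrance test: bypass
                if len(self.entries) >= self.capacity:
                    self.entries.popitem(last=False)
                self.entries[block] = data
                return True

        cache = ProbabilityCache(capacity=2)
        hot_admitted = any(cache.write("hot", i) for i in range(20))
        print(hot_admitted)  # True with probability 1 - 0.75**20 (about 99.7%)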

  14. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  15. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can be analyzed statically; by performing path searches on the call-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  16. Cache-Conscious Radix-Decluster Projections

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); N.J. Nes (Niels); M.L. Kersten (Martin)

    2004-01-01

    textabstractAs CPUs become more powerful with Moore's law and memory latencies stay constant, the impact of the memory access performance bottleneck continues to grow on relational operators like join, which can exhibit random access on a memory region larger than the hardware caches. While

  17. Jemen - the Proxy War

    Directory of Open Access Journals (Sweden)

    Magdalena El Ghamari

    2015-12-01

    The military operation in Yemen is a significant departure from Saudi Arabia's foreign policy traditions and customs. Riyadh has always relied on three strategies to pursue its interests abroad: wealth, a global network of Muslim education, and diplomacy and mediation. The term "proxy war" has experienced a new popularity in stories on the Middle East. A proxy war is one in which two opposing countries avoid direct war and instead support combatants that serve their interests; on some occasions, one country is a direct combatant while the other supports its enemy. Various news sources began using the term to describe the conflict in Yemen immediately, as if on cue, after Saudi Arabia launched its bombing campaign against Houthi targets in Yemen on 25 March 2015. This is why the author tries to answer the following questions: has the Yemen conflict devolved into a proxy war, and who is fighting whom in Yemen's proxy war? The research area covers the problem of proxy war in the Middle East. Any real discussion of proxy war must begin with the fact that the United States and its NATO allies opened the floodgates for regional proxy wars with two major wars for regime change: in Iraq and Libya. Those two destabilising wars provided opportunities and motives for Sunni states across the Middle East to pursue their own sectarian and political power objectives through "proxy war".

  18. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache thrashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis. However, before implementing such an object cache, an empirical analysis of different organization forms is needed. We use a cross-profiling technique based on aspect-oriented programming in order to evaluate different object cache organizations with standard Java benchmarks. From the evaluation we conclude that field access exhibits some temporal locality, but almost no spatial locality. Therefore, filling long cache lines on a miss just introduces a high miss penalty without increasing the hit rate enough to make up for the increased miss penalty. For an object cache, it is more efficient to fill...

  19. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    dCache is a high performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an RFC 1831 and RFC 2203 compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache's NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache's NFSv4.1 implementation, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  20. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

    A big data platform for equipment condition assessment is built for comprehensive analysis. The platform has various application demands, which can be divided by response time into offline, interactive and real-time types. For real-time applications, data processing efficiency is important. In general, a data cache is one of the most efficient ways to improve query time; however, big data caching differs from traditional data caching. In this paper we propose a distributed cache management framework of big data for equipment condition assessment. It consists of three parts: the cache structure, the cache replacement algorithm and the cache placement algorithm. The cache structure is the basis of the latter two algorithms. Based on the framework and algorithms, we make full use of the fact that only some valuable data are accessed during a period of time, and place related data on neighboring nodes, which largely reduces network transmission cost. We also validate the performance of our proposed approaches through extensive experiments. The results demonstrate that the proposed cache replacement algorithm and cache management framework achieve a higher hit rate or lower query time than the LRU algorithm and the round-robin algorithm.

  1. Hybrid caches: design and data management

    OpenAIRE

    Valero Bresó, Alejandro

    2013-01-01

    Cache memories have been usually implemented with Static Random-Access Memory (SRAM) technology since it is the fastest electronic memory technology. However, this technology consumes a high amount of leakage currents, which is a major design concern because leakage energy consumption increases as the transistor size shrinks. Alternative technologies are being considered to reduce this consumption. Among them, embedded Dynamic RAM (eDRAM) technology provides minimal area and le...

  2. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    A cooperative caching approach improves data accessibility and reduces query latency in a Mobile Ad hoc Network (MANET). Maintaining the cache is a challenging issue in a large MANET due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the best replaceable data, based on neighbours' interest and the fitness value of cached data, when storing newly arrived data. This work also elects the ideal cluster head (CH) using the meta-heuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and their speed increase.

  3. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since the on-chip caches usually consume a significant amount of power, power and energy consumption have become among the most important design constraints, and the cache is one of the most attractive targets for power reduction. This paper presents an approach to reducing the dynamic power consumption of a CPU cache using an inverted cache architecture. Our approach tries to reduce dynamic write power dissipatio...

  4. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.

  5. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    Science.gov (United States)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMPs) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also stem from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, namely cache lines residing in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required; a fast and reliable shared memory management system is therefore needed to execute algorithms that process vast amounts of specimen images. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near-distance promotion, and the concept of ownership in the eviction policy to effectively mitigate cache thrashing and to avoid resource stealing among the processors.
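
    Paraphrasing the description above, MI2PP inserts a new line at the middle of the recency stack instead of at the top and promotes a line toward the top by two positions on each hit. The sketch below encodes that paraphrase; details of the published policy may differ:

        # Minimal sketch of a middle-insertion, two-position-promotion stack.
        # Index 0 is the top of the recency stack; the bottom entry is evicted.
        class MI2PPStack:
            def __init__(self, ways):
                self.ways = ways
                self.stack = []

            def access(self, tag):
                if tag in self.stack:      # hit: promote two positions
                    i = self.stack.index(tag)
                    self.stack.insert(max(0, i - 2), self.stack.pop(i))
                    return True
                if len(self.stack) >= self.ways:
                    self.stack.pop()       # miss: evict the bottom entry
                self.stack.insert(len(self.stack) // 2, tag)  # middle insertion
                return False

        s = MI2PPStack(ways=4)
        for t in ["a", "b", "c", "d", "b"]:
            s.access(t)
        print(s.stack)  # ['b', 'd', 'c', 'a']

    Compared with LRU's top-of-stack insertion, middle insertion keeps a streaming, use-once workload from flushing the upper half of the stack.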

  6. Data Cleaning Methods for Client and Proxy Logs

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Edmonds, A.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    In this paper we present our experiences with the cleaning of Web client and proxy usage logs, based on a long-term browsing study with 25 participants. A detailed clickstream log, recorded using a Web intermediary, was combined with a second log of user interface actions, which was captured by a

  7. Comparison of proxies on Linux and Windows to speed up website browsing

    Directory of Open Access Journals (Sweden)

    Dafwen Toresa

    2017-05-01

    At present, very many organizations (educational, governmental, and private companies) try to limit their users' access to the internet, because the available bandwidth begins to feel slow when many users browse the internet. Speeding up browsing access is a major concern, addressed by using proxy server technology. Deploying a proxy server requires considering the server operating system, and it is not yet known on which operating system the tools perform best. It is therefore necessary to analyze proxy server performance on different operating systems: the Linux operating system with the Squid tool and the Windows operating system with the WinRoute tool. This study compares browsing speed from the client computers; the browser used on the client computers is Mozilla Firefox. The study used 2 client computers, each performing 5 test runs of accessing/browsing the target web through the proxy server. From the tests performed, it is concluded that a proxy server on the Linux operating system with Squid delivers faster browsing to clients, using the same web browser and different client computers, than a proxy server on the Windows operating system with WinRoute. Keywords: Proxy, Bandwidth, Browsing, Squid, Winroute

  8. Witnessing entanglement by proxy

    International Nuclear Information System (INIS)

    Bäuml, Stefan; Bruß, Dagmar; Kampermann, Hermann; Huber, Marcus; Winter, Andreas

    2016-01-01

    Entanglement is a ubiquitous feature of low temperature systems and believed to be highly relevant for the dynamics of condensed matter properties and quantum computation even at higher temperatures. The experimental certification of this paradigmatic quantum effect in macroscopic high temperature systems is constrained by the limited access to the quantum state of the system. In this paper we show how macroscopic observables beyond the mean energy of the system can be exploited as proxy witnesses for entanglement detection. Using linear and semi-definite relaxations we show that all previous approaches to this problem can be outperformed by our proxies, i.e. entanglement can be certified at higher temperatures without access to any local observable. For an efficient computation of proxy witnesses one can resort to a generalised grand canonical ensemble, enabling entanglement certification even in complex systems with macroscopic particle numbers. (paper)

  9. Legal Aspects of the Web.

    Science.gov (United States)

    Borrull, Alexandre Lopez; Oppenheim, Charles

    2004-01-01

    Presents a literature review that covers the following topics related to legal aspects of the Web: copyright; domain names and trademarks; linking, framing, caching, and spamdexing; patents; pornography and censorship on the Internet; defamation; liability; conflict of laws and jurisdiction; legal deposit; and spam, i.e., unsolicited mails.

  10. Corvid re-caching without 'theory of mind': a model.

    Science.gov (United States)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2012-01-01

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  11. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  12. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans has been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  13. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related...

  14. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit

  15. Munchausen by Proxy Syndrome

    Science.gov (United States)

... Munchausen by proxy syndrome (MBPS) is a relatively rare form of child abuse that involves the exaggeration or fabrication of illnesses or symptoms by a primary caretaker. Also known as "medical child abuse," MBPS ...

  16. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
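
    The spatial-locality argument above can be made quantitative with a small sketch: the fraction of each fetched cache line that a strided loop actually uses. The line and element sizes below are assumed values chosen only for illustration.

      # Illustration of cache-line utilization as a function of stride.
      # LINE_BYTES and ELEM_BYTES are assumptions for the sketch.

      LINE_BYTES = 64   # assumed cache-line size
      ELEM_BYTES = 8    # assumed element size (e.g., a double)

      def line_utilization(stride_elems):
          """Fraction of each fetched line that is actually used."""
          bytes_per_access = stride_elems * ELEM_BYTES
          if bytes_per_access >= LINE_BYTES:
              return ELEM_BYTES / LINE_BYTES   # one useful element per line
          return 1.0 / stride_elems            # every stride-th element used

      for stride in (1, 2, 8, 16):
          print(f"stride {stride:2d}: {line_utilization(stride):.0%} of each line used")

    Unit stride uses 100% of every line; once the stride spans a whole line, only one element per fetched line is useful, which is the motivation for the zero- or unit-stride recommendation above.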

  17. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  18. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.
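
    The feedback loop the record describes — collect raw pressure data, derive a statistic, decide whether to modify operation — can be sketched in a few lines of Python. The threshold, the choice of statistic, and the throttling action are assumptions of the sketch, not the patented mechanism's specifics.

      # Sketch of a version-pressure feedback loop, per the summary above.

      from statistics import mean

      PRESSURE_THRESHOLD = 0.75  # assumed trigger level

      def should_throttle(raw_pressure_samples):
          """Return True if average version pressure warrants intervention."""
          return mean(raw_pressure_samples) > PRESSURE_THRESHOLD

      samples = [0.6, 0.8, 0.9, 0.7]  # per-thread pressure on cache lines
      if should_throttle(samples):
          print("reduce speculative threads to lower version pressure")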

  19. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

Processing data in a distributed environment has found application in many fields of science (Nuclear and Particle Physics (NPP), astronomy, and biology, to name a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. A series of simulations was done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001–90% of the complete data-set and a low-watermark within 0–90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we will discuss the different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the choice for the best algorithm in the context of Physics Data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storage with partial replicas of the entire data-set.
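
    A minimal trace-driven simulation in the spirit of this study is sketched below: an LRU cache that, once full, evicts least-recently-used entries down to a low-watermark. The exact eviction policy and the toy trace are assumptions; the study's real inputs were experiment access records.

      # Trace-driven hit-rate simulation with LRU eviction to a low-watermark.

      from collections import OrderedDict

      def simulate(trace, capacity, low_watermark):
          """Return the hit rate of an LRU cache that, when over capacity,
          evicts least-recently-used entries until low_watermark is reached."""
          cache, hits = OrderedDict(), 0
          for item in trace:
              if item in cache:
                  hits += 1
                  cache.move_to_end(item)
              else:
                  cache[item] = True
                  if len(cache) > capacity:
                      while len(cache) > low_watermark:
                          cache.popitem(last=False)  # evict LRU entries
          return hits / len(trace)

      trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1]
      print(f"hit rate: {simulate(trace, capacity=3, low_watermark=2):.2f}")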

  20. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  1. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users’ logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.

  2. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users' logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.
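
    The structure described in these two records — a static cache preloaded with the historically most popular queries, backed by a dynamic (LRU) auxiliary cache — can be sketched directly. All identifiers below (TwoLevelCache, backend, and the sizes) are illustrative assumptions, not the paper's implementation.

      # Sketch of a two-level (static + dynamic) query-result cache.

      from collections import OrderedDict, Counter

      class TwoLevelCache:
          def __init__(self, query_log, static_size, dynamic_size):
              top = Counter(query_log).most_common(static_size)
              self.static = {q: f"result({q})" for q, _ in top}  # never evicted
              self.dynamic = OrderedDict()                        # LRU auxiliary
              self.dynamic_size = dynamic_size

          def get(self, query, backend):
              if query in self.static:
                  return self.static[query]
              if query in self.dynamic:
                  self.dynamic.move_to_end(query)
                  return self.dynamic[query]
              result = backend(query)          # miss: ask the search backend
              self.dynamic[query] = result
              if len(self.dynamic) > self.dynamic_size:
                  self.dynamic.popitem(last=False)
              return result

      cache = TwoLevelCache(["a", "a", "b", "a", "c"], static_size=1, dynamic_size=2)
      print(cache.get("a", backend=lambda q: f"result({q})"))  # static hit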

  3. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of caches

  4. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System.

    Science.gov (United States)

    Xiong, Lian; Yang, Liu; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-14

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay.
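
    The mining step summarized above — extract high-popularity files and files frequently accessed together from historical access information — can be sketched as simple frequency and co-occurrence counting. The thresholds and session representation are assumptions of the sketch, not RSSD's actual algorithm.

      # Sketch of popularity and association mining over access history.

      from collections import Counter
      from itertools import combinations

      def plan_replicas(sessions, pop_threshold, assoc_threshold):
          """sessions: lists of files accessed together (spatiotemporal locality)."""
          popularity = Counter(f for s in sessions for f in s)
          cooccur = Counter()
          for s in sessions:
              cooccur.update(combinations(sorted(set(s)), 2))
          popular = {f for f, n in popularity.items() if n >= pop_threshold}
          associated = {pair for pair, n in cooccur.items() if n >= assoc_threshold}
          return popular, associated   # candidates for replication/placement

      sessions = [["a", "b"], ["a", "b"], ["a", "c"]]
      print(plan_replicas(sessions, pop_threshold=2, assoc_threshold=2))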

5. Method and system for cache memory modeling

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2010-01-01

A method for modeling a data cache of a target processor, in order to simulate the behavior of said data cache during the execution of software code on a platform comprising said target processor, where said simulation is carried out on a native platform having a processor different from the target processor that comprises said data cache to be modeled, and where said modeling is performed by execution on said native platform...

  6. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  7. Randomized Caches Considered Harmful in Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Jan Reineke

    2014-06-01

Full Text Available We investigate the suitability of caches with randomized placement and replacement in the context of hard real-time systems. Such caches have been claimed to drastically reduce the amount of information required by static worst-case execution time (WCET) analysis, and to be an enabler for measurement-based probabilistic timing analysis. We refute these claims and conclude that with prevailing static and measurement-based analysis techniques, caches with deterministic placement and least-recently-used replacement are preferable over randomized ones.

  8. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    Science.gov (United States)

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897

  9. A Novel Cache Invalidation Scheme for Mobile Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we propose a strategy of maintaining cache consistency in wireless mobile environments, which adds a validation server (VS) into the GPRS network, utilizes the location information of mobile terminal in SGSN located at GPRS backbone, just sends invalidation information to mobile terminal which is online in accordance with the cached data, and reduces the information amount in asynchronous transmission. This strategy enables mobile terminal to access cached data with very little computing amount, little delay and arbitrary disconnection intervals, and excels the synchronous IR and asynchronous state (AS) in the total performances.
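
    The validation-server idea described above can be sketched as a server that tracks which online terminals cache which items and sends invalidations only to those terminals, only for the data they actually hold. The data structures and interface below are illustrative assumptions.

      # Sketch of a validation server (VS) for selective cache invalidation.

      from collections import defaultdict

      class ValidationServer:
          def __init__(self):
              self.cached_by = defaultdict(set)   # item -> terminals caching it
              self.online = set()                 # terminals currently reachable

          def register(self, terminal, items, is_online=True):
              for item in items:
                  self.cached_by[item].add(terminal)
              if is_online:
                  self.online.add(terminal)

          def invalidate(self, item):
              """Notify only online terminals that hold the changed item."""
              targets = self.cached_by[item] & self.online
              for t in targets:
                  print(f"invalidate {item!r} -> terminal {t}")
              return targets

      vs = ValidationServer()
      vs.register("MT1", {"stock:A"})
      vs.register("MT2", {"stock:A"}, is_online=False)
      vs.invalidate("stock:A")   # reaches MT1 only, reducing transmitted data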

  10. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to (lg e + O(lg lg B / lg B)) log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and the cache-oblivious model.
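
    For readability, the bound quoted in this record can be restated in LaTeX notation; this is a direct transcription of the expression above, not a new result:

      \[
        \left(\lg e + O\!\left(\frac{\lg\lg B}{\lg B}\right)\right)\log_B N + O(1),
        \qquad \text{approaching } \lg e \approx 1.443 \text{ as } B \to \infty,
      \]
      % versus the \(\log_B N + O(1)\) block transfers achievable in the
      % (2-level) disk-access machine (DAM) model.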

  11. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  12. Munchausen syndrome by proxy

    Directory of Open Access Journals (Sweden)

    Jovanović Aleksandar A.

    2005-01-01

Full Text Available This review deals with the bibliography on Munchausen syndrome by proxy (MSbP). The name of this disorder was introduced by the English psychiatrist Roy Meadow, who pointed to diagnostic difficulties as well as to serious medical and legal connotations of MSbP. MSbP was classified in DSM-IV among criteria sets provided for further study as "factitious disorder by proxy", while in ICD-10, though not explicitly cited, MSbP might be classified as "factitious disorders" F68.1. MSbP is a special form of abuse where the perpetrator induces somatic or mental symptoms of illness in the victim under his/her care and then persistently presents the victim for medical examinations and care. The victim is usually a preschool child and the perpetrator is the child's mother. The motivation for such pathological behavior of the perpetrator is considered to be an unconscious need to assume the sick role by proxy, while external incentives such as economic gain are absent. Conceptualization of MSbP development is still in the domain of psychodynamic speculation, its course is chronic, and the prognosis is poor considering the lack of a consistent, efficient and specific treatment. The authors also present the case report of a thirty-three-year-old mother who had been abusing her nine-year-old son both emotionally and physically over several years, forcing him to report, together with her, to the police and to medical and educational institutions that he had been the victim of rape, poisoning and beating by various individuals, especially teaching and medical staff. The mother manifested psychosis and her child presented with impaired cognitive development, emotional problems and conduct disorder.

  13. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data result in the requirement of high scalability. Our current priority is to move towards a distributed and properly load balanced set of services based on containers. The aim of this project is to implement the generic caching mechanism applicable to our services and chosen architecture. The project will at first require research about the different aspects of distributed caching (persistence, no gc-caching, cache consistency etc.) and the available technologies followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  14. Dynamic Video Streaming in Caching-enabled Wireless Mobile Networks

    OpenAIRE

    Liang, C.; Hu, S.

    2017-01-01

    Recent advances in software-defined mobile networks (SDMNs), in-network caching, and mobile edge computing (MEC) can have great effects on video services in next generation mobile networks. In this paper, we jointly consider SDMNs, in-network caching, and MEC to enhance the video service in next generation mobile networks. With the objective of maximizing the mean measurement of video quality, an optimization problem is formulated. Due to the coupling of video data rate, computing resource, a...

  15. Probabilistic Caching Placement in the Presence of Multiple Eavesdroppers

    Directory of Open Access Journals (Sweden)

    Fang Shi

    2018-01-01

Full Text Available Wireless caching has attracted a lot of attention in recent years, since it can reduce the backhaul cost significantly and improve the user-perceived experience. The existing works on wireless caching and transmission mainly focus on communication scenarios without eavesdroppers. When eavesdroppers appear, it is of vital importance to investigate the physical-layer security of wireless caching aided networks. In this paper, a caching network is studied in the presence of multiple eavesdroppers, which can overhear the secure information transmission. We model the locations of eavesdroppers by a homogeneous Poisson Point Process (PPP), and the eavesdroppers jointly receive and decode contents through maximum ratio combining (MRC) reception, which yields the worst case of wiretap. Moreover, the main performance metric is measured by the average probability of successful transmission, which is the probability of finding and successfully transmitting all the requested files within a radius R. We study the system secure transmission performance by deriving a single integral result, which is significantly affected by the probability of caching each file. Therefore, we extend to build the optimization problem of the probability of caching each file, in order to optimize the system secure transmission performance. This optimization problem is nonconvex, so we use a genetic algorithm (GA) to solve it. Finally, simulation and numerical results are provided to validate the proposed studies.
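
    The optimization framing above — choose per-file caching probabilities under a capacity budget, tuned by a genetic algorithm — can be sketched with a toy objective. The surrogate fitness (probability a request is served locally), budget, and GA settings below are illustrative assumptions, not the paper's model.

      # Toy GA over per-file caching probabilities under a capacity budget.

      import random

      FILES, BUDGET, POP, GENS = 5, 2.0, 20, 200
      demand = [0.40, 0.25, 0.15, 0.12, 0.08]   # request popularity per file

      def fitness(p):
          # Surrogate for "probability of successful transmission":
          # chance that a request is served by a locally cached file.
          return sum(d * q for d, q in zip(demand, p))

      def clamp(p):
          # Keep each probability in [0, 1] and the total within the budget.
          p = [min(1.0, max(0.0, x)) for x in p]
          if sum(p) > BUDGET:
              scale = BUDGET / sum(p)
              p = [x * scale for x in p]
          return p

      def mutate(p, sigma=0.05):
          return clamp([x + random.gauss(0, sigma) for x in p])

      pop = [clamp([random.random() for _ in range(FILES)]) for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          elite = pop[:POP // 2]                      # keep the best half
          pop = elite + [mutate(random.choice(elite))
                         for _ in range(POP - len(elite))]

      print([round(x, 2) for x in max(pop, key=fitness)])

    With this degenerate linear objective the GA converges toward caching the most popular files outright; the paper's actual objective (the derived integral) makes the tradeoff nontrivial.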

  16. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from an SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  17. Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Feng Li

    2018-01-01

Full Text Available Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop scope of caching information table for caching replacement in cases when there is not enough cache resource available within their own space. During the course, after receiving the caching request, every caching node should determine the weight of the required contents and provide a response according to the availability of its own caching space. Furthermore, to increase the caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different caching allocation strategies are devised to be adopted to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.

  18. High Performance Analytics with the R3-Cache

    Science.gov (United States)

    Eavis, Todd; Sayeed, Ruhan

Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  19. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform [1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we will describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert the single-node service into a highly scalable clustered application.

  20. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa; Elsawy, Hesham; Sorour, Sameh; Al-Ghadhban, Samir; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2018-01-01

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates

  1. Architectural Development and Performance Analysis of a Primary Data Cache with Read Miss Address Prediction Capability

    National Research Council Canada - National Science Library

    Christensen, Kathryn

    1998-01-01

    .... The Predictive Read Cache (PRC) further improves the overall memory hierarchy performance by tracking the data read miss patterns of memory accesses, developing a prediction for the next access and prefetching the data into the faster cache memory...

  2. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    Science.gov (United States)

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research

  3. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.

  4. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software is presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.

  5. Analyzing data distribution on disk pools for dCache

    Energy Technology Data Exchange (ETDEWEB)

    Halstenberg, S; Jung, C; Ressmann, D [Forschungszentrum Karlsruhe, Steinbuch Centre for Computing, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2010-04-01

    Most Tier-1 centers of LHC Computing Grid are using dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools. Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources. For a dCache system, a heterogeneous set of disk pools complicates the process of weighting CPU and space costs for an efficient distribution of data. In order to evaluate the data distribution on the disk pools, the distribution is simulated in Java. The results are discussed and suggestions for improving the weight scheme are given.

  6. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...

  7. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk introducing complexity into the otherwise simple WCET analysis. In this work, we investigate three...

  8. Using caching and optimization techniques to improve performance of the Ensembl website

    Directory of Open Access Journals (Sweden)

    Smith James A

    2010-05-01

Full Text Available Background: The Ensembl web site has provided access to genomic information for almost 10 years. During this time the amount of data available through Ensembl has grown dramatically. At the same time, the World Wide Web itself has become a dramatically more important component of the scientific workflow and the way that scientists share and access data and scientific information. Since 2000, the Ensembl web interface has had three major updates and numerous smaller updates. These have largely been in response to expanding data types and valuable representations of existing data types. In 2007 it was realised that a radical new approach would be required in order to serve the project's future requirements, and development therefore focused on identifying suitable web technologies for implementation in the 2008 site redesign. Results: By comparing the Ensembl website to well-known "Web 2.0" sites, we were able to identify two main areas in which cutting-edge technologies could be advantageously deployed: server efficiency and interface latency. We then evaluated the performance of the existing site using browser-based tools and Apache benchmarking, and selected appropriate technologies to overcome any issues found. Solutions included optimization of the Apache web server, introduction of caching technologies and widespread implementation of AJAX code. These improvements were successfully deployed on the Ensembl website in late 2008 and early 2009. Conclusions: Web 2.0 technologies provide a flexible and efficient way to access the terabytes of data now available from Ensembl, enhancing the user experience through improved website responsiveness and a rich, interactive interface.

  9. Professional JavaScript for Web Developers

    CERN Document Server

    Zakas, Nicholas C

    2011-01-01

A significant update to a bestselling JavaScript book As the key scripting language for the web, JavaScript is supported by every modern web browser and allows developers to create client-side scripts that take advantage of features such as animating the canvas tag and enabling client-side storage and application caches. After an in-depth introduction to the JavaScript language, this updated edition of a bestseller progresses to break down how JavaScript is applied for web development using the latest web development technologies. Veteran author and JavaScript guru Nicholas Zakas shows how JavaScript

  10. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
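
    The phenomenon this record analyzes can be reproduced with a small simulation: for a direct-mapped cache, count how many distinct cache sets a constant-stride sweep touches. Power-of-two strides collapse onto very few sets and thrash. The cache geometry below is an assumption for the illustration, not the article's formula.

      # Distinct cache sets touched by a constant-stride access pattern.

      NUM_SETS, LINE_BYTES, ELEM_BYTES = 256, 64, 8   # assumed geometry

      def sets_touched(stride_elems, accesses=4096):
          lines = ((i * stride_elems * ELEM_BYTES) // LINE_BYTES
                   for i in range(accesses))
          return len({line % NUM_SETS for line in lines})

      for stride in (1, 7, 256, 2048):   # 2048 doubles = 16 KiB, a power of two
          print(f"stride {stride:4d}: {sets_touched(stride):3d}/{NUM_SETS} sets used")

    Unit and odd strides spread accesses over all 256 sets, while the 16 KiB power-of-two stride maps every access to a single set — the sharply reduced cache efficiency described above.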

  11. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic...

  12. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  13. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  14. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches.

  15. A distributed storage system with dCache

    Science.gov (United States)

    Behrmann, G.; Fuhrmann, P.; Grønager, M.; Kleist, J.

    2008-07-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  16. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    In a real-time system, the use of a scratchpad memory can mitigate the difficulties related to analyzing data caches, whose behavior is inherently hard to predict. We propose to use a scratchpad memory for stack allocated data. While statically allocating stack frames for individual functions...

  17. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview over our analysis of the stream ciphers that were selected for phase 3...

  18. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and provide fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of hard disks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  19. A distributed storage system with dCache

    International Nuclear Information System (INIS)

    Behrmann, G; Groenager, M; Fuhrmann, P; Kleist, J

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth

  20. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and thus the deciding factor for overall throughput. Second, data access may fallback to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.

  1. The Last Millennium Reanalysis: Improvements to proxies and proxy modeling

    Science.gov (United States)

    Tardif, R.; Hakim, G. J.; Emile-Geay, J.; Noone, D.; Anderson, D. M.

    2017-12-01

    The Last Millennium Reanalysis (LMR) employs a paleoclimate data assimilation (PDA) approach to produce climate field reconstructions (CFRs). Here, we focus on two key factors in PDA generated CFRs: the set of assimilated proxy records and forward models (FMs) used to estimate proxies from climate model output. In the initial configuration of the LMR [Hakim et al., 2016], the proxy dataset of [PAGES2k Consortium, 2013] was used, along with univariate linear FMs calibrated against annually-averaged 20th century temperature datasets. In an updated configuration, proxy records from the recent dataset [PAGES2k Consortium, 2017] are used, while a hierarchy of statistical FMs are tested: (1) univariate calibrated on annual temperature as in the initial configuration, (2) univariate against temperature as in (1) but calibration performed using expert-derived seasonality for individual proxy records, (3) as in (2) but expert proxy seasonality replaced by seasonal averaging determined objectively as part of the calibration process, (4) linear objective seasonal FMs as in (3) but objectively selecting relationships calibrated either on temperature or precipitation, and (5) bivariate linear models calibrated on temperature and precipitation with objectively-derived seasonality. (4) and (5) specifically aim at better representing the physical drivers of tree ring width proxies. Reconstructions generated using the CCSM4 Last Millennium simulation as an uninformed prior are evaluated against various 20th century data products. Results show the benefits of using the new proxy collection, particularly on the detrended global mean temperature and spatial patterns. The positive impact of using proper seasonality and temperature/moisture sensitivities for tree ring width records is also notable. This updated configuration will be used for the first generation of LMR-generated CFRs to be publicly released. These also provide a benchmark for future efforts aimed at evaluating the
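
    The simplest member of the forward-model hierarchy listed above — a univariate linear FM mapping annually averaged temperature to a proxy value — amounts to an ordinary least-squares fit. The sketch below uses synthetic data; the LMR's actual calibration datasets, seasonal windows, and error models are not shown.

      # Least-squares fit of a univariate linear proxy forward model.

      import random

      def fit_linear_fm(temps, proxy_values):
          """Return (slope, intercept) of proxy ~ a * T + b via least squares."""
          n = len(temps)
          mt, mp = sum(temps) / n, sum(proxy_values) / n
          cov = sum((t - mt) * (p - mp) for t, p in zip(temps, proxy_values))
          var = sum((t - mt) ** 2 for t in temps)
          a = cov / var
          return a, mp - a * mt

      temps = [random.gauss(15, 1) for _ in range(50)]            # synthetic T
      proxy = [0.6 * t + 2 + random.gauss(0, 0.3) for t in temps]  # synthetic proxy
      print(fit_linear_fm(temps, proxy))   # recovers roughly (0.6, 2)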

  2. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.

  3. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

Full Text Available Caches play an important role in embedded systems to bridge the performance gap between fast processor and slow memory. And prefetching mechanisms are proposed to further improve the cache performance. While in real-time systems, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to its unpredictable behavior. Modern embedded processors often equip locking mechanism to improve timing predictability of the instruction cache. However, locking the whole cache may degrade the cache performance and increase the WCET of the real-time application. In this paper, we proposed an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed as BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.

  4. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems to bridge the performance gap between fast processor and slow memory. And prefetching mechanisms are proposed to further improve the cache performance. While in real-time systems, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to its unpredictable behavior. Modern embedded processors often equip locking mechanism to improve timing predictability of the instruction cache. However, locking the whole cache may degrade the cache performance and increase the WCET of the real-time application. In this paper, we proposed an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed as BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.
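
    The partial-locking decision described in these two records can be sketched as selecting only the hottest lines (by estimated worst-case access frequency) to lock into a fraction of the cache, leaving the remaining capacity for normal, prefetch-assisted use. The greedy selection heuristic below is an assumption of the sketch, not the paper's algorithm.

      # Greedy selection of lines to lock into a fraction of the cache.

      def choose_locked_lines(line_freqs, cache_lines, lock_fraction):
          """line_freqs: {line_label: estimated worst-case access count}."""
          budget = int(cache_lines * lock_fraction)   # lines reserved for locking
          hottest = sorted(line_freqs, key=line_freqs.get, reverse=True)
          return set(hottest[:budget])

      freqs = {"loop_head": 90, "isr_entry": 75, "init_code": 10, "error_path": 5}
      print(choose_locked_lines(freqs, cache_lines=4, lock_fraction=0.5))
      # -> {'loop_head', 'isr_entry'}: locked; the other half stays dynamic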

  5. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t(exp th) instant, reference x sub t is hashed into a set of cache locations, the contents of which are then compared with x sub t. If at the t sup th instant x sub t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x sub t present for the (t+1) sup st instant. The problem of parallel simulation of a subtrace of N references directed to a C line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regradless of the set size C runs in time O(log N) using N processors on the exclusive read, exclusive write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference based line replacement policies are considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that on any trace of length N directed to a C line set runs in the O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.

  6. Multi-proxy studies in palaeolimnology

    OpenAIRE

    Birks, Hilary H.; Birks, Harry John Betteley

    2006-01-01

    Multi-proxy studies are becoming increasingly common in palaeolimnology. Eight basic requirements and challenges for a multi-proxy study are outlined in this essay – definition of research questions, leadership, site selection and coring, data storage, chronology, presentation of results, numerical tools, and data interpretation. The nature of proxy data is discussed in terms of physical proxies and biotic proxies. Loss-on-ignition changes and the use of transfer functions are reviewed as exa...

  7. Energy Efficient Caching in Backhaul-Aware Cellular Networks with Dynamic Content Popularity

    Directory of Open Access Journals (Sweden)

    Jiequ Ji

    2018-01-01

    Full Text Available Caching popular contents at base stations (BSs has been regarded as an effective approach to alleviate the backhaul load and to improve the quality of service. To meet the explosive data traffic demand and to save energy consumption, energy efficiency (EE has become an extremely important performance index for the 5th generation (5G cellular networks. In general, there are two ways for improving the EE for caching, that is, improving the cache-hit rate and optimizing the cache size. In this work, we investigate the energy efficient caching problem in backhaul-aware cellular networks jointly considering these two approaches. Note that most existing works are based on the assumption that the content catalog and popularity are static. However, in practice, content popularity is dynamic. To timely estimate the dynamic content popularity, we propose a method based on shot noise model (SNM. Then we propose a distributed caching policy to improve the cache-hit rate in such a dynamic environment. Furthermore, we analyze the tradeoff between energy efficiency and cache capacity for which an optimization is formulated. We prove its convexity and derive a closed-form optimal cache capacity for maximizing the EE. Simulation results validate the proposed scheme and show that EE can be improved with appropriate choice of cache capacity.

  8. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. Cache is responsible for a major part of energy consumption (approx. 50% of processors. This paper presents a high level implementation of a micropipelined asynchronous architecture of L1 cache. Due to the fact that each cache memory implementation is time consuming and error-prone process, a synthesizable and a configurable model proves out to be of immense help as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-Elements acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising of data and instruction caches has a wide array of configurable parameters. In addition to timing robustness our implementation has high average cache throughput and low latency. The implemented architecture comprises of two direct-mapped, write-through caches for data and instruction. The architecture is implemented in a Field Programmable Gate Array (FPGA chip using Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL along with advanced synthesis and place-and-route tools.

  9. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.

  10. Qualitative and Quantitative Sentiment Proxies

    DEFF Research Database (Denmark)

    Zhao, Zeyan; Ahmad, Khurshid

    2015-01-01

    Sentiment analysis is a content-analytic investigative framework for researchers, traders and the general public involved in financial markets. This analysis is based on carefully sourced and elaborately constructed proxies for market sentiment and has emerged as a basis for analysing movements...

  11. Study on data acquisition system based on reconfigurable cache technology

    Science.gov (United States)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems, which represents the waveform processing capability of the system in a unit time. The higher the waveform capture rate is, the larger the chance to capture elusive events is and the more reliable the test result is. First, this paper analyzes the impact of several factors on the waveform capture rate of the system, then the novel technology based on reconfigurable cache is further proposed to optimize system architecture, and the simulation results show that the signal-to-noise ratio of signal, capacity, and structure of cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated by the engineering practice, and the results show that the waveform capture rate of the system is improved substantially without significant increase of system's cost, and the technology proposed has a broad application prospect.

  12. I-Structure software cache for distributed applications

    Directory of Open Access Journals (Sweden)

    Alfredo Cristóbal Salas

    2004-01-01

    Full Text Available En este artículo, describimos el caché de software I-Structure para entornos de memoria distribuida (D-ISSC, lo cual toma ventaja de la localidad de los datos mientras mantiene la capacidad de tolerancia a la latencia de sistemas de memoria I-Structure. Las facilidades de programación de los programas MPI, le ocultan los problemas de sincronización al programador. Nuestra evaluación experimental usando un conjunto de pruebas de rendimiento indica que clusters de PC con I-Structure y su mecanismo de cache D-ISSC son más robustos. El sistema puede acelerar aplicaciones de comunicación intensiva regulares e irregulares.

  13. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of cache mechanism for complicated design optimization While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend in such progress indicates that more complicated applications become the next target of design optimization beyond growth of computational resources. As the progress in the past two decades had required response surface techniques, decomposition techniques, etc., any new framework must be introduced for the future of design optimization methods. This paper proposes a possibility of what we call cache mechanism for mediating the coming challenge and briefly demonstrates some promises in the idea of Voronoi diagram based cumulative approximation as an example of its implementation, development of strict robust design, extension of design optimization for product variety

  14. An Economic Model for Self-tuned Cloud Caching

    OpenAIRE

    Dash, Debabrata; Kantere, Verena; Ailamaki, Anastasia

    2009-01-01

    Cloud computing, the new trend for service infrastructures requires user multi-tenancy as well as minimal capital expenditure. In a cloud that services large amounts of data that are massively collected and queried, such as scientific data, users typically pay for query services. The cloud supports caching of data in order to provide quality query services. User payments cover query execution costs and maintenance of cloud infrastructure, and incur cloud profit. The challenge resides in provi...

  15. Big Data Caching for Networking: Moving from Cloud to Edge

    OpenAIRE

    Zeydan, Engin; Baştuğ, Ejder; Bennis, Mehdi; Kader, Manhal Abdel; Karatepe, Alper; Er, Ahmet Salih; Debbah, Mérouane

    2016-01-01

    In order to cope with the relentless data tsunami in $5G$ wireless networks, current approaches such as acquiring new spectrum, deploying more base stations (BSs) and increasing nodes in mobile packet core networks are becoming ineffective in terms of scalability, cost and flexibility. In this regard, context-aware $5$G networks with edge/cloud computing and exploitation of \\emph{big data} analytics can yield significant gains to mobile operators. In this article, proactive content caching in...

  16. Cache Performance Optimization for SoC Vedio Applications

    OpenAIRE

    Lei Li; Wei Zhang; HuiYao An; Xing Zhang; HuaiQi Zhu

    2014-01-01

    Chip Multiprocessors (CMPs) are adopted by industry to deal with the speed limit of the single-processor. But memory access has become the bottleneck of the performance, especially in multimedia applications. In this paper, a set of management policies is proposed to improve the cache performance for a SoC platform of video application. By analyzing the behavior of Vedio Engine, the memory-friendly writeback and efficient prefetch policies are adopted. The experiment platform is simulated by ...

  17. On the Performance of the Cache Coding Protocol

    Directory of Open Access Journals (Sweden)

    Behnaz Maboudi

    2018-03-01

    Full Text Available Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding in mesh networks, the cache coding protocol was proposed. This protocol only allows recoding at the relays when the relay has received enough coded packets to decode an entire generation of packets. At that point, the relay node recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making the relays accountable for changes on the data. This paper analyzes the delay performance of cache coding to understand the security-performance trade-off of this scheme. We introduce an analytical model for the case of two relays in an erasure channel relying on an absorbing Markov chain and an approximate model to estimate the performance in terms of the number of transmissions before successfully decoding at the receiver. We confirm our analysis using simulation results. We show that cache coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.

  18. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    Full Text Available With the advantage of the reusability property of the virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users’ virtualization environments being impacted by driver faults in virtual machine, Chariot examines the correctness of driver’s write operations by the method of combining a driver’s write operation capture and a driver’s private access control table. However, this method needs to keep the write permission of shadow page table as read-only, so as to capture isolated driver’s write operations through page faults, which adversely affect the performance of the driver. Based on delaying setting frequently used shadow pages’ write permissions to read-only, this paper proposes an algorithm using shadow page cache to improve the performance of isolated drivers and carefully study the relationship between the performance of drivers and the size of shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot’s reliability too much.

  19. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, throughput of User Equipment (UE is constrained by the bottleneck of the two-hop link, backhaul link (or the first hop link, and access link (the second hop link. To maximize the throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop. But it cannot guarantee a good balance of the throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS algorithm by exploiting relay cache for non-real-time data traffic. The evolved Node Basestation (eNB adaptively adjusts the number of Resource Blocks (RBs allocated to the backhaul link and direct links based on the cache information of relays. Each relay allocates RBs for relay UEs based on the size of the relay UE’s Transport Block. We also design a relay UE’s ACK feedback mechanism to update the data at relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of backhaul and access link.

  20. Using shadow page cache to improve isolated drivers performance.

    Science.gov (United States)

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of the virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments being impacted by driver faults in virtual machine, Chariot examines the correctness of driver's write operations by the method of combining a driver's write operation capture and a driver's private access control table. However, this method needs to keep the write permission of shadow page table as read-only, so as to capture isolated driver's write operations through page faults, which adversely affect the performance of the driver. Based on delaying setting frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm using shadow page cache to improve the performance of isolated drivers and carefully study the relationship between the performance of drivers and the size of shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  1. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have lead to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this presentation presents the results of testing of storage T2Cs, where the “thin” nature is expressed by the site having either no local data storage, or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the result of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.

  2. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server......-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web...... services.The resulting framework provides a novel foundation for developing maintainable and secure web applications....

  3. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    Science.gov (United States)

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  4. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms-Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2-using two programming languages-C and Java-show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.

  5. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna D.

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  6. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of cache system for Chip Multiprocessors (CMPs. It introduces built-in logic for verification of cache coherence in CMPs realizing directory based protocol. It is developed around the cellular automata (CA machine, invented by John von Neumann in the 1950s. A special class of CA referred to as single length cycle 2-attractor cellular automata (TACA has been planted to detect the inconsistencies in cache line states of processors’ private caches. The TACA module captures coherence status of the CMPs’ cache system and memorizes any inconsistent recording of the cache line states during the processors’ reference to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then to settle to an attractor state indicating quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs’ processor pool ensures a better efficiency, in determining the inconsistencies, by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of proposed coherence verification module is much lesser than that of the conventional verification units and is insignificant with respect to the cost involved in CMPs’ cache system.

  7. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses which leads to both performance degradation and wasted energy. In this paper, we firstly propose a behavior-aware cache hierarchy (BACH) which can optimally allocate the multi-level cache resources to many cores and highly improved the efficiency of cache hierarchy, resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that energy consumption of a three-level cache hierarchy can be saved from 5.29% up to 27.94% compared with other key approaches while the performance of the multi-core system even has a slight improvement counting in hardware overhead.

  8. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve the data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client's mobility make cache management a challenge. In this paper, we propose a utility based cache replacement policy, least utility value (LUV, to improve the data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC, is developed that employs LUV as replacement policy. In ZC one-hop neighbors of a client form a cooperation zone since the cost for communication with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of LUV based ZC caching strategy. The simulation results show that, LUV replacement policy substantially outperforms the LRU policy.

  9. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates traffic congestions at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that the Zipf caching almost optimal only in scenarios with large number of available channels and large cache sizes.

  10. Shareholder Activism through the Proxy Process

    NARCIS (Netherlands)

    Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper provides evidence on the corporate governance role of shareholderinitiated proxy proposals. Previous studies debate over whether activists use proxy proposals to discipline firms or to simply advance their self-serving agendas, and whether proxy proposals are effective at all in

  11. Shareholder Activism Through the Proxy Process

    NARCIS (Netherlands)

    Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper provides evidence on the corporate governance role of shareholder-initiated proxy proposals. Previous studies debate over whether activists use proxy proposals to discipline firms or to simply advance their self-serving agendas, and whether proxy proposals are effective at all in

  12. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays ( Aphelocoma californica ) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  13. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    Science.gov (United States)

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  14. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

    as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can......Today independent publishers are offering digital libraries with fulltext archives. In an attempt to provide a single user-interface to a large set of archives, the studied Article-Database-Service offers a consolidated interface to a geographically distributed set of archives. While this approach...

  15. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\\frac{N}{B}\\log_{M/B}\\frac{N}{B}+T/B)$ memory transfers, where N is the total number...... of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections....

  16. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the coating-point operations within each CG iteration is spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning

  17. Something different - caching applied to calculation of impedance matrix elements

    CSIR Research Space (South Africa)

    Lysko, AA

    2012-09-01

    Full Text Available of the multipliers, the approximating functions are used any required parameters, such as input impedance or gain pattern etc. The method is relatively straightforward but, especially for small to medium matrices, requires spending time on filling... of the computing the impedance matrix for the method of moments, or a similar method, such as boundary element method (BEM) [22], with the help of the flowchart shown in Figure 1. Input Parameters (a) Search the cached data for a match (b) A match found...

  18. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range c...

  19. Lack of caching of direct-seeded Douglas fir seeds by deer mice

    International Nuclear Information System (INIS)

    Sullivan, T.P.

    1978-01-01

    Seed caching by deer mice was investigated by radiotagging seeds in forest and clear-cut areas in coastal British Columbia. Deer mice tend to cache very few Douglas fir seeds in the fall when the seed is uniformly distributed and is at densities comparable with those used in direct-seeding programs. (author)

  20. Re-caching by Western scrub-jays (Aphelocoma californica cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching"-relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i, we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  1. Learning Automata Based Caching for Efficient Data Access in Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Zhenjie Ma

    2018-01-01

    Full Text Available Effective data access is one of the major challenges in Delay Tolerant Networks (DTNs that are characterized by intermittent network connectivity and unpredictable node mobility. Currently, different data caching schemes have been proposed to improve the performance of data access in DTNs. However, most existing data caching schemes perform poorly due to the lack of global network state information and the changing network topology in DTNs. In this paper, we propose a novel data caching scheme based on cooperative caching in DTNs, aiming at improving the successful rate of data access and reducing the data access delay. In the proposed scheme, learning automata are utilized to select a set of caching nodes as Caching Node Set (CNS in DTNs. Unlike the existing caching schemes failing to address the challenging characteristics of DTNs, our scheme is designed to automatically self-adjust to the changing network topology through the well-designed voting and updating processes. The proposed scheme improves the overall performance of data access in DTNs compared with the former caching schemes. The simulations verify the feasibility of our scheme and the improvements in performance.

  2. Cache aware mapping of streaming apllications on a multiprocessor system-on-chip

    NARCIS (Netherlands)

    Moonen, A.J.M.; Bekooij, M.J.G.; Berg, van den R.M.J.; Meerbergen, van J.; Sciuto, D.; Peng, Z.

    2008-01-01

    Efficient use of the memory hierarchy is critical for achieving high performance in a multiprocessor system- on-chip. An external memory that is shared between processors is a bottleneck in current and future systems. Cache misses and a large cache miss penalty contribute to a low processor

  3. Measuring SIP proxy server performance

    CERN Document Server

    Subramanian, Sureshkumar V

    2013-01-01

    Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Networks (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network.SIP Pr

  4. Adaptive Neuro-fuzzy Inference System as Cache Memory Replacement Policy

    Directory of Open Access Journals (Sweden)

    CHUNG, Y. M.

    2014-02-01

    Full Text Available To date, no cache memory replacement policy that can perform efficiently for all types of workloads is yet available. Replacement policies used in level 1 cache memory may not be suitable in level 2. In this study, we focused on developing an adaptive neuro-fuzzy inference system (ANFIS as a replacement policy for improving level 2 cache performance in terms of miss ratio. The recency and frequency of referenced blocks were used as input data for ANFIS to make decisions on replacement. MATLAB was employed as a training tool to obtain the trained ANFIS model. The trained ANFIS model was implemented on SimpleScalar. Simulations on SimpleScalar showed that the miss ratio improved by as high as 99.95419% and 99.95419% for instruction level 2 cache, and up to 98.04699% and 98.03467% for data level 2 cache compared with least recently used and least frequently used, respectively.

  5. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  6. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications in ITS (intelligent transport systems. Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication, and digital broadcasting. While a cellular phone network is relatively slow and a DSRC has a very small communication area, one-segment digital terrestrial broadcasting service was launched in Japan in 2006, high-performance digital broadcasting for mobile hosts has been available recently. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (an available area of location-dependent data and "mobility specification" (a schedule according to the direction in which a mobile host moves. We numerically evaluate the cache system on the model close to the traffic road environment, and implement the emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  7. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one main problem in this kind of approaches, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, cache and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage pattern, which one fits the user behavior at a given time. The accuracy of the pattern recognition model in distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained since the first deployment moments.

  8. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications in ITS (intelligent transport systems. Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication, and digital broadcasting. While a cellular phone network is relatively slow and a DSRC has a very small communication area, one-segment digital terrestrial broadcasting service was launched in Japan in 2006, high-performance digital broadcasting for mobile hosts has been available recently. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (an available area of location-dependent data and “mobility specification” (a schedule according to the direction in which a mobile host moves. We numerically evaluate the cache system on the model close to the traffic road environment, and implement the emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  9. Cache-Oblivious Planar Orthogonal Range Searching and Counting

    DEFF Research Database (Denmark)

    Arge, Lars; Brodal, Gerth Stølting; Fagerberg, Rolf

    2005-01-01

    present the first cache-oblivious data structure for planar orthogonal range counting, and improve on previous results for cache-oblivious planar orthogonal range searching. Our range counting structure uses O(Nlog2 N) space and answers queries using O(logB N) memory transfers, where B is the block...... size of any memory level in a multilevel memory hierarchy. Using bit manipulation techniques, the space can be further reduced to O(N). The structure can also be modified to support more general semigroup range sum queries in O(logB N) memory transfers, using O(Nlog2 N) space for three-sided queries...... and O(Nlog22 N/log2log2 N) space for four-sided queries. Based on the O(Nlog N) space range counting structure, we develop a data structure that uses O(Nlog2 N) space and answers three-sided range queries in O(logB N+T/B) memory transfers, where T is the number of reported points. Based...

  10. Novel dynamic caching for hierarchically distributed video-on-demand systems

    Science.gov (United States)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be location near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that will always support all special playback functions for all available programs to all the contents with a latency of only 1 or 2 seconds. This mechanism uses Variable-sized-quantum-segment- caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time- slot-based multiple-stream video-on-demand server.

  11. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As the segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace the unused segments under the interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield higher hit ratio than previous work under various environmental parameters.

  12. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL, named CACH-FTL. It was designed based on the observation that most state-of­­-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize flash write operations placement, as large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR and wait for grouping more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases and has a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for table mapping storage on RAM.

  13. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing

    Science.gov (United States)

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-01-01

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313

  14. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    Science.gov (United States)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate, backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours, and service them to the edge at peak periods. To intelligently prefetch, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allow for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.

  15. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  17. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic requests, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data, so as to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult-to-retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content over a long period even while experiencing severe in-network node failures.
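
    As a rough illustration of such a value-based policy, the sketch below combines the four parameters named above into a single score and evicts the lowest-scoring item; the weights and the linear form are assumptions, not the paper's actual VoI function:

      # Hedged sketch of value-based eviction; the scoring function is invented.
      def value_of_information(age, popularity, interference_cost, active_time,
                               w=(1.0, 1.0, 1.0, 1.0)):
          # Fresher, more popular, and more expensive-to-refetch items score higher.
          w_age, w_pop, w_intf, w_act = w
          return (w_pop * popularity + w_intf * interference_cost
                  + w_act * active_time - w_age * age)

      def evict_one(cache):
          # cache: dict item_id -> dict of the four parameters above;
          # drop the least valuable item and report which one was chosen
          victim = min(cache, key=lambda k: value_of_information(**cache[k]))
          del cache[victim]
          return victim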

  19. Proposal and development of a reconfigurable associativity algorithm in cache memories.

    OpenAIRE

    Roberto Borges Kerr Junior

    2008-01-01

    The constant evolution of processors is steadily increasing the overhead of memory accesses. To avoid this problem, processor developers use several techniques, among them the use of cache memories in the memory hierarchy of computers. Cache memories, on the other hand, cannot fully meet these needs, so some technique that makes it possible to exploit the cache memory better would be of interest. To solve this problem, authors pro...

  20. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as the "Memory Wall" problem, stems from the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  1. dCache, towards Federated Identities & Anonymized Delegation

    Science.gov (United States)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd, etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect with regard to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a more fine-granular authorization mechanism that supports anonymized delegation.
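
    To illustrate the kind of fine-granular, anonymized delegation macaroons allow, here is a sketch using the pymacaroons package; the location, identifier, and caveat strings are invented for the example and do not reflect dCache's actual caveat vocabulary:

      from pymacaroons import Macaroon, Verifier

      key = "a-secret-known-only-to-the-storage-service"  # hypothetical root key

      # The service mints a macaroon and attenuates it before handing it out:
      m = Macaroon(location="dcache.example.org", identifier="delegation-42", key=key)
      m.add_first_party_caveat("activity = DOWNLOAD")   # restrict the operation
      m.add_first_party_caveat("path = /data/run2/")    # restrict the namespace
      token = m.serialize()                             # opaque token for the user

      # On each request, the service verifies the presented token:
      v = Verifier()
      v.satisfy_exact("activity = DOWNLOAD")
      v.satisfy_exact("path = /data/run2/")
      v.verify(Macaroon.deserialize(token), key)        # raises if a caveat fails

    Because the token itself encodes the restrictions, it can be passed to a portal or third party without revealing the delegator's full identity or privileges.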

  2. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  3. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  4. Cooperative Coding and Caching for Streaming Data in Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2010-01-01

    Full Text Available This paper studies distributed caching management for the current proliferation of streaming applications in multihop wireless networks. Many caching management schemes to date use a randomized network coding approach, which provides an elegant solution for ubiquitous data access in such systems. However, the encoding, essentially a combination operation, makes the coded data difficult to change. In particular, to accommodate new data, the system may have to first decode all the combined data segments, remove some unimportant ones, and then re-encode the data segments again. This procedure is clearly expensive for continuously evolving data storage. As such, we introduce a novel Cooperative Coding and Caching (C3) scheme, which allows decoding-free data removal through a triangle-like codeword organization. Its decoding performance is very close to that of conventional network coding, with only a sublinear overhead. Our scheme offers a promising solution to caching management for streaming data.
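
    To make the "combination operation" above concrete: in conventional randomized network coding over GF(2), stored packets are random XOR mixtures of the original segments, so removing one segment requires decoding everything first. The following self-contained sketch (conventional coding only, not the C3 scheme) shows encode and Gauss-Jordan decode over GF(2):

      import numpy as np

      rng = np.random.default_rng(1)

      def rlnc_encode(segments, n_coded):
          # segments: (k, L) bit matrix; each coded packet is a random XOR mixture
          k = segments.shape[0]
          coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
          return coeffs, (coeffs @ segments) % 2

      def rlnc_decode(coeffs, coded):
          # Gauss-Jordan elimination over GF(2) on the augmented matrix
          # [coeffs | coded]; succeeds when the coefficient rows have rank k
          A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
          n, k = coeffs.shape
          row = 0
          for col in range(k):
              pivot = next((r for r in range(row, n) if A[r, col]), None)
              if pivot is None:
                  continue
              A[[row, pivot]] = A[[pivot, row]]
              for r in range(n):
                  if r != row and A[r, col]:
                      A[r] ^= A[row]        # XOR rows: GF(2) elimination
              row += 1
          return A[:k, k:]                  # original segments, if full rank

      segments = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)  # 4 segments
      coeffs, coded = rlnc_encode(segments, n_coded=6)             # 6 coded packets
      decoded = rlnc_decode(coeffs, coded)
      print((decoded == segments).all())    # True whenever the 6 rows have rank 4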

  5. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture of a metadata management system based on an intelligent cache, called Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for metadata sets available locally, in order to reduce the access time of remote queries. An application can find partial results in the local cache, and the remaining portion of the metadata can be fetched from remote locations. Using the existing metadata, it can not only enhance the fault tolerance and load balancing of the system effectively, but also improve the efficiency of access while ensuring the access quality.

  6. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and, to a lesser extent, to the Auger and IceCube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  7. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  9. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.
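
    The abstract leaves the application of Apriori unspecified; purely to illustrate the algorithm itself, the following minimal frequent-itemset miner could be run over, say, per-user cache access transactions (the function, data, and support threshold are invented):

      from itertools import combinations

      def apriori(transactions, min_support):
          # transactions: list of sets; returns frequent itemsets with counts
          items = {frozenset([i]) for t in transactions for i in t}
          freq = {}
          k_sets = {s for s in items
                    if sum(s <= t for t in transactions) >= min_support}
          while k_sets:
              freq.update({s: sum(s <= t for t in transactions) for s in k_sets})
              # join step, then prune candidates with any infrequent subset
              cands = {a | b for a in k_sets for b in k_sets
                       if len(a | b) == len(a) + 1}
              k_sets = {c for c in cands
                        if all(frozenset(sub) in freq
                               for sub in combinations(c, len(c) - 1))
                        and sum(c <= t for t in transactions) >= min_support}
          return freq

      accesses = [{"a.css", "b.js"}, {"a.css", "b.js", "img1"},
                  {"a.css"}, {"b.js", "img1"}]          # toy cache transactions
      print(apriori(accesses, min_support=2))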

  10. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    Science.gov (United States)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  11. 12 CFR 569.4 - Proxy soliciting material.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Proxy soliciting material. 569.4 Section 569.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY PROXIES § 569.4 Proxy soliciting material. No solicitation of a proxy shall be made by means of any statement, form of proxy...

  12. 12 CFR 569.2 - Form of proxies.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Form of proxies. 569.2 Section 569.2 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY PROXIES § 569.2 Form of proxies. Every form of proxy shall conform to the following requirements: (a) The proxy shall be revocable at will by...

  13. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered, in which the small-cell BS, whose transmission is interfered with by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting the cloud and the small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  14. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    OpenAIRE

    Wang, Shuo; Zhang, Xing; Zhang, Yan; Wang, Lin; Yang, Juwo; Wang, Wenbo

    2017-01-01

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures which bring network functions and contents to the network edge have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at th...

  15. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  16. Transparent Proxy for Secure E-Mail

    Science.gov (United States)

    Michalák, Juraj; Hudec, Ladislav

    2010-05-01

    The paper deals with the security of e-mail messages and e-mail server implementation by means of a transparent SMTP proxy. The security features include encryption and signing of transported messages. The goal is to design and implement a software proxy for secure e-mail including its monitoring, administration, encryption and signing keys administration. In particular, we focus on automatic public key on-the-fly encryption and signing of e-mail messages according to S/MIME standard by means of an embedded computer system whose function can be briefly described as a brouter with transparent SMTP proxy.
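
    The S/MIME processing is the hard part, but the transparent-proxy skeleton itself is small: a listener that splices each client connection to the upstream mail server, with the point marked where a real implementation would encrypt and sign the message. This is a generic sketch, not the authors' system; hosts and ports are placeholders:

      import socket
      import threading

      def relay(src, dst):
          # copy bytes one way until the peer closes the connection
          while (data := src.recv(4096)):
              dst.sendall(data)
          dst.close()

      def run_proxy(listen_port, upstream_host, upstream_port):
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("", listen_port))
          srv.listen()
          while True:
              client, _ = srv.accept()
              upstream = socket.create_connection((upstream_host, upstream_port))
              # a real secure-e-mail proxy would parse the SMTP dialogue here
              # and encrypt/sign the message body (S/MIME) before forwarding
              threading.Thread(target=relay, args=(client, upstream),
                               daemon=True).start()
              threading.Thread(target=relay, args=(upstream, client),
                               daemon=True).start()

      # Example with placeholder values: run_proxy(2525, "smtp.example.org", 25)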

  17. Supporting Safe Content-Inspection of Web Traffic

    National Research Council Canada - National Science Library

    Pal, Partha; Atighetchi, Michael

    2008-01-01

    ... for various reasons, including security. Software wrappers, firewalls, Web proxies, and a number of middleware constructs all depend on interception to achieve their respective security, fault tolerance, interoperability, or load balancing objectives...

  18. Computer Cache. The Web Has Tales to Tell: Traditional Literature Websites

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    2004-01-01

    "Traditional Literature" is defined by Carl M. Tomlinson and Carol Lynch-Brown in "Essentials of Children's Literature (Allyn and Bacon, 2001) as "the body of ancient stories and poems that grew out of the human quest to understand the natural and spiritual worlds and that was preserved through time by the oral tradition of storytelling before…

  19. Robotic Vehicle Proxy Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies proposes the development of a digital simulation that can replace robotic vehicles in field studies. This proxy simulation will model the...

  20. gLExec and MyProxy integration in the ATLAS/OSG PanDA workload management system

    International Nuclear Information System (INIS)

    Caballero, J; Hover, J; Maeno, T; Potekhin, M; Wenaus, T; Zhao, X; Litmaath, M; Nilsson, P

    2010-01-01

    Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production and Distributed Analysis), an ATLAS and OSG workload management system, follows this design. However, in the simplest (and most efficient) pilot submission approach of identical pilots carrying the same identifying grid proxy, end-user accounting by the site can only be done with application-level information (PanDA maintains its own end-user accounting), and end-user jobs run with the identity and privileges of the proxy carried by the pilots, which may be seen as a security risk. To address these issues, we have enabled PanDA to use gLExec, a tool provided by EGEE which runs payload jobs under an end-user's identity. End-user proxies are pre-staged in a credential caching service, MyProxy, and the information needed by the pilots to access them is stored in the PanDA DB. gLExec then extracts from the user's proxy the proper identity under which to run. We describe the deployment, installation, and configuration of gLExec, and how PanDA components have been augmented to use it. We describe how difficulties were overcome, and how security risks have been mitigated. Results are presented from OSG and EGEE Grid environments performing ATLAS analysis using PanDA and gLExec.

  1. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows to freely select the degree at which data locality is provided. It may be used to work in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during run 2. The system has proven to be easily usable, while providing substantial performance improvements. Since confirming the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.

  2. Inferring climate variability from skewed proxy records

    Science.gov (United States)

    Emile-Geay, J.; Tingley, M.

    2013-12-01

    Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and
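
    Approach (ii), applying a power transform before standard methods, is easy to demonstrate; the sketch below generates a hypothetical skewed proxy series and applies SciPy's Box-Cox transform (the lognormal toy data is an assumption, not the Laguna Pallcacocha record):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      # hypothetical heavy-tailed "runoff proxy": lognormal values
      proxy = rng.lognormal(mean=0.0, sigma=1.0, size=500)

      transformed, lam = stats.boxcox(proxy)   # power transform toward normality
      print(f"skewness before: {stats.skew(proxy):.2f}, "
            f"after Box-Cox (lambda={lam:.2f}): {stats.skew(transformed):.2f}")

    Classical procedures that assume normal errors can then be applied to the transformed series, which is the essence of the paper's second approach.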

  3. WEB-SERVICE. RESTFUL ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    M. Melnichuk

    2018-04-01

    Full Text Available Network technology for interaction between two applications via the HTTP protocol was considered in this article. When a client works with a REST API, it works with "resources", whereas in SOAP work is performed through operations. To build REST web services, certain principles must be followed: explicit use of HTTP methods, access to resources by URI, statelessness, HATEOAS, caching, and transfer of objects in a JSON or XML representation. Sometimes, however, some principles are ignored to achieve higher speed and to reduce development time. The pros and cons of using JSON and XML representations were considered: using the JSON format reduces the amount of data transferred, while using XML increases the readability of the data. Two main ways of transferring files in REST web services were also considered: converting the file to Base64 and transferring it as an object field, or transferring the file using ordinary HTTP multipart. The standard Base64 approach gives higher speed for multiple files in a single request, because only one HTTP connection is created, but these files are stored in RAM during request processing, which increases the chance of the application crashing. In conclusion, the advantages of web services and their wide use in other architectural approaches were considered, which increases the popularity of web services.
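
    The two file-transfer options discussed above can be contrasted in a few lines; the endpoint URL and file name below are hypothetical, and the example assumes the third-party requests library:

      import base64
      import requests  # third-party HTTP client, assumed installed

      URL = "https://api.example.com/files"   # hypothetical endpoint

      # Option 1: embed the file in the JSON body as a Base64-encoded field
      with open("report.pdf", "rb") as f:
          payload = {"name": "report.pdf",
                     "content": base64.b64encode(f.read()).decode("ascii")}
      requests.post(URL, json=payload)

      # Option 2: ordinary HTTP multipart upload, streamed from disk
      with open("report.pdf", "rb") as f:
          requests.post(URL, files={"file": f})

    Option 1 batches easily into one request; option 2 avoids holding the whole encoded file in memory, matching the trade-off described in the abstract.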

  4. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    Directory of Open Access Journals (Sweden)

    Zhaohui Luo

    2017-05-01

    Full Text Available Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm to provide caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications in backhaul-capacity-limited mobile networks. Most existing studies focus on cache allocation, mechanism design and coding design for caching. However, a grid power supply offering fixed, uninterrupted power to a MEC server (MECS) is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the MECS energy consumption problem in cellular networks. Given the average download latency constraints, we take the MECS's energy consumption, backhaul capacities and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine the near-optimal caching placement to obtain better performance in terms of energy efficiency gains compared with conventional caching placement strategies. In particular, it is shown that the proposed scheme can significantly reduce the joint cost when backhaul capacity is low.

  5. Cache Aided Decode-and-Forward Relaying Networks: From the Spatial View

    Directory of Open Access Journals (Sweden)

    Junjuan Xia

    2018-01-01

    Full Text Available We investigate the cache technique from the spatial view and study its impact on relaying networks. In particular, we consider a dual-hop relaying network, where decode-and-forward (DF) relays can assist the data transmission from the source to the destination. In addition to the traditional dual-hop relaying, we also consider the cache from the spatial view, where the source can prestore the data among the memories of the nodes around the destination. For the DF relaying networks without and with cache, we study the system performance by deriving analytical expressions for the outage probability and symbol error rate (SER). We also derive the asymptotic outage probability and SER in the high transmit power regime, from which we find that the system diversity order can be rapidly increased by using cache and the system performance can be significantly improved. Simulation and numerical results are presented to verify the proposed analysis and show that system power resources can be saved efficiently by using the cache technique.

  6. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory (SPM) allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is implemented, which avoids a time-consuming linearization process, to select the most profitable data pages. The Virtual Memory System (VMS) is adopted to remap those data pages that would cause severe cache conflicts within a time slot to the SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes as well as time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. And compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.

  7. 12 CFR 569.3 - Holders of proxies.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Holders of proxies. 569.3 Section 569.3 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY PROXIES § 569.3 Holders of proxies. No proxy of a mutual savings association with a term greater than eleven months or solicited at...

  8. Water Isotope Proxy-Proxy and Proxy-Model Convergence for Late Pleistocene East Asian Monsoon Rainfall Reconstructions

    Science.gov (United States)

    Clemens, S. C.; Holbourn, A.; Kubota, Y.; Lee, K. E.; Liu, Z.; Chen, G.

    2017-12-01

    Confidence in reconstruction of East Asian paleomonsoon rainfall using precipitation isotope proxies is a matter of considerable debate, largely due to the lack of correlation between precipitation amount and isotopic composition in the present climate. We present four new, very highly resolved records spanning the past 300,000 years (~200 year sample spacing) from IODP Site U1429 in the East China Sea. We demonstrate that all the orbital- and millennial-scale variance in the onshore Yangtze River Valley speleothem δ18O record [1] is also embedded in the offshore Site U1429 seawater δ18O record (derived from the planktonic foraminifer Globigerinoides ruber and sea surface temperature reconstructions). Signal replication in these two independent terrestrial and marine archives, both controlled by the same monsoon system, uniquely identifies δ18O of precipitation as the primary driver of the precession-band variance in both records. This proxy-proxy convergence also eliminates a wide array of other drivers that have been called upon as potential contaminants to the precipitation δ18O signal recorded by these proxies. We compare East Asian precipitation isotope proxy records to precipitation amount from a CCSM3 transient climate model simulation of the past 300,000 years using realistic insolation, ice volume, greenhouse gasses, and sea level boundary conditions. This model-proxy comparison suggests that both Yangtze River Valley precipitation isotope proxies (seawater and speleothem δ18O) track changes in summer-monsoon rainfall amount at orbital time scales, as do precipitation isotope records from the Pearl River Valley [2] (leaf wax δ2H) and Borneo [3] (speleothem δ18O). Notably, these proxy records all have significantly different spectral structure indicating strongly regional rainfall patterns that are also consistent with model results. Transient, isotope-enabled model simulations will be necessary to more thoroughly evaluate these promising results, and to

  9. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first level cache.
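
    A toy software model of the evict-on-write idea (the patent describes hardware; the class and method names here are invented) makes the mechanism explicit:

      class VersionedL2:
          # toy stand-in for the second-level cache: keeps speculative versions
          def __init__(self):
              self.versions = {}  # addr -> {version: value}

          def write(self, addr, value, version):
              self.versions.setdefault(addr, {})[version] = value

          def read(self, addr, version=0):
              return self.versions.get(addr, {}).get(version)

      class EvictOnWriteL1:
          def __init__(self, l2):
              self.lines = {}  # addr -> value; no speculative versions live here
              self.l2 = l2

          def read(self, addr):
              if addr not in self.lines:            # L1 miss: refill from L2
                  self.lines[addr] = self.l2.read(addr)
              return self.lines[addr]

          def speculative_write(self, addr, value, version):
              self.l2.write(addr, value, version)   # write through to L2
              self.lines.pop(addr, None)            # evict on write: drop L1 line

    Dropping the L1 line forces every later read of that address down to the L2, which is the only level that knows about the per-thread versions.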

  10. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array..., and range queries in worst case O(log_B n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n + O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory.
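
    The van Emde Boas layout mentioned above can be generated with a short recursion. The sketch below uses one common splitting convention (top subtree of height floor(h/2)), which is not necessarily the exact variant used in the paper:

      def veb_order(height, root=1):
          # Array order for a complete binary tree of the given height under a
          # van Emde Boas style recursive layout. Nodes carry BFS indices
          # (root = 1, children of node i are 2i and 2i + 1).
          if height == 1:
              return [root]
          top_h = height // 2            # split point: one common convention
          bot_h = height - top_h
          order = veb_order(top_h, root) # lay out the top subtree first
          # the bottom subtrees hang off the children of the top tree's leaves
          first_leaf = root * (1 << (top_h - 1))
          for leaf in range(first_leaf, first_leaf + (1 << (top_h - 1))):
              order += veb_order(bot_h, 2 * leaf)
              order += veb_order(bot_h, 2 * leaf + 1)
          return order

      print(veb_order(4))
      # [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]

    Storing the nodes in this order keeps every subtree of roughly sqrt(n) nodes contiguous, which is what makes root-to-leaf traversals cache-oblivious.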

  11. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  12. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Full Text Available Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with 1. the number of cache sites containing prey rewards and 2. the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities that were hidden from view in 2, 3 and 4 cache sites after intervals of 1, 10 and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items for retention intervals of up to one minute without training.

  13. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

    Full Text Available In this study, we developed hybrid control algorithms for smart base stations (SBSs) along with devised communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload computation from mobile user equipment and to cache data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy the different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  14. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  15. Fighting terrorism in Africa by proxy

    DEFF Research Database (Denmark)

    Olsen, Gorm Rye

    2014-01-01

    The French intervention in Mali in early 2013 emphasizes that the decision-makers in Paris, Brussels, and Washington considered the establishment of the radical Islamist regime in Northern Mali a threat to their security interests. The widespread instability including the rise of radical Islamist...... groups in Somalia was perceived as a threat to western interests. It is the core argument of the paper if western powers decide to provide security in Africa, they will be inclined to use proxy instead of deploying own troops. Security provision by proxy in African means that African troops are doing...

  16. Webvise: Browser and Proxy support for open hypermedia structuring mechanisms on the WWW

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennard; Ørbæk, Peter

    1999-01-01

    Web pages. Support for providing links to/from parts of non-HTML data, such as sound and movie, will be possible via interfaces to plug-ins and Java-based media players. The hypermedia structures are stored in a hypermedia database, developed from the Devise Hypermedia framework, and the service... be manipulated and used via special Java applets, and a pure proxy server solution is provided for users who only need to browse the structures. A user can create and use the external structures as ‘transparency' layers on top of arbitrary Web pages, and the user can switch between viewing pages with one or more

  17. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...
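
    Generically, such a decision cache can be approximated by a TTL-bound map from request attributes to decisions; the sketch below is an illustration only (keys, TTL, and API are assumptions, not the paper's XACML obligation service):

      import time

      class DecisionCache:
          # Hypothetical TTL cache for authorisation decisions, keyed by the
          # (subject, resource, action) triple of an XACML-style request.
          def __init__(self, ttl_seconds=60):
              self.ttl = ttl_seconds
              self._store = {}

          def get(self, subject, resource, action):
              key = (subject, resource, action)
              hit = self._store.get(key)
              if hit and time.monotonic() - hit[1] < self.ttl:
                  return hit[0]             # cached Permit/Deny
              self._store.pop(key, None)    # expired or missing
              return None

          def put(self, subject, resource, action, decision):
              self._store[(subject, resource, action)] = (decision,
                                                          time.monotonic())

    A miss falls through to full policy evaluation, whose result is then stored; the TTL bounds how long a stale decision can survive a policy change.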

  18. Consistencia de ejecución: una propuesta no cache coherente

    OpenAIRE

    García, Rafael B.; Ardenghi, Jorge Raúl

    2005-01-01

    The presence of one or more levels of cache memory in modern processors, whose goal is to reduce the effective memory access time, becomes especially relevant in a DSM-type multiprocessor environment, given the much higher cost of memory references to remote modules. Clearly, the cache coherence protocol must respond to the memory consistency model adopted. The sequential model SC, generally accepted as the most natural one, together with a series of m...

  19. Implementació d'una Cache per a un processador MIPS d'una FPGA

    OpenAIRE

    Riera Villanueva, Marc

    2013-01-01

    First, the MIPS architecture, the memory hierarchy and the functioning of the cache will be explained briefly. Then, the design and implementation of a memory hierarchy for a MIPS processor implemented in VHDL on an FPGA will be explained.

  20. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this possibility, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also generated. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance was discovered to be worse than that of the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance is dependent on the server disk speed.

  1. Munchausen Syndrome by Proxy: A Family Affair.

    Science.gov (United States)

    Mehl, Albert L.; And Others

    1990-01-01

    The article reports on a case of Munchausen syndrome by proxy in which illicit insulin was chronically administered to a one-year-old child by her mother. Factitious illnesses continued despite psychiatric intervention. A retrospective review of medical records suggested 30 previous episodes of factitious illness within the family. (DB)

  2. 78 FR 70987 - Proxy Advisory Firm Roundtable

    Science.gov (United States)

    2013-11-27

    ... Firm Roundtable AGENCY: Securities and Exchange Commission. ACTION: Notice of roundtable discussion... advisory firms. The panel will be asked to discuss topics including the current state of proxy advisory firm use by investment advisers and institutional investors and potential changes that have been...

  3. Munchausen syndrome by proxy: a family anthology.

    Science.gov (United States)

    Pickford, E; Buchanan, N; McLaughlan, S

    1988-06-20

    While the Munchausen-by-proxy syndrome is well recognized, the story of one family has been related to describe some remarkable features. These include the psychopathology of the mother, the involvement of both children in the family, the great difficulty in obtaining proof of child abuse and, finally, the prosecution of the mother in the criminal court.

  4. Analisis Perancangan PC (Personal Computer Router Proxy Untuk Menggabungkan Tiga Jalur Koneksi Di Indospeed

    Directory of Open Access Journals (Sweden)

    Bangun Harizal

    2012-05-01

    Full Text Available Routers are very important for computer networks. A router can be bought as a commercial product, but it can also be designed using a personal computer. Mikrotik is one router manufacturer that provides products in the form of hardware and software. To design a router at lower cost, the Ubuntu operating system can be used; it is open source and provided free of charge by its maker. A router based on this OS can be built from the personal computers often used in homes and offices. Combining the performance of both types of routers can also compensate for the shortcomings of each. With a proxy on a local network, bandwidth use can be reduced: several websites are cached and stored on the proxy, so if the same website is accessed by another user, the router serves it from the proxy to the requesting computer.

  5. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.

  6. Exploitation of pocket gophers and their food caches by grizzly bears

    Science.gov (United States)

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perdieridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  7. Turbidity and Total Suspended Solids on the Lower Cache River Watershed, AR.

    Science.gov (United States)

    Rosado-Berrios, Carlos A; Bouldin, Jennifer L

    2016-06-01

    The Cache River Watershed (CRW) in Arkansas is part of one of the largest remaining bottomland hardwood forests in the US. Although wetlands are known to improve water quality, the Cache River is listed as impaired due to sedimentation and turbidity. This study measured turbidity and total suspended solids (TSS) in seven sites of the lower CRW; six sites were located on the Bayou DeView tributary of the Cache River. Turbidity and TSS levels ranged from 1.21 to 896 NTU, and 0.17 to 386.33 mg/L respectively and had an increasing trend over the 3-year study. However, a decreasing trend from upstream to downstream in the Bayou DeView tributary was noted. Sediment loading calculated from high precipitation events and mean TSS values indicate that contributions from the Cache River main channel was approximately 6.6 times greater than contributions from Bayou DeView. Land use surrounding this river channel affects water quality as wetlands provide a filter for sediments in the Bayou DeView channel.

  8. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  9. CACHE: an extended BASIC program which computes the performance of shell and tube heat exchangers

    International Nuclear Information System (INIS)

    Tallackson, J.R.

    1976-03-01

    An extended BASIC program, CACHE, has been written to calculate steady state heat exchange rates in the core auxiliary heat exchangers (CAHE) designed to remove afterheat from High-Temperature Gas-Cooled Reactors (HTGR). Computationally, these are unbaffled counterflow shell and tube heat exchangers. The computational method is straightforward. The exchanger is subdivided into a user-selected number of lengthwise segments; heat exchange in each segment is calculated in sequence and summed. The program takes the temperature dependencies of all thermal conductivities, viscosities and heat capacities into account, provided these are expressed algebraically. CACHE is easily adapted to compute steady state heat exchange rates in any unbaffled counterflow exchanger. As now used, CACHE calculates heat removal by liquid weight from high-temperature helium and helium mixed with nitrogen, oxygen and carbon monoxide. A second program, FULTN, is described. FULTN computes the geometrical parameters required as input to CACHE. As reported herein, FULTN computes the internal dimensions of the Fulton Station CAHE. The two programs are chained to operate as one. Complete user information is supplied. The basic equations, variable lists, annotated program lists, and sample outputs with explanatory notes are included.

  10. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and thereby a decrease in performance. To enhance performance, caching mechanisms are often adopted. However, the implementations of these

  11. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber; Alameer, Alaa; Chaaban, Anas; Sezgin, Aydin; Paulraj, Arogyaswami

    2017-01-01

    We study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file

  12. An ESL Approach for Energy Consumption Analysis of Cache Memories in SoC Platforms

    Directory of Open Access Journals (Sweden)

    Abel G. Silva-Filho

    2011-01-01

    Full Text Available The design of complex circuits such as SoCs presents two great challenges to designers. One is speeding up the modeling of system functionality, and the second is implementing the system in an architecture that meets performance and power consumption requirements. Thus, developing new high-level specification mechanisms that reduce the design effort through automatic architecture exploration is a necessity. This paper proposes an Electronic-System-Level (ESL) approach for system modeling and cache energy consumption analysis of SoCs called PCacheEnergyAnalyzer. It takes as input a high-level UML-2.0 profile model of the system and generates a simulation model of a multicore platform that can be analyzed for cache tuning. PCacheEnergyAnalyzer performs static/dynamic energy consumption analysis of caches on platforms that may have different processors. Architecture exploration is achieved by letting designers choose different processors for platform generation and different mechanisms for cache optimization. PCacheEnergyAnalyzer has been validated with several applications from the MiBench, MediaBench, and PowerStone benchmarks, and results show that it provides analysis with reduced simulation effort.
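
    A first-order version of the static/dynamic cache energy accounting such a flow performs can be sketched as follows; the per-access energies, leakage powers, and hit/miss counts are invented placeholders (of the kind a CACTI-style estimator might supply), not PCacheEnergyAnalyzer output:

    ```python
    # Hedged sketch: first-order cache energy accounting per configuration.
    # All numbers are illustrative assumptions.

    def cache_energy_nj(hits, misses, e_hit_nj, e_miss_nj, p_leak_mw, runtime_ms):
        dynamic = hits * e_hit_nj + misses * e_miss_nj   # [nJ]
        static = p_leak_mw * runtime_ms * 1e3            # mW * ms = uJ -> nJ
        return dynamic + static

    configs = {
        "8KB_direct": dict(hits=9.2e5, misses=8.0e4, e_hit_nj=0.10,
                           e_miss_nj=1.50, p_leak_mw=1.2),
        "32KB_4way":  dict(hits=9.9e5, misses=1.0e4, e_hit_nj=0.18,
                           e_miss_nj=1.50, p_leak_mw=4.5),
    }
    for name, c in configs.items():
        total = cache_energy_nj(runtime_ms=50.0, **c)
        print(f"{name}: {total / 1e6:.3f} mJ")
    ```

    The larger cache trades higher leakage and per-hit energy for fewer misses; which configuration wins depends on the workload, which is exactly what cache-tuning exploration evaluates.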

  13. MonetDB/X100 - A DBMS in the CPU cache

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); N.J. Nes (Niels); S. Héman (Sándor)

    2005-01-01

    textabstractX100 is a new execution engine for the MonetDB system, that improves execution speed and overcomes its main memory limitation. It introduces the concept of in-cache vectorized processing that strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and

  14. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Full Text Available Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and the industrial practitioners. In fact, the very notion of introducing randomness and probabilities in time-critical systems has caused strenuous debates owing to the apparent clash that this idea has with the strictly deterministic view traditionally held for those systems. A paper recently appeared in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13.) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should be provided also. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances to the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, causing them to be - when used in conjunction with PTA - a serious competitor to conventional designs.

  15. Performance Evaluation of Moving Small-Cell Network with Proactive Cache

    Directory of Open Access Journals (Sweden)

    Young Min Kwon

    2016-01-01

    Full Text Available Due to rapid growth in mobile traffic, mobile network operators (MNOs) are considering the deployment of moving small-cells (mSCs). An mSC is a user-centric network which provides voice and data services during mobility. An mSC can receive and forward data traffic via wireless backhaul and sidehaul links. In addition, due to the predictive nature of users' demand, mSCs can proactively cache the predicted contents in off-peak-traffic periods. Due to these characteristics, MNOs consider mSCs a cost-efficient solution to not only enhance the system capacity but also provide guaranteed quality of service (QoS) requirements to moving user equipment (UE) in peak-traffic periods. In this paper, we conduct extensive system level simulations to analyze the performance of mSCs with varying cache size and content popularity and their effect on wireless backhaul load. The performance evaluation confirms that the QoS of moving small-cell UE (mSUE) notably improves by using mSCs together with proactive caching. We also show that the effective use of proactive cache significantly reduces the wireless backhaul load and increases the overall network capacity.
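
    The payoff of proactive caching depends strongly on how skewed content popularity is. A self-contained sketch, assuming Zipf-distributed popularity and invented catalogue and cache sizes (this is not the paper's system-level simulator):

    ```python
    # Hedged sketch: expected hit ratio when an mSC proactively caches the
    # C most popular items of an N-item catalogue under Zipf(s) popularity.

    def zipf_hit_ratio(n_items, cache_size, s):
        weights = [1.0 / rank ** s for rank in range(1, n_items + 1)]
        return sum(weights[:cache_size]) / sum(weights)

    # Caching 5% of the catalogue: the hit ratio rises sharply with skew s,
    # and every hit is traffic removed from the wireless backhaul.
    for s in (0.6, 0.8, 1.0, 1.2):
        print(f"s = {s}: hit ratio {zipf_hit_ratio(10_000, 500, s):.2%}")
    ```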

  16. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

    Full Text Available Embedded systems have stringent design constraints, which have necessitated much prior research focus on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system’s temperature not only affects the system’s reliability, but can also affect the performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing the temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing the execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.
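
    TaPT's output is a set of Pareto-optimal configurations over execution time, energy, and temperature. A generic dominance filter of the kind such a tuner needs is sketched below; the candidate configurations and their numbers are invented, and this is not the TaPT heuristic itself:

    ```python
    # Hedged sketch: keep the Pareto-optimal (time, energy, temperature)
    # configurations; lower is better in every dimension.

    def dominates(a, b):
        # a dominates b if it is no worse everywhere and better somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b

    def pareto_front(points):
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # label: (exec time [ms], energy [mJ], peak temperature [C]) -- invented
    candidates = {
        "1GHz_8KB":  (42.0, 31.0, 61.0),
        "2GHz_8KB":  (25.0, 38.0, 74.0),
        "1GHz_32KB": (36.0, 35.0, 63.0),
        "2GHz_32KB": (26.0, 45.0, 79.0),   # dominated by 2GHz_8KB
    }
    front = pareto_front(list(candidates.values()))
    print([name for name, p in candidates.items() if p in front])
    ```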

  18. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy...

  19. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    textabstractJackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  20. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  1. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
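
    For readers unfamiliar with the metric, a common formalization of the NDT in this line of work is sketched below; this is a hedged paraphrase of definitions used in the cache-aided network literature, and the paper's exact normalization may differ in detail:

    ```latex
    % T(SNR): time to deliver the worst-case request of N bits per user;
    % N / log(SNR): delivery time of the interference-free reference system.
    \Delta(\mu) \;=\; \lim_{\mathrm{SNR}\to\infty}\,\limsup_{N\to\infty}\;
      \frac{T(\mathrm{SNR})}{N/\log \mathrm{SNR}},
    \qquad \mu = \text{fractional cache size}
    ```

    Under this reading, $\Delta = 1$ corresponds to interference-free delivery, and larger values mean proportionally longer worst-case delivery time.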

  2. The Proxy Challenge: Why bespoke proxy indicators can help solve the anti-corruption measurement problem

    OpenAIRE

    Johnsøn, Jesper; Mason, Phil

    2013-01-01

    Practitioners working in anti-corruption face perennial challenges in measuring changes in corruption levels and evaluating whether anti-corruption efforts are successful. These two challenges are linked but not inseparable. To make progress on the latter front, that is, evaluating whether anti-corruption efforts are having an impact, the U4 Anti-Corruption Resource Centre and the UK Department for International Development are launching an exploration into the use of proxy indicators. Proxy ...

  3. Legal requirements governing proxy voting in Denmark

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2008-01-01

    The requirements in Danish company law concerning proxy voting in companies whose shares have been accepted for listing on a regulated market have been successively tightened in recent years, and corporate governance principles have also led to the introduction of several requirements concerning proxy holders. A thorough knowledge of these requirements is important not only for the listed companies but also for their advisers and investors in Denmark and abroad. This article considers these requirements as well as the additional requirements which will derive from Directive 2007/36 on the exercise of shareholders' rights in listed companies, which must be implemented by 3 August 2009. It is pointed out that companies may, with advantage, provide in their articles of association for both the existing and the forthcoming requirements at this early stage.

  4. Harvard Law School Proxy Access Roundtable

    OpenAIRE

    Bebchuk, Lucian Arye; Hirst, Scott

    2010-01-01

    This paper contains the proceedings of the Proxy Access Roundtable that was held by the Harvard Law School Program on Corporate Governance on October 7, 2009. The Roundtable brought together prominent participants in the debate - representing a range of perspectives and experiences - for a day of discussion on the subject. The day’s first two sessions focused on the question of whether the Securities and Exchange Commission should provide an access regime, or whether it should leave the adopt...

  5. Munchausen syndrome by proxy and child's rights

    International Nuclear Information System (INIS)

    Al-Haidar, Fatima A.

    2008-01-01

    Munchausen syndrome by proxy (MSBP) is an extreme form of child abuse in which perpetrators induce life-threatening conditions in their children. A case of MSBP is described in detail. Difficulties in diagnosis and management in this part of the world are presented. Until now, no national legal guidelines exist in the Kingdom of Saudi Arabia (KSA) with respect to child abuse in general and MSBP in particular. Urgent guidelines, policies, and a legal system are required in the KSA. (author)

  6. Forecasting Lightning Threat Using WRF Proxy Fields

    Science.gov (United States)

    McCaul, E. W., Jr.

    2010-01-01

    Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: graupel flux near -15 C (captures LTG time variability) and vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage and time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh and microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes and vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with the regression line passing through the origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density observations. 5. Blend proxies to achieve optimal performance. 6. Study CAPS 4-km ensembles to evaluate sensitivities.
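
    The calibration and blending steps lend themselves to a small sketch. The numpy fragment below fits a through-the-origin slope for each proxy, blends the two calibrated fields, and truncates low values; the synthetic arrays stand in for NALMA observations and the WRF graupel-flux and integrated-ice fields, and the blend weight is an assumption:

    ```python
    # Hedged sketch: calibrate two WRF lightning proxies against observed
    # peak flash-rate densities and blend them. Data are synthetic stand-ins.
    import numpy as np

    def fit_through_origin(proxy, observed):
        # least-squares slope with zero intercept: k = sum(x*y) / sum(x*x)
        return float(np.dot(proxy, observed) / np.dot(proxy, proxy))

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, size=500)                 # "observed" flash rates
    graupel_flux = obs / 3.0 + rng.normal(0, 0.3, 500)  # proxy 1 (synthetic)
    integrated_ice = obs / 7.0 + rng.normal(0, 0.5, 500)  # proxy 2 (synthetic)

    k1 = fit_through_origin(graupel_flux, obs)
    k2 = fit_through_origin(integrated_ice, obs)

    w = 0.5                                             # blend weight (tunable)
    threat = w * k1 * graupel_flux + (1 - w) * k2 * integrated_ice
    threat[threat < 0.1] = 0.0      # truncate low values so the threat's areal
                                    # coverage matches the observed flash extent
    ```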

  7. Unlocking data: federated identity with LSDMA and dCache

    Science.gov (United States)

    Millar, AP; Behrmann, G.; Bernardt, C.; Fuhrmann, P.; Hardt, M.; Hayrapetyan, A.; Litvintsev, D.; Mkrtchyan, T.; Rossi, A.; Schwank, K.

    2015-12-01

    X.509, the dominant identity system from grid computing, has proved unpopular for many user communities. More popular alternatives generally assume the user is interacting via their web browser. Such alternatives allow a user to authenticate with many services with the same credentials (user-name and password). They also allow users from different organisations to form collaborations quickly and simply. Scientists generally require that their custom analysis software has direct access to the data. Such direct access is not currently supported by alternatives to X.509, as they require the use of a web browser. Various approaches to solve this issue are being investigated as part of the Large Scale Data Management and Analysis (LSDMA) project, a German-funded national R&D project. These involve dynamic credential translation (creating an X.509 credential) to allow backwards compatibility, in addition to direct SAML- and OpenID Connect-based authentication. We present a summary of the current state of the art and the current status of the federated identity work funded by the LSDMA project, along with the future road map.

  8. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  9. Earth Science Mining Web Services

    Science.gov (United States)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  10. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.

  11. The role of proxy information in missing data analysis.

    Science.gov (United States)

    Huang, Rong; Liang, Yuanyuan; Carrière, K C

    2005-10-01

    This article investigates the role of proxy data in dealing with the common problem of missing data in clinical trials using repeated measures designs. In an effort to avoid the missing data situation, some proxy information can be gathered. The question is how to treat proxy information, that is, is it always better to utilize proxy information when there are missing data? A model for repeated measures data with missing values is considered and a strategy for utilizing proxy information is developed. Then, simulations are used to compare the power of a test using proxy to simply utilizing all available data. It is concluded that using proxy information can be a useful alternative when such information is available. The implications for various clinical designs are also considered and a data collection strategy for efficiently estimating parameters is suggested.

  12. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and a Wolf rendezvous site, 2.5 km and 0.5 km, respectively, from each deer's previous locations.

  13. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selection of materials to be preserved. It opens with a brief overview of reasons why general web archives are needed. Sections two and three present major, long-term web archive initiatives, discuss the purposes and possible values of web archives, and ask how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies (topic-centric, domain-centric and time-centric archiving strategies), and section five discusses how to combine these to provide a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding, but fragmented, landscape of digital repositories taking care of various parts

  14. The development of caching and object permanence in Western scrub-jays (Aphelocoma californica): which emerges first?

    Science.gov (United States)

    Salwiczek, Lucie H; Emery, Nathan J; Schlinger, Barney; Clayton, Nicola S

    2009-08-01

    Recent studies on the food-caching behavior of corvids have revealed complex physical and social skills, yet little is known about the ontogeny of food caching in relation to the development of cognitive capacities. Piagetian object permanence is the understanding that objects continue to exist even when they are no longer visible. Here, the authors focus on Piagetian Stages 3 and 4, because they are hallmarks in the cognitive development of both young children and animals. Our aim is to determine, in a food-caching corvid, the Western scrub-jay, (1) whether Piagetian Stage 4 competence and tentative caching (i.e., hiding an item invisibly and retrieving it without delay) emerge concomitantly or consecutively; (2) whether experiencing the reappearance of hidden objects enhances the timing of the appearance of object permanence; and (3) to discuss how the development of object permanence is related to behavioral development and sensorimotor intelligence. Our findings suggest that object permanence Stage 4 emerges before tentative caching, and independent of environmental influences, but that once the birds have developed simple object permanence, social learning might advance the interval after which tentative caching commences. Copyright 2009 APA, all rights reserved.

  15. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs only small performance and energy losses.
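
    The wear-leveling idea is easy to sketch. The following Python fragment rotates the logical-to-physical way mapping of one cache set at a fixed write period so that a write-hot block's updates spread across all ways; the parameters and the rotation policy are illustrative assumptions, not the EqualChance mechanism itself (in particular, the data migration a real remap requires is omitted):

    ```python
    # Hedged sketch: intra-set wear-leveling by periodic rotation of the
    # logical-to-physical way mapping. Parameters are invented.
    WAYS = 8
    REMAP_PERIOD = 1000          # writes between rotations

    class NVMSet:
        def __init__(self):
            self.offset = 0                      # current rotation offset
            self.writes_since_remap = 0
            self.wear = [0] * WAYS               # per-physical-way write counts

        def write(self, logical_way):
            physical = (logical_way + self.offset) % WAYS
            self.wear[physical] += 1
            self.writes_since_remap += 1
            if self.writes_since_remap >= REMAP_PERIOD:
                # Wear-level: shift every block's home (data copy omitted here).
                self.offset = (self.offset + 1) % WAYS
                self.writes_since_remap = 0

    s = NVMSet()
    for _ in range(80_000):
        s.write(3)               # pathological workload: one hot logical way
    print(s.wear)                # writes spread evenly instead of one way
                                 # absorbing all 80,000 of them
    ```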

  16. A New Caching Technique to Support Conjunctive Queries in P2P DHT

    Science.gov (United States)

    Kobatake, Koji; Tagashira, Shigeaki; Fujita, Satoshi

    P2P DHT (Peer-to-Peer Distributed Hash Table) is one of the typical techniques for realizing efficient management of shared resources distributed over a network, and keyword search over such networks, in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in P2P DHT. The basic idea of the proposed technique is to share global information on past trials by locally caching search results for conjunctive queries and registering that fact in the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is experimentally evaluated by simulation. The results indicate that by using the proposed method, the amount of returned data is reduced by 60% compared with conventional P2P DHT, which does not support conjunctive queries.
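
    The result-caching idea can be sketched directly. In the Python fragment below, a plain dict stands in for the DHT: a conjunctive query is answered from a cached intersection when a previous trial registered one, and otherwise computed from the keyword posting lists and registered for later peers; keys, keywords, and node names are invented:

    ```python
    # Hedged sketch: conjunctive-query result caching over a DHT.
    # The dict stands in for the DHT; real lookups would be remote.
    import hashlib

    dht = {
        "music": {"n1", "n2", "n3", "n7"},      # keyword -> posting list
        "jazz":  {"n2", "n3", "n9"},
        "live":  {"n3", "n7", "n9"},
    }

    def query_key(keywords):
        # order-independent key for the conjunctive query
        return "AND:" + hashlib.sha1("|".join(sorted(keywords)).encode()).hexdigest()

    def conjunctive_query(keywords):
        key = query_key(keywords)
        if key in dht:                           # a past trial registered this
            return dht[key]
        lists = [dht[k] for k in keywords]       # conventional path: fetch all
        result = set.intersection(*lists)
        dht[key] = result                        # cache + register for reuse
        return result

    print(conjunctive_query(["music", "jazz", "live"]))   # computes and caches
    print(conjunctive_query(["jazz", "live", "music"]))   # served from cache
    ```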

  17. Operational Use of OGC Web Services at the Met Office

    Science.gov (United States)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase the Met Office's future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS) to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: WMS requests are made for 256x256 tiles for fixed areas and zoom levels; a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for new tiles; Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; and the Invent Weather Map Viewer uses the Google Maps API to request tiles from Edge Servers. (We would expect to make use of the Web Map Tiling Service when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or
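
    The tile-cache tier described above can be sketched compactly. The following Python fragment is an illustrative LRU tile cache keyed by (layer, zoom, x, y); the WMS URL template, the layer name, and the tile_bbox helper are hypothetical stand-ins, not the Met Office's actual endpoints or code:

    ```python
    # Hedged sketch: on-demand tile cache in front of a WMS, LRU-evicted.
    from collections import OrderedDict

    WMS_TEMPLATE = ("https://example.org/wms?SERVICE=WMS&REQUEST=GetMap"
                    "&LAYERS={layer}&WIDTH=256&HEIGHT=256&BBOX={bbox}"
                    "&FORMAT=image/png")

    def tile_bbox(z, x, y):
        # hypothetical helper: tile indices -> a lon/lat bounding box string
        span = 360.0 / 2 ** z
        return (f"{-180 + x * span},{-90 + y * span},"
                f"{-180 + (x + 1) * span},{-90 + (y + 1) * span}")

    class TileCache:
        def __init__(self, capacity=10_000):
            self.capacity = capacity
            self.tiles = OrderedDict()           # (layer, z, x, y) -> PNG bytes

        def _fetch_from_wms(self, layer, z, x, y):
            url = WMS_TEMPLATE.format(layer=layer, bbox=tile_bbox(z, x, y))
            # return urllib.request.urlopen(url).read()  # real fetch
            return b"PNG..."                     # stubbed out for the sketch

        def get(self, layer, z, x, y):
            key = (layer, z, x, y)
            if key in self.tiles:
                self.tiles.move_to_end(key)      # refresh LRU position
                return self.tiles[key]
            tile = self._fetch_from_wms(layer, z, x, y)
            self.tiles[key] = tile
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)   # evict least-recently used
            return tile

    cache = TileCache()
    cache.get("surface_temperature", 6, 31, 21)  # miss: fetched from the WMS
    cache.get("surface_temperature", 6, 31, 21)  # hit: served from the cache
    ```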

  18. Programming Algorithms of load balancing with HA-Proxy in HTTP services

    Directory of Open Access Journals (Sweden)

    José Teodoro Mejía Viteri

    2018-02-01

    Full Text Available Access to public and private services through the web is gaining prominence daily, and such services must sometimes support volumes of requests that a single server cannot process, so there are solutions that use algorithms to distribute the request load of a web application across several servers. The objective of this work is to analyze load balancing scheduling algorithms through the HA-Proxy tool, and to deliver an instrument that identifies the load distribution algorithm and the technological infrastructure to be used, covering most of the implementation. The information used for this work is based on a bibliographic analysis, a field study, and the implementation of the different load balancing algorithms on servers, where the distribution and its performance are analyzed. Incorporating this technology into the management of web services improves availability and supports business continuity; the different request distribution algorithms that can be implemented in HA-Proxy give those responsible for information technology systems a view of their advantages and disadvantages.
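
    To make the scheduling algorithms concrete, here is a minimal Python sketch of two of the policies HA-Proxy offers, roundrobin and leastconn; the server names and the connection-count bookkeeping are invented for illustration:

    ```python
    # Hedged sketch: two load-balancing scheduling policies.
    import itertools

    SERVERS = ["web1", "web2", "web3"]
    active = {s: 0 for s in SERVERS}        # open connections per server

    rr = itertools.cycle(SERVERS)

    def pick_roundrobin():
        return next(rr)                     # even rotation, ignores load

    def pick_leastconn():
        return min(SERVERS, key=lambda s: active[s])   # favors the idlest server

    for _ in range(3):
        print("roundrobin ->", pick_roundrobin())
    for _ in range(5):
        s = pick_leastconn()
        active[s] += 1                      # pretend the connection stays open
    print("open connections:", active)
    ```

    Roundrobin is the cheapest choice when requests are short and uniform; leastconn tends to win when requests are long-lived and uneven, which is the trade-off such an analysis instrument helps quantify.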

  19. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    Science.gov (United States)

    1979-02-01

    classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which...additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water...project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  20. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Full Text Available Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  1. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  2. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.

  3. GENIUS/EnginFrame Grid Portal: VOMS Proxy creation, new features and enhancements

    International Nuclear Information System (INIS)

    Ardizzone, V.; Barbaram, R.; Falzone, A.; Emidio, G.; Neri, L.; Scardaci, D.; Venuti, N.

    2007-01-01

    Scientific domain knowledge and tools must be presented to (non-expert) users in terms of applications, without their needing to know the underlying details of the grid middleware. The GENIUS Grid portal, powered by EnginFrame, is an increasingly popular mechanism for creating customisable, Web-based interfaces to Grid services and resources. This work describes new GENIUS portal capabilities such as portal login, access control, display management, a new approach to building reusable portal components as plug-ins, and better performance as a consequence of improvements to the EnginFrame core functions. Finally, two new tools developed for VOMS Proxy creation and simple JDL composition are described. (Author)

  4. GENIUS/EnginFrame Grid Portal: VOMS Proxy creation, new features and enhancements

    Energy Technology Data Exchange (ETDEWEB)

    Ardizzone, V.; Barbaram, R.; Falzone, A.; Emidio, G.; Neri, L.; Scardaci, D.; Venuti, N.

    2007-07-01

    Scientific domain knowledge and tools must be presented to (non-expert) users in terms of applications, without their needing to know the underlying details of the grid middleware. The GENIUS Grid portal, powered by EnginFrame, is an increasingly popular mechanism for creating customisable, Web-based interfaces to Grid services and resources. This work describes new GENIUS portal capabilities such as portal login, access control, display management, a new approach to building reusable portal components as plug-ins, and better performance as a consequence of improvements to the EnginFrame core functions. Finally, two new tools developed for VOMS Proxy creation and simple JDL composition are described. (Author)

  5. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, the two being brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e., with no linear system to solve at each step). A data structure of type 'Structure of Arrays' is kept for the global data storage, providing flexibility and efficiency for the common operations on kinematic fields (displacement, velocity and acceleration). By contrast, in the particular case of elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time-consuming but localized in the program, a temporary data structure of type 'Array of Structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but with specific handling, in this work, of the neighboring data necessary for the efficient treatment of ALE fluxes for cells on group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache-memory access failures, weighing the gains obtained in the elementary operations against the potential overhead generated by the data-structure switch. Obtained results are very satisfactory, especially
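
    The SoA-to-AoS switch the thesis describes can be illustrated in a few lines. The numpy sketch below keeps global state as a Structure of Arrays and gathers one cell group at a time into a temporary Array of Structures for a hot elementary loop; field names, group sizes, and the stand-in operation are invented, and the real code would also gather the neighbor data needed for ALE fluxes:

    ```python
    # Hedged sketch: global SoA storage with a temporary AoS per cell group,
    # so the hot loop walks contiguous records. numpy stands in for C/Fortran.
    import numpy as np

    n_cells = 100_000
    # Global SoA storage: one array per field, good for field-wide updates.
    disp = np.random.rand(n_cells)
    vel = np.random.rand(n_cells)
    acc = np.random.rand(n_cells)

    record = np.dtype([("disp", "f8"), ("vel", "f8"), ("acc", "f8")])

    def process_group(cell_ids):
        # Gather the group into a temporary AoS: one record per cell,
        # fields adjacent in memory for the cache-hot elementary loop.
        tmp = np.empty(len(cell_ids), dtype=record)
        tmp["disp"], tmp["vel"], tmp["acc"] = (disp[cell_ids], vel[cell_ids],
                                               acc[cell_ids])
        tmp["acc"] *= 0.5                    # stand-in elementary operation
        acc[cell_ids] = tmp["acc"]           # scatter results back to the SoA

    # Cache-blocking style grouping of cells.
    for group in np.array_split(np.arange(n_cells), 100):
        process_group(group)
    ```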

  6. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  7. Validity of proxies and correction for proxy use when evaluating social determinants of health in stroke patients.

    Science.gov (United States)

    Skolarus, Lesli E; Sánchez, Brisa N; Morgenstern, Lewis B; Garcia, Nelda M; Smith, Melinda A; Brown, Devin L; Lisabeth, Lynda D

    2010-03-01

    The purpose of this study was to evaluate stroke patient-proxy agreement with respect to social determinants of health, including depression, optimism, and spirituality, and to explore approaches to minimize proxy-introduced bias. Stroke patient-proxy pairs from the Brain Attack Surveillance in Corpus Christi Project were interviewed (n=34). Evaluation of agreement between patient-proxy pairs included calculation of intraclass correlation coefficients, linear regression models (ProxyResponse = α0 + α1 × PatientResponse + δ, where α0 = 0 and α1 = 1 denote no bias), and kappa statistics. Bias introduced by proxies was quantified with simulation studies. In the simulated data, we applied 4 approaches to estimate regression coefficients of stroke outcome social determinants of health associations when only proxy data were available for some patients: (1) substituting proxy responses in place of patient responses; (2) including an indicator variable for proxy use; (3) using regression calibration with external validation; and (4) internal validation. Agreement was fair for depression (intraclass correlation coefficient, 0.41) and optimism (intraclass correlation coefficient, 0.48) and moderate for spirituality (kappa, 0.48 to 0.53). Responses of proxies were a biased measure of the patients' responses for depression, with α0 = 4.88 (CI, 2.24 to 7.52) and α1 = 0.39 (CI, 0.09 to 0.69), and for optimism, with α0 = 3.82 (CI, -1.04 to 8.69) and α1 = 0.81 (CI, 0.41 to 1.22). Regression calibration with internal validation was the most accurate method to correct for proxy-induced bias. Fair/moderate patient-proxy agreement was observed for social determinants of health. Stroke researchers who plan to study social determinants of health may consider performing validation studies so corrections for proxy use can be made.

  8. Optimizing TCP Performance over UMTS with Split TCP Proxy

    DEFF Research Database (Denmark)

    Hu, Liang; Dittmann, Lars

    2009-01-01

    To cope with the large bandwidth-delay product, we propose a novel concept of a split TCP proxy which is placed at the GGSN, between the UMTS network and the Internet. The split proxy divides the bandwidth-delay product into two parts, resulting in two TCP connections with smaller bandwidth-delay products which can be pipelined and thus operate at higher speeds. Simulation results show that the split TCP proxy can significantly improve the TCP performance in terms of RLC throughput under the high bit rate DCH channel scenario (e.g. 256 kbps). On the other hand, it brings only a small performance improvement under the low bit rate DCH scenario (e.g. 64 kbps). Besides, the split TCP proxy brings more performance gain for downloading large files than for downloading small ones. In the end, for the configuration of the split proxy, an aggressive initial TCP congestion window size (e.g. 10 MSS) at the proxy is particularly useful for radio links
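
    The split-TCP idea itself fits in a few lines. Below is a minimal, illustrative Python relay that terminates the client connection and opens a second connection toward the origin, so each leg runs its own congestion control over a smaller bandwidth-delay product; the addresses and ports are placeholders, and the aggressive initial congestion window discussed above is a TCP-stack setting not shown here:

    ```python
    # Hedged sketch: a split-TCP style relay. Each accepted client connection
    # is paired with a fresh connection to the origin, and bytes are piped
    # in both directions between the two TCP legs.
    import socket
    import threading

    def pipe(src, dst):
        while (data := src.recv(65536)):   # empty bytes -> peer closed
            dst.sendall(data)
        dst.close()

    def handle(client, origin_addr):
        origin = socket.create_connection(origin_addr)   # second TCP leg
        threading.Thread(target=pipe, args=(client, origin), daemon=True).start()
        pipe(origin, client)                             # relay responses back

    listener = socket.socket()
    listener.bind(("0.0.0.0", 8080))       # placeholder proxy endpoint
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn, ("origin.example", 80)),
                         daemon=True).start()
    ```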

  9. Estimation of sedimentary proxy records together with associated uncertainty

    OpenAIRE

    Goswami, B.; Heitzig, J.; Rehfeld, K.; Marwan, N.; Anoop, A.; Prasad, S.; Kurths, J.

    2014-01-01

    Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record intercomparisons. While there are various age-modeling approaches to improve the estimation of the age–depth relations of archives, relatively little focus has been placed on the propagation...

  10. Time-and-ID-Based Proxy Reencryption Scheme

    OpenAIRE

    Mtonga, Kambombo; Paul, Anand; Rho, Seungmin

    2014-01-01

    A time- and ID-based proxy reencryption scheme is proposed in this paper, in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled wi...

  11. Salmon: Robust Proxy Distribution for Censorship Circumvention

    Directory of Open Access Journals (Sweden)

    Douglas Frederick

    2016-10-01

    Full Text Available Many governments block their citizens’ access to much of the Internet. Simple workarounds are unreliable; censors quickly discover and patch them. Previously proposed robust approaches either have non-trivial obstacles to deployment, or rely on low-performance covert channels that cannot support typical Internet usage such as streaming video. We present Salmon, an incrementally deployable system designed to resist a censor with the resources of the “Great Firewall” of China. Salmon relies on a network of volunteers in uncensored countries to run proxy servers. Although any member of the public can become a user, Salmon protects the bulk of its servers from being discovered and blocked by the censor via an algorithm for quickly identifying malicious users. The algorithm entails identifying some users as especially trustworthy or suspicious, based on their actions. We impede Sybil attacks by requiring either an unobtrusive check of a social network account, or a referral from a trustworthy user.

  12. Development of six PROMIS pediatrics proxy-report item banks.

    Science.gov (United States)

    Irwin, Debra E; Gross, Heather E; Stucky, Brian D; Thissen, David; DeWitt, Esi Morgan; Lai, Jin Shei; Amtmann, Dagmar; Khastou, Leyla; Varni, James W; DeWalt, Darren A

    2012-02-22

    Pediatric self-report should be considered the standard for measuring patient reported outcomes (PRO) among children. However, circumstances exist when the child is too young, cognitively impaired, or too ill to complete a PRO instrument and a proxy-report is needed. This paper describes the development process including the proxy cognitive interviews and large-field-test survey methods and sample characteristics employed to produce item parameters for the Patient Reported Outcomes Measurement Information System (PROMIS) pediatric proxy-report item banks. The PROMIS pediatric self-report items were converted into proxy-report items before undergoing cognitive interviews. These items covered six domains (physical function, emotional distress, social peer relationships, fatigue, pain interference, and asthma impact). Caregivers (n = 25) of children between the ages of 5 and 17 years provided qualitative feedback on proxy-report items to assess any major issues with these items. From May 2008 to March 2009, the large-scale survey enrolled children ages 8-17 years to complete the self-report version and caregivers to complete the proxy-report version of the survey (n = 1548 dyads). Caregivers of children ages 5 to 7 years completed the proxy report survey (n = 432). In addition, caregivers completed other proxy instruments, PedsQL™ 4.0 Generic Core Scales Parent Proxy-Report version, PedsQL™ Asthma Module Parent Proxy-Report version, and KIDSCREEN Parent-Proxy-52. Item content was well understood by proxies and did not require item revisions but some proxies clearly noted that determining an answer on behalf of their child was difficult for some items. Dyads and caregivers of children ages 5-17 years old were enrolled in the large-scale testing. The majority were female (85%), married (70%), Caucasian (64%) and had at least a high school education (94%). Approximately 50% had children with a chronic health condition, primarily asthma, which was diagnosed or treated within 6

  13. Development of six PROMIS pediatrics proxy-report item banks

    Directory of Open Access Journals (Sweden)

    Irwin Debra E

    2012-02-01

    Full Text Available Abstract Background Pediatric self-report should be considered the standard for measuring patient reported outcomes (PRO) among children. However, circumstances exist when the child is too young, cognitively impaired, or too ill to complete a PRO instrument and a proxy-report is needed. This paper describes the development process including the proxy cognitive interviews and large-field-test survey methods and sample characteristics employed to produce item parameters for the Patient Reported Outcomes Measurement Information System (PROMIS) pediatric proxy-report item banks. Methods The PROMIS pediatric self-report items were converted into proxy-report items before undergoing cognitive interviews. These items covered six domains (physical function, emotional distress, social peer relationships, fatigue, pain interference, and asthma impact). Caregivers (n = 25) of children between the ages of 5 and 17 years provided qualitative feedback on proxy-report items to assess any major issues with these items. From May 2008 to March 2009, the large-scale survey enrolled children ages 8-17 years to complete the self-report version and caregivers to complete the proxy-report version of the survey (n = 1548 dyads). Caregivers of children ages 5 to 7 years completed the proxy report survey (n = 432). In addition, caregivers completed other proxy instruments, PedsQL™ 4.0 Generic Core Scales Parent Proxy-Report version, PedsQL™ Asthma Module Parent Proxy-Report version, and KIDSCREEN Parent-Proxy-52. Results Item content was well understood by proxies and did not require item revisions but some proxies clearly noted that determining an answer on behalf of their child was difficult for some items. Dyads and caregivers of children ages 5-17 years old were enrolled in the large-scale testing. The majority were female (85%), married (70%), Caucasian (64%) and had at least a high school education (94%). Approximately 50% had children with a chronic health condition, primarily

  14. Web 25

    DEFF Research Database (Denmark)

    the reader on an exciting time travel journey to learn more about the prehistory of the hyperlink, the birth of the Web, the spread of the early Web, and the Web’s introduction to the general public in mainstream media. Furthermore, case studies of blogs, literature, and traditional media going online

  15. AN EFFICIENT WEB PERSONALIZATION APPROACH TO DISCOVER USER INTERESTED DIRECTORIES

    Directory of Open Access Journals (Sweden)

    M. Robinson Joel

    2014-04-01

    Full Text Available Web Usage Mining is the application of data mining techniques to retrieve web usage from web proxy log files. Web Usage Mining consists of three major stages: preprocessing, clustering, and pattern analysis. This paper explains each of these stages in detail. In the proposed approach, web directories are discovered based on the users' interests. The web proxy log file undergoes a preprocessing phase to improve the quality of the data. A Fuzzy Clustering Algorithm is used to cluster users and sessions into disjoint clusters. In this paper, an effective approach is presented for Web personalization based on an Advanced Apriori algorithm, which is used to select the web directories users are interested in. The proposed method is compared with existing web personalization methods such as Objective Probabilistic Directory Miner (OPDM), Objective Community Directory Miner (OCDM), and Objective Clustering and Probabilistic Directory Miner (OCPDM). The results show that the proposed approach provides better results than the aforementioned existing approaches. Finally, an application is developed with the user-interested directories and web usage details.
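
    The preprocessing stage described above can be illustrated with a short sketch. The following Python fragment parses Common Log Format lines (the example line is fabricated) and splits each host's requests into sessions with an assumed 30-minute inactivity gap; it is a generic illustration, not the paper's implementation:

    ```python
    # Hedged sketch: proxy-log preprocessing, parsing plus sessionization.
    import re
    from datetime import datetime, timedelta

    LOG_RE = re.compile(r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "\S+ (?P<url>\S+)')
    SESSION_GAP = timedelta(minutes=30)      # common sessionization heuristic

    def parse(line):
        m = LOG_RE.match(line)
        ts = datetime.strptime(m["ts"].split()[0], "%d/%b/%Y:%H:%M:%S")
        return m["host"], ts, m["url"]

    def sessionize(records):
        sessions, last_seen = {}, {}
        for host, ts, url in sorted(records, key=lambda r: r[1]):
            if host not in last_seen or ts - last_seen[host] > SESSION_GAP:
                sessions.setdefault(host, []).append([])   # start a new session
            sessions[host][-1].append(url)
            last_seen[host] = ts
        return sessions

    line = '10.0.0.7 - - [12/Mar/2014:10:01:22 +0000] "GET /sports/news.html HTTP/1.1" 200 5120'
    print(sessionize([parse(line)]))
    ```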

  16. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions, and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each

  17. Enhancing the AliEn Web Service Authentication

    International Nuclear Information System (INIS)

    Zhu Jianlin; Zhou Daicui; Zhang Guoping; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Lorenzo, Patricia Mendez; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Datskova, Olga Vladimirovna; Banerjee, Subho Sankar

    2011-01-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard that enables interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running on the worker nodes automatically. A security model to protect these services is essential for the whole system. Current implementations of web servers, such as Apache, are not suitable to be used within the grid environment. Apache with mod_ssl and OpenSSL supports only X.509 certificates. But in the grid environment, the common credential is the proxy certificate, for the purpose of providing restricted proxies and delegation. An authentication framework was developed for the AliEn2 web services to add to the Apache web server the ability to accept X.509 certificates and proxy certificates from the client side. The authentication framework could also allow the generation of access control policies to limit access to the AliEn2 web services.

  18. Proxy-produced ethnographic work: what are the problems, issues, and dilemmas arising from proxy ethnography?

    DEFF Research Database (Denmark)

    Martinussen, Marie Louise; Højbjerg, Karin; Tamborg, Andreas Lindenskov

    2018-01-01

    This article addresses the implications of research-student cooperation in the production of empirical material. When the student replaces the experienced researcher and works under the researcher's supervision, we call such work proxy-produced ethnographic work. The specific relations and positions arising from such a setup between the teacher/researcher and the proxy ethnographer/student are found to have implications for the ethnographies produced. This article's main focus is to show how these relations and positions have not distorted the ethnographic work and the ethnographies but, rather, have oriented them in certain ways. It is shown how the participating researchers, both senior and junior, have distinctive, incorporated dispositions with which they pre-consciously participate in an implicit and subtle relation that can make it very easy to overlook distortions during

  19. UrbanWeb: a Platform for Mobile Context-aware Social Computing

    DEFF Research Database (Denmark)

    Hansen, Frank Allan; Grønbæk, Kaj

    2010-01-01

    UrbanWeb is a novel Web-based context-aware hypermedia platform. It provides essential mechanisms for mobile social computing applications: the framework implements context as an extension to Web 2.0 tagging and provides developers with an easy to use platform for mobile context-aware applications. Services can be statically or dynamically defined in the user’s context, data can be pre-cached for data intensive mobile applications, and shared state supports synchronization between running applications such as games. The paper discusses how UrbanWeb acquires cues about the user’s context from sensors in mobile phones, ranging from GPS data, to 2D barcodes, and manual entry of context information, as well as how to utilize this context in applications. The experiences show that the UrbanWeb platform efficiently supports a rich variety of urban computing applications in different

  20. A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems.

    Science.gov (United States)

    Pan, Shaoming; Chong, Yanwen; Zhang, Hang; Tan, Xicheng

    2017-01-01

A web geographical information system is a typical service-intensive application. Tile prefetching and cache replacement can improve cache hit ratios by proactively fetching tiles from storage and replacing the appropriate tiles from the high-speed cache buffer without waiting for a client's requests, which reduces disk latency and improves system access performance. Most popular prefetching strategies consider only the relative tile popularities to predict which tile should be prefetched, or consider only a single individual user's access behavior to determine which neighbor tiles need to be prefetched. Some studies show that comprehensively considering all users' access behaviors and all tiles' relationships in the prediction process can achieve more significant improvements. Thus, this work proposes a new global user-driven model for tile prefetching and cache replacement. First, based on all users' access behaviors, an expression method for tile correlation is designed and implemented. Then, a conditional prefetching probability can be computed based on the proposed correlation expression method. Thus, tiles to be prefetched can be found by computing and comparing the conditional prefetching probability over the set of uncached tiles and, similarly, replacement tiles can be found in the cache buffer according to multi-step prefetching. Finally, experiments are provided comparing the proposed model with other global user-driven models, other single user-driven models, and other client-side prefetching strategies. The results show that the proposed model achieves a prefetching hit rate approximately 10.6% to 110.5% higher than the compared methods.
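    The core of such a model can be illustrated with a small sketch. The following Python fragment is our own simplified illustration, not the paper's exact formulation: it estimates a conditional prefetching probability P(next | current) from all users' access sequences and ranks the uncached tiles by it.

```python
# Illustrative sketch: estimate a conditional prefetching probability
# P(next | current) from all users' tile access logs and pick the top-k
# uncached tiles to prefetch.
from collections import defaultdict

def build_correlations(sessions):
    """sessions: iterable of per-user tile request sequences."""
    pair_counts = defaultdict(int)   # (tile_i, tile_j) -> co-access count
    from_counts = defaultdict(int)   # tile_i -> outgoing transition count
    for seq in sessions:
        for cur, nxt in zip(seq, seq[1:]):
            pair_counts[(cur, nxt)] += 1
            from_counts[cur] += 1
    return pair_counts, from_counts

def prefetch_candidates(current, pair_counts, from_counts, cached, k=4):
    """Rank uncached tiles by P(next | current) and return the top k."""
    scores = {
        j: n / from_counts[i]
        for (i, j), n in pair_counts.items()
        if i == current and j not in cached
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

sessions = [["a", "b", "c"], ["a", "b", "d"], ["a", "c"]]
pc, fc = build_correlations(sessions)
print(prefetch_candidates("a", pc, fc, cached={"c"}))  # -> ['b']
```

    Here the correlation is a simple first-order transition count; the paper's multi-step prefetching would extend this to chains of transitions.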

  1. A Hybrid Web Browser Architecture for Mobile Devices

    Directory of Open Access Journals (Sweden)

    CHO, J.

    2014-08-01

Web browsing on mobile networks is slow in comparison to wired or Wi-Fi networks. Particularly, the connection establishment phase including DNS lookups and TCP handshakes takes a long time on mobile networks due to its long round-trip latency. In this paper, we propose a novel web browser architecture that aims to improve mobile web browsing performance. Our approach delegates the connection establishment and HTTP header field delivery tasks to a dedicated proxy server located at the joint point between the WAN and mobile network. Since the traffic for the connection establishment and HTTP header fields delivery passes only through the WAN between the proxy and web servers, our approach significantly reduces both the number and size of packets on the mobile network. Our evaluation showed that the proposed scheme reduces the number of mobile network packets by up to 42% and, consequently, the average page loading time is shortened by up to 52%.
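    To make the delegation idea concrete, here is a minimal sketch, under our own simplifying assumptions, of the setup phase such a proxy performs on the wired side: it resolves DNS names and completes TCP handshakes in parallel, so the mobile client pays the long-latency round trips only once, to the proxy.

```python
# Sketch of connection pre-establishment on the wired side of a proxy.
import socket
from concurrent.futures import ThreadPoolExecutor

def pre_establish(hosts, port=80, timeout=5.0):
    """Resolve and connect to every host concurrently; return live sockets."""
    def connect(host):
        # create_connection() performs both the DNS lookup and TCP handshake.
        return host, socket.create_connection((host, port), timeout=timeout)
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return dict(pool.map(connect, hosts))

# A real proxy would keep this pool warm and relay HTTP requests over it;
# this demo (needs network access) only shows the parallel setup phase.
pool = pre_establish(["example.com", "example.org"])
for host, sock in pool.items():
    print(host, "connected via", sock.getpeername())
    sock.close()
```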

  2. Mobile web browsing using the cloud

    CERN Document Server

    Zhao, Bo; Cao, Guohong

    2013-01-01

This brief surveys existing techniques to address the problem of long delays and high power consumption for web browsing on smartphones, which can be due to computational limitations at the smartphone level (e.g., running JavaScript or Flash objects). To address this issue, an architecture called Virtual-Machine based Proxy (VMP) is introduced, shifting the computing from smartphones to the VMP, which may reside in the cloud. Mobile Web Browsing Using the Cloud illustrates the feasibility of deploying the proposed VMP system in 3G networks through a prototype using Xen virtual machines.

  3. Analyzing Web Behavior in Indoor Retail Spaces

    OpenAIRE

    Ren, Yongli; Tomko, Martin; Salim, Flora; Ong, Kevin; Sanderson, Mark

    2015-01-01

    We analyze 18 million rows of Wi-Fi access logs collected over a one year period from over 120,000 anonymized users at an inner-city shopping mall. The anonymized dataset gathered from an opt-in system provides users' approximate physical location, as well as Web browsing and some search history. Such data provides a unique opportunity to analyze the interaction between people's behavior in physical retail spaces and their Web behavior, serving as a proxy to their information needs. We find: ...

  4. Sensor web

    Science.gov (United States)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

A Sensor Web is formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization of all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then listen only during time slots corresponding to those pods that respond.
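    The two-phase idea can be sketched in a few lines. The toy below is our own illustration, not the patented algorithm: a coarse phase snaps a pod clock to whole seconds from the master, and a fine phase estimates the residual offset from ping round trips in the style of Cristian's algorithm, assuming symmetric link delays.

```python
# Toy two-phase clock synchronization (simulation, not the patent's method).
import random

def coarse_sync(pod_time):
    """Cheap phase: snap the pod clock to whole seconds."""
    return round(pod_time)

def fine_sync(offset, n_pings=8):
    """Fine phase: estimate the residual offset (pod = master + offset)
    from simulated ping exchanges, assuming symmetric one-way delays."""
    estimates = []
    for _ in range(n_pings):
        d = random.uniform(0.001, 0.005)      # simulated one-way delay
        t0 = 100.0                            # pod clock when ping is sent
        tm = (t0 - offset) + d                # master's reply timestamp
        t1 = t0 + 2 * d                       # pod clock at reply arrival
        estimates.append(t1 - (tm + (t1 - t0) / 2))
    return sum(estimates) / len(estimates)

print(coarse_sync(1699999999.73))                        # -> 1700000000
print("residual offset:", fine_sync(offset=0.042))       # ~0.042 s
```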

  5. Proxy indicators as measure of local economic dispositions in South ...

    African Journals Online (AJOL)

    in international markets. The proxy is a coincidental and pro-cyclic indicator, with a correlation of 0.86. Key similarities exist between the economy and the profitability of hardware stores, although the proxy is not as accurate as the volume of sales in hardware stores. The correlation is measured at 0.85. The profitability of the ...

  6. Tellurium Stable Isotopes as a Paleoredox Proxy

    Science.gov (United States)

    Wasserman, N.; Johnson, T. M.

    2017-12-01

    Despite arguments for variably-oxygenated shallow waters and anoxic deep marine waters, which delayed animal development until the Neoproterozoic Oxidation Event, the magnitude of atmospheric oxygen during the Proterozoic is still uncertain [1]. The evidence for low pO2 (<0.1-1% PAL) is based on geochemical and isotopic proxies, which track the mobilization of Fe and Mn on the continents. For example, large chromium isotope shifts occur at the Neoproterozoic Oxidation Event due to the initiation of Cr redox cycling, but this proxy is insensitive to fluctuations in the lower-pO2 conditions at other times during the Proterozoic. Tellurium, a metalloid with a lower threshold to oxidation, may be sensitive to pO2 shifts in a lower range. In the reduced forms, Te(-II) and Te(0), the element is insoluble and immobile. However, in the more oxidized phases, Te(IV) and Te(VI), Te can form soluble oxyanions (though it tends to adsorb to Fe-oxyhydroxides and clays) [2]. Te stable isotopes have been shown to fractionate during abiotic or biologic reduction of Te(VI) or Te(IV) to elemental Te(0) [3, 4]. Utilizing hydride generation MC-ICP-MS, we are able to obtain high precision (2σ 0.04‰) measurements of δ128Te/125Te for natural samples containing < 10 ng of Te. A suite of Phanerozoic and Proterozoic ironstones show significant variation in δ128Te/125Te (<0.5‰), suggesting that the Te redox cycle was active during the Proterozoic. Future directions will include Te isotope measurements of Precambrian paleosols to determine natural isotope variation before the Great Oxidation Event and experiments to determine fractionation during adsorption to Fe-oxyhydroxides. [1] Planavsky et al. (2014) Science 346 (6209), pp. 635-638 [2] Qin et al. (2017) Environmental Science and Technology 51 (11), pp 6027-6035 [3] Baesman et al. (2007) Applied Environmental Microbiology 73 (7), pp 2135-2143 [4] Smithers and Krause (1968) Canadian Journal of Chemistry 46(4): pp 583-591

  7. Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools

    Science.gov (United States)

    Brocko, V. R.; Varner, J.

    2007-12-01

    A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in the Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated age and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (extract information from peer reviewed publications only, if you wish), grain size, composition, presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. Data and information may be viewed using tools designed to extract and

  8. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

In multitasking real-time systems, the choice of scheduling algorithm is an important factor in ensuring that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced which then affect the schedulability of the task. In this paper we compare FP and EDF scheduling algorithms in the presence of CRPD using the state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF. This makes the choice of the task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.
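    For FP scheduling, the way CRPD enters the analysis can be sketched with the classic response-time recurrence, here with a per-preemption cache-reload term gamma_j added. This is the simplified textbook form, not the paper's full state-of-the-art analysis:

```python
# Fixed-priority response-time analysis with a simple CRPD term:
#   R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * (C_j + gamma_j)
import math

def response_time(tasks, i, gamma):
    """tasks: list of (C, T) sorted by descending priority; gamma[j] bounds
    the cache-reload cost each preemption by task j can add."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(
            math.ceil(R / T_j) * (C_j + gamma[j])
            for j, (C_j, T_j) in enumerate(tasks[:i])
        )
        R_next = C_i + interference
        if R_next == R:
            return R if R <= T_i else None   # None: deadline (=T_i) missed
        R = R_next

tasks = [(1, 5), (2, 12), (4, 30)]           # (C, T), highest priority first
print([response_time(tasks, i, gamma=[0.5, 1.0, 1.0]) for i in range(3)])
# -> [1, 3.5, 10]; inflating gamma shows how CRPD erodes schedulability
```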

  9. Behavior characterization of the shared last-level cache in a chip multiprocessor

    OpenAIRE

    Benedicte Illescas, Pedro

    2014-01-01

This project consists in analyzing different aspects of the memory hierarchy and understanding its influence on overall system performance. The aspects analyzed are cache replacement algorithms, memory mapping schemes, and memory page policies.

  10. Temperature and Discharge on a Highly Altered Stream in Utah's Cache Valley

    OpenAIRE

    Pappas, Andy

    2013-01-01

    To study the River Continuum Concept (RCC) and the Serial Discontinuity Hypothesis (SDH), I looked at temperature and discharge changes along 52 km of the Little Bear River in Cache Valley, Utah. The Little Bear River is a fourth order stream with one major reservoir, a number of irrigation diversions, and one major tributary, the East Fork of the Little Bear River. Discharge data was collected at six sites on 29 September 2012 and temperature data was collected hourly at eleven sites from 1 ...

  11. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  12. Web Analytics

    Science.gov (United States)

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  13. Web Service

    Science.gov (United States)

    ... topic data in XML format. Using the Web service, software developers can build applications that utilize MedlinePlus health topic information. The service accepts keyword searches as requests and returns relevant ...
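    For developers, a query against the service looks roughly like the Python sketch below; the wsearch.nlm.nih.gov endpoint and parameter names reflect NLM's public search API as we understand it and should be verified against the current documentation before use.

```python
# Sketch of a keyword query against the MedlinePlus web service.
import urllib.parse
import urllib.request

def search_health_topics(term):
    params = urllib.parse.urlencode({"db": "healthTopics", "term": term})
    url = f"https://wsearch.nlm.nih.gov/ws/query?{params}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")   # XML document of ranked topics

print(search_health_topics("diabetes")[:200])   # needs network access
```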

  14. Proxy SDN Controller for Wireless Networks

    Directory of Open Access Journals (Sweden)

    Won-Suk Kim

    2016-01-01

Management of wireless networks as well as wired networks using software-defined networking (SDN) has attracted continual attention. However, control features of a wireless network differ from those of a wired network in several aspects. In this study, we identify the various inefficiencies that arise when controlling and managing wireless networks using SDN and propose an SDN-based control architecture called Proxcon to resolve these problems. Proxcon introduces the concept of a proxy SDN controller (PSC) for wireless network control; the PSC, entrusted with the role of a main controller, performs control operations and provides the latest network state to a network administrator. To address the control inefficiency, Proxcon supports offloaded SDN operations for controlling wireless networks by utilizing the PSC, such as local control by each PSC, hybrid control utilizing the PSC and the main controller, and locally cooperative control utilizing the PSCs. The proposed architecture and the newly supported control operations can enhance scalability and response time when the logically centralized control plane responds to various wireless network events. Through experiments, we verified that the proposed architecture addresses the various control issues such as scalability, response time, and control overhead.

  15. Physics development of web-based tools for use in hardware clusters doing lattice physics

    International Nuclear Information System (INIS)

    Dreher, P.; Akers, W.; Chen, J.; Chen, Y.; Watson, C.

    2002-01-01

Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities.

  16. Physics development of web-based tools for use in hardware clusters doing lattice physics

    International Nuclear Information System (INIS)

Dreher, P.; Akers, Walt; Chen, Jian-ping; Chen, Y.; Watson, William A. III

    2001-01-01

Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities.

  17. Physics development of web-based tools for use in hardware clusters doing lattice physics

    Energy Technology Data Exchange (ETDEWEB)

    Dreher, P.; Akers, W.; Chen, J.; Chen, Y.; Watson, C

    2002-03-01

Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities.

  18. Proxy comparisons for Paleogene sea water temperature reconstructions

    Science.gov (United States)

    de Bar, Marijke; de Nooijer, Lennart; Schouten, Stefan; Ziegler, Martin; Sluijs, Appy; Reichart, Gert-Jan

    2017-04-01

Several studies have reconstructed Paleogene seawater temperatures, using single- or multi-proxy approaches (e.g. Hollis et al., 2012 and references therein), particularly comparing TEX86 with foraminiferal δ18O and Mg/Ca. Whereas trends often agree relatively well, absolute temperatures can differ significantly between proxies, possibly because they are often applied to (extreme) climate events/transitions (e.g. Sluijs et al., 2011), where certain assumptions underlying the temperature proxies may not hold true. A more general long-term multi-proxy temperature reconstruction is therefore necessary to validate the different proxies and underlying presumed boundary conditions. Here we apply a multi-proxy approach using foraminiferal calcite and organic proxies to generate a low-resolution, long-term (80 Myr) paleotemperature record for the Bass River core (New Jersey, North Atlantic). Oxygen isotopes (δ18O), clumped isotopes (Δ47) and Mg/Ca of benthic foraminifera, as well as the organic proxies MBT'-CBT, TEX86H, the U37K' index and the LDI, were determined on the same sediments. The youngest samples of Miocene age are characterized by a high BIT index (>0.8) and fractional abundance of the C32 1,15-diol (>0.6; de Bar et al., 2016) and the absence of foraminifera, all suggesting high continental input and shallow depths. The older sediment layers (~30 to 90 Ma) display BIT values and C32 1,15-diol fractional abundances 28 °C. In contrast, LDI temperatures were considerably lower and varied only between 21 and 19 °C. MBT'-CBT derived mean annual temperatures for the ages of 9 and 20 Ma align well with the TEX86H SSTs. Overall, the agreement of the paleotemperature proxies in terms of main tendencies, and the covariation with the global benthic oxygen isotope compilation, suggests that temperatures in this region varied in concert with global climate variability. The fact that offsets between the different proxies used here remain fairly constant down to 90 Ma ago...

  19. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis allowing for more robust system characterization, including indices such as the 100-year flood return period. These results are significant for water quality management as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport, affecting the downstream water bodies of the Sacramento River and Sacramento-San Joaquin Delta, which provide drinking water to over 25 million people.
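    As an illustration of the frequency-analysis step only (not the WEHY model itself), the sketch below fits a Gumbel distribution to simulated annual-maximum flows and reads off the 100-year return level, i.e. the flow exceeded with probability 1/100 in any given year; the data are synthetic.

```python
# Illustrative flood-frequency analysis on synthetic annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(loc=300.0, scale=80.0, size=100)  # fake flows, m^3/s

loc, scale = stats.gumbel_r.fit(annual_maxima)
q100 = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)
print(f"estimated 100-year flood: {q100:.0f} m^3/s")
```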

  20. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in cloud-based video surveillance systems, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
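    A toy rendering of the two ideas follows; the function names, thresholds, and data layout are our own hypothetical choices, not the paper's design. Replica count scales with a clip's security level, and playback of one camera triggers caching of recent segments from location-correlated neighbors.

```python
# Toy sketch: security-level-driven replica counts and neighbor pre-caching.
def replica_count(security_level, base=1, max_replicas=4):
    """Higher-security footage keeps more redundant copies."""
    return min(base + security_level, max_replicas)

def segments_to_cache(camera, neighbors, recent_segments, window=3):
    """When `camera` is played back, pre-cache the last `window` segments
    of each spatially correlated neighbor camera."""
    plan = []
    for cam in neighbors.get(camera, []):
        plan.extend(recent_segments[cam][-window:])
    return plan

neighbors = {"gate_cam": ["lobby_cam", "parking_cam"]}
recent = {"lobby_cam": ["l1", "l2", "l3", "l4"], "parking_cam": ["p1", "p2"]}
print(replica_count(security_level=2))                   # -> 3
print(segments_to_cache("gate_cam", neighbors, recent))  # -> ['l2', 'l3', 'l4', 'p1', 'p2']
```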

  1. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  2. Agricultural Influences on Cache Valley, Utah Air Quality During a Wintertime Inversion Episode

    Science.gov (United States)

    Silva, P. J.

    2017-12-01

    Several of northern Utah's intermountain valleys are classified as non-attainment for fine particulate matter. Past data indicate that ammonium nitrate is the major contributor to fine particles and that the gas phase ammonia concentrations are among the highest in the United States. During the 2017 Utah Winter Fine Particulate Study, USDA brought a suite of online and real-time measurement methods to sample particulate matter and potential gaseous precursors from agricultural emissions in the Cache Valley. Instruments were co-located at the State of Utah monitoring site in Smithfield, Utah from January 21st through February 12th, 2017. A Scanning mobility particle sizer (SMPS) and aerodynamic particle sizer (APS) acquired size distributions of particles from 10 nm - 10 μm in 5-min intervals. A URG ambient ion monitor (AIM) gave hourly concentrations for gas and particulate ions and a Chromatotec Trsmedor gas chromatograph obtained 10 minute measurements of gaseous sulfur species. High ammonia concentrations were detected at the Smithfield site with concentrations above 100 ppb at times, indicating a significant influence from agriculture at the sampling site. Ammonia is not the only agricultural emission elevated in Cache Valley during winter, as reduced sulfur gas concentrations of up to 20 ppb were also detected. Dimethylsulfide was the major sulfur-containing gaseous species. Analysis indicates that particle growth and particle nucleation events were both observed by the SMPS. Relationships between gas and particulate concentrations and correlations between the two will be discussed.

  3. A Unified Buffering Management with Set Divisible Cache for PCM Main Memory

    Institute of Scientific and Technical Information of China (English)

    Mei-Ying Bian; Su-Kyung Yoon; Jeong-Geun Kim; Sangjae Nam; Shin-Dug Kim

    2016-01-01

This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set divisible last-level cache (LLC). To achieve high performance similar to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) is comprised of dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), and a set divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using our proposed SABU, which can significantly reduce the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency to as little as 0.43 times that of conventional DRAM main memory. Meanwhile, the average memory energy consumption can be reduced by 19.7%.

  4. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  5. Secure Mobile Agent from Leakage-Resilient Proxy Signatures

    Directory of Open Access Journals (Sweden)

    Fei Tang

    2015-01-01

A mobile agent can sign a message on a remote server on behalf of a customer without exposing its secret key; it can be used not only to search for special products or services, but also to make a contract with a remote server. Hence a mobile agent system can serve as an important key technology for electronic commerce. In order to realize such a system, Lee et al. showed that a secure mobile agent can be constructed using proxy signatures. Intuitively, a proxy signature permits an entity (the delegator) to delegate its signing right to another entity (the proxy) to sign some specified messages on behalf of the delegator. However, proxy signatures are often used in scenarios where the signing is done in an insecure environment, for example, the remote server of a mobile agent system. In such a setting, an adversary could launch side-channel attacks to exploit leakage information about the proxy key or even other secret states. Proxy signatures which are secure in the traditional security models obviously cannot provide such security. Based on this consideration, in this paper, we design a leakage-resilient proxy signature scheme for secure mobile agent systems.
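    The basic proxy-signature workflow (delegation by warrant) can be sketched with ordinary Ed25519 signatures. This is only a minimal illustration of the idea using the pyca/cryptography package; it is emphatically not the paper's leakage-resilient construction, and the warrant format is our own invention.

```python
# Minimal delegation-by-warrant sketch with Ed25519 (pyca/cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def raw(pub):
    """Raw 32-byte encoding of an Ed25519 public key."""
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# Delegation: the delegator signs a warrant binding the proxy's key.
delegator = Ed25519PrivateKey.generate()
proxy = Ed25519PrivateKey.generate()
warrant = b"may sign purchase orders until 2025-12-31|" + raw(proxy.public_key())
delegation_sig = delegator.sign(warrant)

# Proxy signing: the proxy signs the message under its own key.
message = b"order #42: 100 units"
proxy_sig = proxy.sign(message)

# Verification: check the warrant chain, then the message signature.
delegator.public_key().verify(delegation_sig, warrant)   # raises if invalid
proxy.public_key().verify(proxy_sig, message)            # raises if invalid
print("proxy signature accepted under the delegator's warrant")
```

    A real scheme binds the warrant into the proxy key itself and, in the leakage-resilient setting, additionally refreshes the secret state so that partial key leakage does not enable forgery.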

  6. Cryptanalytic Performance Appraisal of Improved CCH2 Proxy Multisignature Scheme

    Directory of Open Access Journals (Sweden)

    Raman Kumar

    2014-01-01

Many signature schemes have been proposed in which t-out-of-n threshold schemes are deployed, but they still lack the property of security. In this paper, we discuss the implementation of the improved CCH1 and improved CCH2 proxy multisignature schemes based on the elliptic curve cryptosystem. We present the time complexity, space complexity, and computational overhead of the improved CCH1 and CCH2 proxy multisignature schemes. We present a cryptanalysis of the improved CCH2 proxy multisignature scheme and show that the improved CCH2 scheme suffers from various attacks, namely forgery and framing attacks.

  7. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

Tannins are very common among plant seeds but their effects on the fate of seeds, for example, via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that differed only in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species (Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, the tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable tannin vs condensed tannin) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentrations had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far they will transport seeds.

  8. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.

  9. Munchausen Syndrome by Proxy: Unusual Manifestations and Disturbing Sequelae.

    Science.gov (United States)

    Porter, Gerald E.; And Others

    1994-01-01

    This study documents previously unreported findings in cases of Munchausen Syndrome by Proxy (in which a mother fabricates an illness in her child). In the reported case, esophageal perforation, retrograde intussusception, tooth loss, and bradycardia were found. (Author/DB)

  10. Ocean currents generate large footprints in marine palaeoclimate proxies

    NARCIS (Netherlands)

    van Sebille, E.; Scussolini, P.; Durgadoo, J.V.; Peeters, F.J.C.; Biastoch, A.; Weijer, W.; Turney, C.; Paris, C.B.; Zahn, R.

    2015-01-01

    Fossils of marine microorganisms such as planktic foraminifera are among the cornerstones of palaeoclimatological studies. It is often assumed that the proxies derived from their shells represent ocean conditions above the location where they were deposited. Planktic foraminifera, however, are

  11. Proxy-rated quality of life in Alzheimer's disease

    DEFF Research Database (Denmark)

    Vogel, Asmus; Bhattacharya, Suvosree; Waldorff, Frans Boch

    2012-01-01

The study investigated the change in proxy-rated quality of life (QoL) of a large cohort of home-living patients with Alzheimer's disease (AD) over a period of 36 months.

  12. Applicability of a cognitive questionnaire in the elderly and proxy

    Directory of Open Access Journals (Sweden)

    Renata Areza Fegyveres

The Informant Questionnaire on Cognitive Decline in the Elderly with the Proxy (IQCODE) was developed as a screening tool for cognitive alterations. Objectives: (1) To verify the applicability of the IQCODE in the elderly with limited schooling; (2) To verify the reliability of the responses supplied by the aged and their proxies. Methods: Individuals of a community group were evaluated using the Mini-Mental State Examination (MMSE), the IQCODE, and the Geriatric Depression Scale (GDS). The IQCODE was applied to informants and proxies. Results: We analyzed 44 individuals, aged between 58-82 years (M=66.8, SD=5.97), with a mean schooling level of 3.75 (SD=2.82), and 44 proxies aged 44.5 (SD=13.3), with a mean schooling level of 8.25 (SD=4.3). The mean GDS was 8.22 (SD=4.90), and 13 participants presented a score suggestive of depressive symptoms. The mean IQCODE score was 3.26 (SD=0.69) for the elderly and 3.21 (SD=0.65) for proxy responses. There was no statistical difference between these means. On the MMSE, the mean score was 24.20 (SD=4.14), and 18 participants presented scores below the cut-off. The IQCODE answers by the elderly in this latter group were more congruent with the MMSE than the answers of proxies. Conclusions: The applicability of the IQCODE in a population with little schooling was verified in that the proxy report was similar to the elderly report. We can affirm that the elderly answers were more accurate than the proxies', as they were closer to the MMSE score. The inclusion of a greater number of participants from community-dwelling settings is necessary to confirm the results obtained in this study.

  13. The Difference Between Using Proxy Server and VPN

    Directory of Open Access Journals (Sweden)

    David Dwiputra Kurniadi

    2015-11-01

For example, users look for software or games through the internet. But sometimes there are websites that cannot be opened because they carry an Internet Positive notification. To solve that problem, hackers found a solution by creating proxy servers and VPNs. Nowadays the internet is very easy to access, and there are a lot of proxy servers and VPNs that can be easily used.

  14. Advanced data extraction infrastructure: Web based system for management of time series data

    Energy Technology Data Exchange (ETDEWEB)

Chilingaryan, S; Beglarian, A [Forschungszentrum Karlsruhe, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kopmann, A; Voecking, S, E-mail: Suren.Chilingaryan@kit.ed [University of Muenster, Institut fuer Kernphysik, Wilhelm-Klemm-Strasse 9, 48149 Münster (Germany)

    2010-04-01

During operation of high energy physics experiments a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of measurements. With the growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: a backend system for data analysis and preparation, a web service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and are then used to create generalizing data plots. These plots may include indications of data quality and are generated within a few hundred milliseconds even if very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where the indications of selected sensors fall into specified ranges. Utilization of the caching database allows performing most of such lookups within a second. Based on this functionality, a web interface facilitating fast (Google-maps style) navigation through the data has been implemented. The solution is at the moment used by several slow control systems at the Test Facility for Fusion Magnets (TOSKA) and Karlsruhe Tritium Neutrino (KATRIN).

  15. Advanced data extraction infrastructure: Web based system for management of time series data

    International Nuclear Information System (INIS)

    Chilingaryan, S; Beglarian, A; Kopmann, A; Voecking, S

    2010-01-01

During operation of high energy physics experiments a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of measurements. With the growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: a backend system for data analysis and preparation, a web service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and are then used to create generalizing data plots. These plots may include indications of data quality and are generated within a few hundred milliseconds even if very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where the indications of selected sensors fall into specified ranges. Utilization of the caching database allows performing most of such lookups within a second. Based on this functionality, a web interface facilitating fast (Google-maps style) navigation through the data has been implemented. The solution is at the moment used by several slow control systems at the Test Facility for Fusion Magnets (TOSKA) and Karlsruhe Tritium Neutrino (KATRIN).
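    The slice-aggregation-plus-cache idea can be sketched compactly. The fragment below is our own simplified rendering, not the project's code: raw (timestamp, value) samples are rolled up into fixed-length slices (min/mean/max), and the roll-ups are memoized so repeated plot requests hit the cache instead of the raw data.

```python
# Sketch: aggregate time series over fixed slices and cache the roll-ups.
from collections import defaultdict

_cache = {}   # (sensor, slice_seconds) -> aggregated series

def aggregate(sensor, samples, slice_seconds):
    key = (sensor, slice_seconds)
    if key in _cache:
        return _cache[key]                    # cached roll-up, no raw scan
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // slice_seconds)].append(value)
    series = [
        (b * slice_seconds, min(v), sum(v) / len(v), max(v))
        for b, v in sorted(buckets.items())
    ]
    _cache[key] = series
    return series

samples = [(0, 1.0), (30, 3.0), (70, 2.0), (95, 4.0)]
print(aggregate("magnet_T1", samples, slice_seconds=60))
# -> [(0, 1.0, 2.0, 3.0), (60, 2.0, 3.0, 4.0)]  (start, min, mean, max)
```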

  16. Fiber webs

    Science.gov (United States)

    Roger M. Rowell; James S. Han; Von L. Byrd

    2005-01-01

    Wood fibers can be used to produce a wide variety of low-density three-dimensional webs, mats, and fiber-molded products. Short wood fibers blended with long fibers can be formed into flexible fiber mats, which can be made by physical entanglement, nonwoven needling, or thermoplastic fiber melt matrix technologies. The most common types of flexible mats are carded, air...

  17. Web Sitings.

    Science.gov (United States)

    Lo, Erika

    2001-01-01

    Presents seven mathematics games, located on the World Wide Web, for elementary students, including: Absurd Math: Pre-Algebra from Another Dimension; The Little Animals Activity Centre; MathDork Game Room (classic video games focusing on algebra); Lemonade Stand (students practice math and business skills); Math Cats (teaches the artistic beauty…

  18. Tracheal web

    International Nuclear Information System (INIS)

    Legasto, A.C.; Haller, J.O.; Giusti, R.J.

    2004-01-01

    Congenital tracheal web is a rare entity often misdiagnosed as refractory asthma. Clinical suspicion based on patient history, examination, and pulmonary function tests should lead to its consideration. Bronchoscopy combined with CT imaging and multiplanar reconstruction is an accepted, highly sensitive means of diagnosis. (orig.)

  19. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    Science.gov (United States)

    Poat, M. D.; Lauret, J.

    2017-10-01

As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we will first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We will then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster - while caching is not a native feature of CephFS (it exists only in the Ceph object store), we will show how one can implement a caching mechanism profiting from an implementation at a lower level. As our illustration, we will present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms with applicable uses such as distributed storage systems and seeking an overall IO performance gain.

  20. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  2. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  3. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

In this paper we present an implicit dictionary with the working set property, i.e. a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and ...

  4. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis) appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus), which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were near (within 50 m) or far (200 m) from the railway. Relative to far ones, near transects had 2.4 times more squirrel sightings, but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  5. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

The distribution sweeping framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained using this framework. The obtained algorithms were efficient but not optimal. In this paper, we improve the framework to obtain algorithms with the optimal I/O complexity of O(sort_P(N) + K/PB) for a number of problems on axis-aligned objects; P denotes the number of cores/processors, B denotes the number of elements that fit in a cache line, N and K denote the sizes of the input and output, respectively, and sort_P(N) denotes the I/O complexity of sorting N items using P processors in the PEM model.

  6. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.
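    For readers unfamiliar with the functional form, a generic stretched exponential f(t) = exp(-(t/τ)^β) can be fitted as below. This is our own illustration on synthetic data, not the paper's modified model or its parameters:

```python
# Fit a stretched exponential f(t) = exp(-(t/tau)**beta) to noisy decay data.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.01, 10, 200)
rng = np.random.default_rng(0)
y = stretched_exp(t, tau=2.0, beta=0.6) + rng.normal(0, 0.01, t.size)

(tau_hat, beta_hat), _ = curve_fit(stretched_exp, t, y, p0=(1.0, 1.0))
print(f"tau ~ {tau_hat:.2f}, beta ~ {beta_hat:.2f}")  # beta < 1: heavy tail
```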

  7. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For more in-depth scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available online.

  8. Improved cache performance in Monte Carlo transport calculations using energy banding

    Science.gov (United States)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
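    The locality gain can be sketched in a few lines. Under our own simplifications (a single flat table, uniform energies), particles are grouped by energy band so that each band's slice of the cross-section table is processed while it is hot in cache, instead of letting every particle jump randomly through the whole table:

```python
# Sketch of energy banding for cross-section lookups (illustration only).
import numpy as np

def banded_lookup(energies, xs_table, n_bands=8):
    """energies: per-particle energies in [0, 1); xs_table: per-energy-bin
    cross sections. Returns per-particle cross sections, band by band."""
    n_bins = len(xs_table)
    bins = (energies * n_bins).astype(int)
    band_edges = np.linspace(0, n_bins, n_bands + 1).astype(int)
    out = np.empty_like(energies)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (bins >= lo) & (bins < hi)       # particles in this band
        out[mask] = xs_table[bins[mask]]        # touches only one table slice
    return out

rng = np.random.default_rng(1)
xs = rng.random(1_000_000)                      # stand-in cross-section table
e = rng.random(10_000)
print(banded_lookup(e, xs, n_bands=16)[:5])
```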

  9. Security in the Cache and Forward Architecture for the Next Generation Internet

    Science.gov (United States)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

The future Internet architecture will be composed predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities, since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping-stone as we build the future Internet.

  10. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  11. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  12. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  13. Family factors in end-of-life decision-making: family conflict and proxy relationship.

    Science.gov (United States)

    Parks, Susan Mockus; Winter, Laraine; Santana, Abbie J; Parker, Barbara; Diamond, James J; Rose, Molly; Myers, Ronald E

    2011-02-01

    Few studies have examined proxy decision-making regarding end-of-life treatment decisions. Proxy accuracy is defined as whether proxy treatment choices are consistent with the expressed wishes of their index elder. The purpose of this study was to examine proxy accuracy in relation to two family factors that may influence it: perceived family conflict and type of elder-proxy relationship. Telephone interviews with 202 community-dwelling elders and their proxy decision makers were conducted, including the Life-Support Preferences Questionnaire (LSPQ), a measure of family conflict, and sociodemographic characteristics, including type of relationship. Elder-proxy accuracy was associated with the type of elder-proxy relationship: adult children demonstrated the lowest elder-proxy accuracy and spousal proxies the highest. Elder-proxy accuracy was also associated with family conflict: proxies reporting higher family conflict had lower elder-proxy accuracy. No interaction between family conflict and relationship type was revealed. Spousal proxies were more accurate in their substituted judgment than adult children, and proxies who perceived a higher degree of family conflict tended to be less accurate than those with lower family conflict. Health care providers should be aware of these family factors when discussing advance care planning.

  14. ORGANIZATION AND VISUALIZATION OF OBSERVATIONAL DATA USING WEB MAPPING SERVICES

    Directory of Open Access Journals (Sweden)

    A. A. Kadochnikov

    2014-01-01

    Full Text Available Current trends in the field of nature protection include monitoring environmental pollution resulting from human impact on nature and from natural processes. Monitoring the state of the environment around various industries can reduce the cost of eliminating the impact of industrial accidents, which in turn reduces the risk of soil and surface-water contamination and the loss of vegetation and wildlife. We consider the problem of creating information-analytical systems for environmental monitoring of the natural environment and resources, built on the basis of GIS technologies, the Internet, remote sensing data processing, and data from monitoring stations. Considerable attention is given to web services, software interfaces, and generally accepted standards. The author was directly involved in the development and implementation of projects of ecological orientation. Many different software libraries and components were used in developing the software. The web mapping user interface was created using a number of open source libraries. To create the server-side web application the author used the GIS platforms MapGuide Open Source and Minnesota MapServer. GeoWebCache was another essential component of the distributed web mapping environmental monitoring applications.

  15. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    International Nuclear Information System (INIS)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of 18O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  16. Using dCache in Archiving Systems oriented to Earth Observation

    Science.gov (United States)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The object of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular, dCache aims to provide a file system tree view of the data repository, exchanging data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot spot determination and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of large computer centers and universities with big amounts of data, which put their efforts together and founded EMI (European Middleware Initiative). At present, dCache is mature enough to be deployed, being used by several research centers of relevance (e.g., the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capacities over a simulated environment to get in line with the ESA requirements for a geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as the way to provide maximum quality for storage and dissemination services at minimum cost.

  17. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    Energy Technology Data Exchange (ETDEWEB)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of {sup 18}O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  18. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
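
    The scheduling discussion is easy to picture in miniature. Below is a generic Circular SCAN ordering of pending block requests, a sketch of the base algorithm rather than the authors' CD-ROM-specific adaptation: requests at or beyond the head position are served in ascending order, after which the head wraps to the lowest outstanding block.

        def c_scan_order(pending, head):
            """Order pending block requests Circular-SCAN style."""
            ahead = sorted(b for b in pending if b >= head)    # one ascending sweep
            behind = sorted(b for b in pending if b < head)    # served after the wrap
            return ahead + behind

        # Example with the head at block 50:
        print(c_scan_order([10, 95, 60, 33, 77], head=50))     # [60, 77, 95, 10, 33]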

  19. An illustration of web survey

    DEFF Research Database (Denmark)

    He, Chen

    A former study in Danish primary schools has shown that there is an association between organic school food policies and indicators (proxies) for healthy eating among children when school food coordinators' statements on indicators (proxies) for healthy eating are used as the variable. This proj...... to be studied in a comparative study design where the Danish case (existing data from the WBQ) will be compared with new data from school food service in Germany, Italy and Finland. These data are going to be collected through a web survey........ This project continues to search for the above signs of associations but involves also a "bottom" level (pupils) perspective in addition to the "top" level (school food coordinators) in the previous study. The project is to study the following hypothesis: organic food service praxis/policy (POP) is associated...... with praxis/policies for healthier eating in Danish school food service. In other words, whether organic procurement policies and the resulting praxis in schools can help build healthier eating habits among pupils in such schools as compared to schools without organic policies/praxis. The last perspective is going...

  20. Fabryq: Using Phones as Smart Proxies to Control Wearable Devices from the Web

    Science.gov (United States)

    2014-06-12

    may track a user’s liquid intake over the course of a day and correlate it with heart rate variability. It may require information about liquid level..., which exposes a standard Heart Rate service and characteristic for use in their application. Data are stored in a simple MySQL backend.

  1. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
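
    The division of labour described above can be sketched in a few lines (an illustration under assumed interfaces, not the PEST implementation): the cheap proxy fills the finite-difference Jacobian, and the expensive model runs only to test the resulting parameter upgrade.

        import numpy as np

        def proxy_assisted_step(p, obs, proxy, full_model, eps=1e-4, lam=1e-2):
            """One damped Gauss-Newton upgrade. `proxy` and `full_model` both
            map a parameter vector to simulated observations; only the proxy
            is run for the many perturbed evaluations the Jacobian needs."""
            r = obs - full_model(p)                 # residuals from the real model
            base = proxy(p)
            J = np.column_stack([                   # finite differences on the proxy
                (proxy(p + eps * np.eye(len(p))[i]) - base) / eps
                for i in range(len(p))])
            dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
            p_new = p + dp                          # candidate upgrade
            # The expensive model is consulted once, to accept or reject it.
            return p_new if np.sum((obs - full_model(p_new))**2) < np.sum(r**2) else p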

  2. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper, in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages of some types sampled within some time bound instead of the entire subset. Hence, in order to cater for such situations, in this paper, we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data, in addition to its type. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.

  3. CSU Final Report on the Math/CS Institute CACHE: Communication-Avoiding and Communication-Hiding at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State University

    2014-06-10

    The CACHE project entails researching and developing new versions of numerical algorithms that result in data reuse that can be scheduled in a communication-avoiding way. Since memory accesses take more time than any computation and require the most power, the focus on turning data reuse into data locality is critical to improving performance and reducing power usage in scientific simulations. This final report summarizes the accomplishments at Colorado State University as part of the CACHE project.

  4. Using WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  5. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content, facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, specific cooperation programme, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web, aiming at cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people in integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "web of data", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).

  6. Satellite Altimeters and Gravimeters as Proxy of the Indonesian Throughflow

    Science.gov (United States)

    Susanto, R. D.; Song, Y. T.

    2014-12-01

    The Indonesian Throughflow (ITF), the only pathway for interocean exchange from the Pacific to the Indian Ocean, plays an important role in global ocean circulation and climate. Yet, continuous ITF measurement is difficult and expensive. We demonstrate a plausible approach to derive the ITF transport proxy using satellite altimetry sea surface height (SSH), gravimetry ocean bottom pressure (OBP) data, in situ measurements from the Makassar Strait from 1996-1998 and 2004-2009, and a theoretical formulation. We first identified the optimal locations in the Pacific and Indian Ocean based on the optimal correlation between the ITF transport through the Makassar Strait and the pressure gradients, represented by the SSH and OBP differences between the Pacific and Indian Oceans at a 1° x 1° horizontal resolution. These geographical locations (centred off the equator in the western Pacific Ocean and along the equator in the eastern Indian Ocean) that control the strength and variability of the ITF transport in the Makassar Strait differ from early studies. The proxy time series follow the observation time series quite well, resolving the intraseasonal, monsoonal, and interannual signals with the 1993-2011 annual mean proxy transport of 11.6 ± 3.2 Sv. Our formulation provides a continuous approach to derive the ITF proxy as long as the satellite data are available. Such a continuous record would be difficult to achieve by in situ measurements alone due to logistical and financial challenges. Ideally, the proxy can be used to complement or fill in the gaps of the observations for a continuous ITF proxy for better understanding the ocean climate and validating ocean circulation models.
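
    How such a proxy could be assembled is easy to sketch (illustrative only; the regression form and all names are assumptions, not the authors' formulation): regress the observed Makassar transport on the SSH difference between the chosen Pacific and Indian Ocean points, then apply the fitted relation over the full satellite record.

        import numpy as np

        def fit_itf_proxy(transport_obs, ssh_pacific, ssh_indian):
            """Least-squares fit of observed transport to the inter-basin
            pressure gradient, represented here by the SSH difference
            (all series aligned in time)."""
            dssh = ssh_pacific - ssh_indian
            A = np.column_stack([dssh, np.ones_like(dssh)])
            coef, *_ = np.linalg.lstsq(A, transport_obs, rcond=None)
            return coef                               # (slope, intercept)

        def itf_proxy(coef, ssh_pacific, ssh_indian):
            """Extend the proxy wherever satellite data exist."""
            return coef[0] * (ssh_pacific - ssh_indian) + coef[1]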

  7. Semantic Web Requirements through Web Mining Techniques

    OpenAIRE

    Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza

    2012-01-01

    In recent years, Semantic web has become a topic of active research in several fields of computer science and has applied in a wide range of domains such as bioinformatics, life sciences, and knowledge management. The two fast-developing research areas semantic web and web mining can complement each other and their different techniques can be used jointly or separately to solve the issues in both areas. In addition, since shifting from current web to semantic web mainly depends on the enhance...

  8. DEVELOPMENT OF AUTHENTICATION AND AUTHORIZATION MECHANISMS FOR CONFIG MANAGEMENT IN LINUX CONTAINER-BASED SHARED WEB HOSTING

    Directory of Open Access Journals (Sweden)

    Saifuddin Saifuddin

    2016-08-01

    The results show that the config of web applications located in directories on a single server can be read using these methods, but cannot be decoded to reveal the user, password, and dbname, because authorization is granted such that configs can be decoded only from directories already registered. In performance testing of latency, memory, and CPU, the system tracked the previous system closely. Using the cache, the response time when accessed simultaneously at 20 clicks per user was 941.4 ms for the old system and 786.6 ms for the new one.

  9. The Intra-Industry Effects of Proxy Contests

    OpenAIRE

    Fang Chen; Jian Huang; Han Yu

    2018-01-01

    This paper is the first study on the intra-industry effects of proxy contests. Using a sample of proxy contests from January 1988 through December 2008, we identify a striking cross-sectional difference in market reaction to the target companies. As much as 61% of the target firms have significant positive cumulative abnormal returns (CARs) in the period (‒10, +10) around the announcement day, while 39% of the target firms have negative CARs in the same event window. Moreover, we find th...

  10. Health anxiety by proxy in women with severe health anxiety

    DEFF Research Database (Denmark)

    Thorgaard, Mette Viller; Frostholm, Lisbeth; Walker, Lynn

    2017-01-01

    Health anxiety (HA) refers to excessive worries and anxiety about harbouring serious illness based on misinterpretation of bodily sensations or changes as signs of serious illness. Severe HA is associated with disability and high health care costs. However, the impact of parental HA on excessive...... concern with their children's health (health anxiety by proxy) has scarcely been investigated. The aim of this study is to investigate HA by proxy in mothers with severe HA. Fifty mothers with severe HA and two control groups were included, i.e. mothers with rheumatoid arthritis (N = 49) and healthy mothers (N...

  11. Development of grid-like applications for public health using Web 2.0 mashup techniques.

    Science.gov (United States)

    Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies, including Yahoo! Pipes, within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "system of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.

  12. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

    Responsive Web Design Workflow is a literature review about Responsive Web Design, a web standards based modern web design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites and describe a workflow for responsive web design projects. Responsive web design is a paradigm to create adaptive websites, which respond to the properties of the media that is used to render them. The three key elements of responsi...

  13. The People of Bear Hunter Speak: Oral Histories of the Cache Valley Shoshones Regarding the Bear River Massacre

    OpenAIRE

    Crawford, Aaron L.

    2007-01-01

    The Cache Valley Shoshone are the survivors of the Bear River Massacre, where a battle between a group of U.S. volunteer troops from California and a Shoshone village degenerated into the worst Indian massacre in U.S. history, resulting in the deaths of over 200 Shoshones. The massacre occurred due to increasing tensions over land use between the Shoshones and the Mormon settlers. Following the massacre, the Shoshones attempted settling in several different locations in Box Elder County, eventu...

  14. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents a uniquely designed, end-to-end interference-minimizing, high-performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  15. Caching-Aided Collaborative D2D Operation for Predictive Data Dissemination in Industrial IoT

    OpenAIRE

    Orsino, Antonino; Kovalchukov, Roman; Samuylov, Andrey; Moltchanov, Dmitri; Andreev, Sergey; Koucheryavy, Yevgeni; Valkama, Mikko

    2018-01-01

    Industrial automation deployments constitute challenging environments where moving IoT machines may produce high-definition video and other heavy sensor data during surveying and inspection operations. Transporting massive contents to the edge network infrastructure and then eventually to the remote human operator requires reliable and high-rate radio links supported by intelligent data caching and delivery mechanisms. In this work, we address the challenges of contents dissemination in chara...

  16. Potential Mechanisms Driving Population Variation in Spatial Memory and the Hippocampus in Food-caching Chickadees.

    Science.gov (United States)

    Croston, Rebecca; Branch, Carrie L; Kozlovsky, Dovid Y; Roth, Timothy C; LaDage, Lara D; Freas, Cody A; Pravosudov, Vladimir V

    2015-09-01

    Harsh environments and severe winters have been hypothesized to favor improvement of the cognitive abilities necessary for successful foraging. Geographic variation in winter climate, then, is likely associated with differences in selection pressures on cognitive ability, which could lead to evolutionary changes in cognition and its neural mechanisms, assuming that variation in these traits is heritable. Here, we focus on two species of food-caching chickadees (genus Poecile), which rely on stored food for survival over winter and require the use of spatial memory to recover their stores. These species also exhibit extensive climate-related population level variation in spatial memory and the hippocampus, including volume, the total number and size of neurons, and adults' rates of neurogenesis. Such variation could be driven by several mechanisms within the context of natural selection, including independent, population-specific selection (local adaptation), environment experience-based plasticity, developmental differences, and/or epigenetic differences. Extensive data on cognition, brain morphology, and behavior in multiple populations of these two species of chickadees along longitudinal, latitudinal, and elevational gradients in winter climate are most consistent with the hypothesis that natural selection drives the evolution of local adaptations associated with spatial memory differences among populations. Conversely, there is little support for the hypotheses that environment-induced plasticity or developmental differences are the main causes of population differences across climatic gradients. Available data on epigenetic modifications of memory ability are also inconsistent with the observed patterns of population variation, with birds living in more stressful and harsher environments having better spatial memory associated with a larger hippocampus and a larger number of hippocampal neurons. Overall, the existing data are most consistent with the

  17. A Cross-Layer Framework for Designing and Optimizing Deeply-Scaled FinFET-Based Cache Memories

    Directory of Open Access Journals (Sweden)

    Alireza Shafaei

    2015-08-01

    Full Text Available This paper presents a cross-layer framework in order to design and optimize energy-efficient cache memories made of deeply-scaled FinFET devices. The proposed design framework spans device, circuit and architecture levels and considers both super- and near-threshold modes of operation. Initially, at the device level, seven FinFET devices on a 7-nm process technology are designed, in which only one geometry-related parameter (e.g., fin width, gate length, gate underlap) is changed per device. Next, at the circuit level, standard 6T and 8T SRAM cells made of these 7-nm FinFET devices are characterized and compared in terms of static noise margin, access latency, leakage power consumption, etc. Finally, cache memories with all different combinations of devices and SRAM cells are evaluated at the architecture level using a modified version of the CACTI tool with FinFET support and other considerations for deeply-scaled technologies. Using this design framework, it is observed that an L1 cache memory made of longer-channel FinFET devices operating in the near-threshold regime achieves the minimum energy operation point.

  18. Comparison of independent proxies in the reconstruction of deep ...

    African Journals Online (AJOL)

    Independent proxies were assessed in two Late Quaternary sediment cores from the eastern South Atlantic to compare deep-water changes during the last 400 kyr. ... is exclusively observed during interglacials, with maximum factor loadings in ... only slightly without a significant glacial-interglacial pattern, as measured in a ...

  19. TEX86 paleothermometry: proxy validation and application in marine sediments

    NARCIS (Netherlands)

    Huguet, C.

    2007-01-01

    Determination of past sea surface temperature (SST) is of primary importance for the reconstruction of natural climatic changes, modelling of climate and reconstruction of ocean circulation. Recently, a new SST proxy was introduced, the TetraEther indeX of lipids with 86 carbons (TEX86), which is

  20. Shareholder Activism through Proxy Proposals: The European Perspective

    NARCIS (Netherlands)

    Cziraki, P.; Renneboog, L.D.R.; Szilagyi, P.G.

    2009-01-01

    This paper is the first to investigate the corporate governance role of shareholder-initiated proxy proposals in European firms. While proposals in the US are nonbinding even if they pass the shareholder vote, they are legally binding in the UK and most of Continental Europe. Nonetheless, submissions

  1. Munchausen Syndrome by Proxy: A Study of Psychopathology.

    Science.gov (United States)

    Bools, Christopher; And Others

    1994-01-01

    This study evaluated 100 mothers with Munchausen Syndrome by Proxy (the fabrication of illness by a mother in her child). Approximately half of the mothers had either smothered or poisoned their child as part of their fabrications. Lifetime psychiatric histories were reported for 47 of the mothers. The most notable psychopathology was personality…

  2. Climate proxy data as groundwater tracers in regional flow systems

    Science.gov (United States)

    Clark, J. F.; Morrissey, S. K.; Stute, M.

    2008-05-01

    The isotopic and chemical signatures of groundwater reflect local climate conditions. By systematically analyzing groundwater and determining its hydrologic setting, records of past climates can be constructed. Because of their chemistries and relatively uncomplicated source functions, dissolved noble gases have yielded reliable records of continental temperatures for the last 30,000 to 50,000 years. Variations in the stable isotope compositions of groundwater due to long-term climate changes have also been documented over these time scales. Because glacial-interglacial climate changes are relatively well known, these climate proxies can be used as "stratigraphic" markers within flow systems and used to distinguish groundwaters that recharged during the Holocene from those recharged during the last glacial period, important time scales for distinguishing regional and local flow systems in many aquifers. In southern Georgia, the climate proxy tracers were able to identify leakage from surface aquifers into the Upper Floridan aquifer in areas previously thought to be confined. In south Florida, the transition between Holocene and glacial signatures in the Upper Floridan aquifer occurs mid-way between the recharge area and Lake Okeechobee. Down gradient of the lake, the proxies are uniform, indicating recharge during the last glacial period. Furthermore, there is no evidence for leakage from the shallow aquifers into the Upper Floridan. In the Lower Floridan, the climate proxies indicate that the saline water entered the aquifer after sea level rose to its present level.

  3. Slices: A shape-proxy based on planar sections

    KAUST Repository

    McCrae, James

    2011-12-01

    Minimalist object representations or shape-proxies that spark and inspire human perception of shape remain an incompletely understood, yet powerful aspect of visual communication. We explore the use of planar sections, i.e., the contours of intersection of planes with a 3D object, for creating shape abstractions, motivated by their popularity in art and engineering. We first perform a user study to show that humans do define consistent and similar planar section proxies for common objects. Interestingly, we observe a strong correlation between user-defined planes and geometric features of objects. Further, we show that the problem of finding the minimum set of planes that captures a set of 3D geometric shape features is NP-hard, and that this minimal set is not always the proxy a user would pick. Guided by the principles inferred from our user study, we present an algorithm that progressively selects planes to maximize feature coverage, which in turn influences the selection of subsequent planes. The algorithmic framework easily incorporates various shape features, while their relative importance values are computed and validated from the user-study data. We use our algorithm to compute planar slices for various objects, validate their utility towards object abstraction using a second user study, and conclude by showing potential applications of the extracted planar-slice shape proxies.
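
    The progressive selection the authors describe is, in spirit, a greedy weighted set cover. A minimal sketch of that loop (the feature sets and importance weights are abstracted away; this is not the paper's scoring function):

        def greedy_plane_selection(planes, feature_weight, max_planes):
            """planes: dict plane_id -> set of feature ids the plane captures.
            feature_weight: dict feature id -> importance (e.g., estimated
            from user-study data). Repeatedly pick the plane that covers the
            most uncovered weighted features; earlier picks shape later ones."""
            planes = dict(planes)            # local copy; we remove chosen planes
            covered, chosen = set(), []
            for _ in range(max_planes):
                best = max(planes, default=None, key=lambda pl: sum(
                    feature_weight[f] for f in planes[pl] - covered))
                if best is None or not (planes[best] - covered):
                    break                    # nothing new left to cover
                chosen.append(best)
                covered |= planes[best]
                del planes[best]
            return chosen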

  4. Munchausen Syndrome by Proxy (MSBP): An Intergenerational Perspective.

    Science.gov (United States)

    Rappaport, Sol R.; Hochstadt, Neil J.

    1993-01-01

    Presents new information about Munchausen Syndrome by Proxy (MSBP), factitious disorder in which caretaker may induce or exaggerate medical illness in his or her child that may lead to illness and even death. Provides psychosocial history of caregiver using intergenerational model. Presents case of MSBP involving three siblings and information…

  5. Slices: A shape-proxy based on planar sections

    KAUST Repository

    McCrae, James; Singh, Karan S.; Mitra, Niloy J.

    2011-01-01

    of intersection of planes with a 3D object, for creating shape abstractions, motivated by their popularity in art and engineering. We first perform a user study to show that humans do define consistent and similar planar section proxies for common objects

  6. A Web portal for CMS Grid job submission and management

    Energy Technology Data Exchange (ETDEWEB)

    Braun, David [Department of Physics, Purdue University, W. Lafayette, IN 47907 (United States); Neumeister, Norbert, E-mail: neumeist@purdue.ed [Rosen Center for Advanced Computing, Purdue University, W. Lafayette, IN 47907 (United States)

    2010-04-01

    We present a Web portal for CMS Grid job submission and management. The portal is built using a JBoss application server. It has a three-tier architecture: presentation, business logic, and data. Bean-based business logic interacts with the underlying Grid infrastructure and pre-existing external applications, while the presentation layer uses AJAX to offer an intuitive, functional interface to the back-end. Application data aggregating information from the portal as well as the external applications is persisted to the server memory cache and then to a backend database. We describe how the portal exploits standard, off-the-shelf commodity software together with existing Grid infrastructures in order to facilitate job submission and monitoring for the CMS collaboration. This paper describes the design, development, current functionality and plans for future enhancements of the portal.

  7. A Web portal for CMS Grid job submission and management

    International Nuclear Information System (INIS)

    Braun, David; Neumeister, Norbert

    2010-01-01

    We present a Web portal for CMS Grid job submission and management. The portal is built using a JBoss application server. It has a three-tier architecture: presentation, business logic, and data. Bean-based business logic interacts with the underlying Grid infrastructure and pre-existing external applications, while the presentation layer uses AJAX to offer an intuitive, functional interface to the back-end. Application data aggregating information from the portal as well as the external applications is persisted to the server memory cache and then to a backend database. We describe how the portal exploits standard, off-the-shelf commodity software together with existing Grid infrastructures in order to facilitate job submission and monitoring for the CMS collaboration. This paper describes the design, development, current functionality and plans for future enhancements of the portal.

  8. On the Use of Human Mobility Proxies for Modeling Epidemics

    Science.gov (United States)

    Tizzoni, Michele; Bajardi, Paolo; Decuyper, Adeline; Kon Kam King, Guillaume; Schneider, Christian M.; Blondel, Vincent; Smoreda, Zbigniew; González, Marta C.; Colizza, Vittoria

    2014-01-01

    Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control, but may be hindered by data incompleteness or unavailability. Here we explore the opportunity of using proxies for individual mobility to describe commuting flows and predict the diffusion of an influenza-like-illness epidemic. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from (i) official census surveys, (ii) proxy mobility data extracted from mobile phone call records, and (iii) the radiation model calibrated with census data. Metapopulation models defined on these countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from mobile phones and census sources are similar and highly correlated; however, a systematic overestimation of commuting traffic in the mobile phone data is observed. Once the mobile phone commuting network is used in the epidemic model, this leads to epidemics that spread faster than on census commuting networks, while preserving to a high degree the order of infection of newly affected locations. The proxies' calibration affects the agreement of arrival times across different models, and the observed topological and traffic discrepancies among mobility sources alter the resulting epidemic invasion patterns. Results also suggest that proxies perform differently in approximating commuting patterns for disease spread at different resolution scales, with the radiation model showing higher accuracy than mobile phone data when the seed is central in the network, the opposite being observed for peripheral locations. Proxies should therefore be
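
    Of the three mobility layers, only the radiation model is fully specified by a closed formula. Its commuting flux between locations i and j (Simini et al. 2012) is easy to state in code; the variable names below are the standard ones, not taken from this paper:

        def radiation_flux(T_i, m_i, n_j, s_ij):
            """Expected commuters from i to j under the radiation model.
            T_i:  total number of commuters leaving location i
            m_i:  population of the origin i
            n_j:  population of the destination j
            s_ij: population inside the circle of radius d_ij centred on i,
                  excluding the populations of i and j themselves."""
            return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))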

  9. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such combination are reviewed in depth. The details of building SCC, including basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, surpasses that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.
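
    The cluster-oriented caching idea can be illustrated generically (a sketch of the collaborative lookup pattern, not the paper's SCC algorithms): a node tries its own store first, then its one-hop cluster peers, and falls back to a costly wide-area fetch only on a cluster-wide miss.

        class CollaborativeCache:
            def __init__(self, peers, origin_fetch):
                self.store = {}                 # name -> content (ICN-style named data)
                self.peers = peers              # other cache nodes in the cluster
                self.origin_fetch = origin_fetch

            def lookup_local(self, name):
                return self.store.get(name)

            def get(self, name):
                content = self.lookup_local(name)
                if content is None:             # ask the cluster before the origin
                    for peer in self.peers:
                        content = peer.lookup_local(name)
                        if content is not None:
                            break
                if content is None:
                    content = self.origin_fetch(name)   # wide-area fetch, worst case
                self.store[name] = content      # keep a local replica for next time
                return content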

  10. Diets of three species of anurans from the Cache Creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  11. Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs

    Directory of Open Access Journals (Sweden)

    M. Komar

    2015-01-01

    Full Text Available In this paper, the principal architecture of a general-purpose CPU and its main components are discussed, the evolution of CPUs is considered, and drawbacks that prevent future CPU development are mentioned. Further, solutions proposed so far are addressed and a new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables a reliable interaction between cores in multicore CPUs using the terahertz band, 0.1-10 THz. The presented architecture addresses the scalability problem of existing processors and may potentially allow scaling them to tens of cores. As an in-depth analysis of the applicability of the suggested architecture requires accurate prediction of traffic in current and next generations of processors, we consider a set of approaches for traffic estimation in modern CPUs, discussing their benefits and drawbacks. The authors identify traffic measurement using existing software tools as the most promising approach for traffic estimation, and they use the Intel Performance Counter Monitor for this purpose. Three types of CPU load are considered, including two artificial tests and background system load. For each load type, the amount of data transmitted through the L2-L3 interface is reported for various input parameters, including the number of active cores, together with its dependence on the number of cores and the operational frequency.

  12. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is produced (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). On this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  13. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such combination are reviewed in depth. The details of building SCC, including basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, surpasses that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.

  14. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    Science.gov (United States)

    Haruna, J.; Murai, K.; Itoh, J.; Yamada, N.; Hirano, Y.; Fujimori, T.; Homma, T.

    2011-03-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, a FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective, but also the heaviest, Mg-alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system, and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype flywheel was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.
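
    The energy figures quoted above can be checked against the basic flywheel relation E = ½Iω², with I = ½mr² for a solid disc. A small worked sketch; the rotor mass and radius are assumed example values, not the paper's designs:

        import math

        def required_rpm(energy_j, mass_kg, radius_m):
            """Rotation speed needed to store energy_j in a solid-disc flywheel.
            E = 0.5 * I * w**2 and I = 0.5 * m * r**2, so w = sqrt(4E / (m r^2))."""
            omega = math.sqrt(4.0 * energy_j / (mass_kg * radius_m**2))
            return omega * 60.0 / (2.0 * math.pi)

        # 3 MJ (about 0.8 kWh) in an assumed 50 kg disc of 0.2 m radius:
        print(round(required_rpm(3e6, 50, 0.2)))   # ~23400 rpm

    Heavier or larger-radius rotors reach the same energy at lower speed, which is the trade-off behind the Cr-Mo steel, Mg-alloy and CFRP options discussed above.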

  15. Experimental Evaluation of a High Speed Flywheel for an Energy Cache System

    International Nuclear Information System (INIS)

    Haruna, J; Itoh, J; Murai, K; Yamada, N; Hirano, Y; Homma, T; Fujimori, T

    2011-01-01

    A flywheel energy cache system (FECS) is a mechanical battery that can charge/discharge electricity by converting it into the kinetic energy of a rotating flywheel, and vice versa. Compared to a chemical battery, a FECS has great advantages in durability and lifetime, especially in hot or cold environments. Design simulations of the FECS were carried out to clarify the effects of the composition and dimensions of the flywheel rotor on the charge/discharge performance. The rotation speed of a flywheel is limited by the strength of the materials from which it is constructed. Three materials, carbon fiber-reinforced polymer (CFRP), Cr-Mo steel, and a Mg alloy were examined with respect to the required weight and rotation speed for a 3 MJ (0.8 kWh) charging/discharging energy, which is suitable for an FECS operating with a 3-5 kW photovoltaic device in an ordinary home connected to a smart grid. The results demonstrate that, for a stationary 3 MJ FECS, Cr-Mo steel was the most cost-effective, but also the heaviest, Mg-alloy had a good balance of rotation speed and weight, which should result in reduced mechanical loss and enhanced durability and lifetime of the system, and CFRP should be used for applications requiring compactness and a higher energy density. Finally, a high-speed prototype flywheel was analyzed to evaluate its fundamental characteristics both under acceleration and in the steady state.

  16. Leveraging KVM Events to Detect Cache-Based Side Channel Attacks in a Virtualization Environment

    Directory of Open Access Journals (Sweden)

    Ady Wahyudi Paundu

    2018-01-01

    Full Text Available Cache-based side channel attack (CSCa) techniques in virtualization systems are becoming more advanced, while defense methods against them are still perceived as nonpractical. The most recent CSCa variant, called Flush + Flush, has shown that the current detection methods can be easily bypassed. Within this work, we introduce a novel monitoring approach to detect CSCa operations inside a virtualization environment. We utilize Kernel Virtual Machine (KVM) event data in the kernel and process this data using a machine learning technique to identify any CSCa operation in the guest Virtual Machine (VM). We evaluate our approach using Receiver Operating Characteristic (ROC) diagrams of multiple attack and benign operation scenarios. Our method successfully separates the CSCa datasets from the non-CSCa datasets, in both trained and nontrained data scenarios. The successful classification also includes the Flush + Flush attack scenario. We are also able to explain the classification results by extracting the set of most important features that separate both classes using their Fisher scores, and we show that our monitoring approach can work to detect CSCa in general. Finally, we evaluate the overhead impact of our CSCa monitoring method and show that it has a negligible computation overhead on the host and the guest VM.
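
    A schematic of the detection pipeline (the feature vectors here, per-interval KVM event counts, and the random-forest model are assumptions for illustration; the paper ranks features by Fisher score rather than by impurity-based importance):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        def train_cscd_detector(X, y):
            """X: rows of per-interval KVM event counts (e.g., exits, halts);
            y: 1 if the interval overlaps a cache side-channel attack, else 0."""
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
            clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # ROC evaluation
            ranked = np.argsort(clf.feature_importances_)[::-1]       # most telling events
            return clf, auc, ranked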

  17. A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries

    Science.gov (United States)

    Santos, Ricardo Jorge; Bernardino, Jorge

    On-line analytical processing against data warehouse databases is a common way of getting decision making information for almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most-transacted items, etc. This means that many similar OLAP instructions are periodically repeated, and simultaneously, among the several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data which was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because we only need to process the most recent data. Our tool also minimizes the execution time and resource consumption for similar queries simultaneously executed by different users, putting the most recent ones on hold until the first finishes, then returning the results to all of them. The stored query results are held until they are considered outdated, then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and use a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
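
    The delta-merge idea is easy to see in miniature. A toy sketch with sqlite3 (the table, columns and query are invented for illustration): cache an aggregate together with a high-water mark, then answer a repeat of the query by aggregating only the rows inserted since.

        import sqlite3

        cache = {}  # region -> (cached_total, last_seen_rowid)

        def cached_sales_total(conn, region):
            """Re-answer a repeated SUM by scanning only newly inserted rows,
            then merging them with the previously cached result."""
            total, last_rowid = cache.get(region, (0.0, 0))
            new_sum, new_mark = conn.execute(
                "SELECT COALESCE(SUM(amount), 0), COALESCE(MAX(rowid), ?) "
                "FROM sales WHERE region = ? AND rowid > ?",
                (last_rowid, region, last_rowid)).fetchone()
            total += new_sum
            cache[region] = (total, new_mark)
            return total

    The sketch only works for append-only data and distributive aggregates such as SUM and COUNT; updates or deletes would invalidate the cached result, which is consistent with the tool erasing stored results once they are considered outdated.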

  18. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional network structure can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies. PMID:29104219

  19. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Field-programmable gate arrays (FPGAs) and other reconfigurable computing (RC) devices have been widely shown to have numerous advantages, including order-of-magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures). In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
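
    The core idea, sketched in software terms (the paper streams to an FPGA; here a plain contiguous array stands in for the traversal cache, and the linked list is an invented example):

        # Serialize a linked-list traversal into a flat buffer (a "traversal
        # cache") so repeated passes become sequential reads, not pointer chases.
        import array

        class Node:
            def __init__(self, value, nxt=None):
                self.value, self.nxt = value, nxt

        # Build an irregular structure: 1 -> 2 -> 3 -> 4
        head = Node(1.0, Node(2.0, Node(3.0, Node(4.0))))

        def build_traversal_cache(node):
            buf = array.array("d")      # contiguous, streamable to an accelerator
            while node is not None:
                buf.append(node.value)
                node = node.nxt
            return buf

        cache = build_traversal_cache(head)   # pointer-chasing cost paid once
        for _ in range(3):                    # repeated traversals are sequential
            total = sum(cache)
        print(total)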

  20. Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks.

    Science.gov (United States)

    Yuan, Peiyan; Wu, Honghai; Zhao, Xiaoyan; Dong, Zhengnan

    2017-07-18

    The node buffer size has a large influence on the performance of Mobile Opportunistic Networks (MONs). This is mainly because each node should temporarily cache packets to deal with the intermittently connected links. In this paper, we study fundamental bounds on node buffer size below which the network system cannot achieve the expected performance, such as the transmission delay and packet delivery ratio. Given the condition that each link has the same probability p of being active in the next time slot when the link is inactive, and q of being inactive when the link is active, there exists a critical value p_c from a percolation perspective. If p > p_c, the network is in the supercritical case, where we found that there is an achievable upper bound on the buffer size of nodes, independent of the inactive probability q. When p < p_c, the network is in the subcritical case, and there exists a closed-form solution for buffer occupation, which is independent of the size of the network.
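
    The link model in symbols (the stationary activity probability follows from the standard two-state Markov chain; the value of p_c itself comes from the paper's percolation analysis and is not derived here):

        \Pr(\text{active}_{t+1} \mid \text{inactive}_{t}) = p, \qquad
        \Pr(\text{inactive}_{t+1} \mid \text{active}_{t}) = q, \qquad
        \pi_{\text{active}} = \frac{p}{p+q}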

  1. Proxy-Produced Ethnographic Work: What Are the Problems, Issues, and Dilemmas Arising from Proxy-Ethnography?

    Science.gov (United States)

    Martinussen, Marie; Højbjerg, Karin; Tamborg, Andreas Lindenskov

    2018-01-01

    This article addresses the implications of researcher-student cooperation in the production of empirical material. When a student replaces the experienced researcher and works under the researcher's supervision, we call such work proxy-produced ethnographic work. Although there are clear advantages, the specific relations and positions arising…

  2. Evaluation of the sea ice proxy IP25 against observational and diatom proxy data in the SW Labrador Sea

    DEFF Research Database (Denmark)

    Weckström, K.; Andersen, M.L.; Kuijpers, A.

    2013-01-01

    The recent rapid decline in Arctic sea ice cover has increased the need to improve the accuracy of the sea ice component in climate models and to provide detailed long-term sea ice concentration records, which are only available via proxy data. Recently, the highly branched isoprenoid IP25...

  3. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  4. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    The goal of this work was to create a prototype analyzer for injection-flaw attacks on a web server. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. Analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers, and a description and justification of the selected implementation. In the end are charact...
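
    A minimal sketch of the regular-expression signature approach described above; the two signatures and the sample log lines are invented examples, not the analyzer's actual rule set:

        # Flag injection attempts in access-log lines using regex signatures.
        import re

        SIGNATURES = {
            "sql_injection": re.compile(r"(?i)(union\s+select|or\s+1=1|--\s*$)"),
            "path_traversal": re.compile(r"\.\./"),
        }

        def scan(line):
            """Return the names of all signatures that match the log line."""
            return [name for name, rx in SIGNATURES.items() if rx.search(line)]

        for line in [
            "GET /index.php?id=1 HTTP/1.1",
            "GET /item?id=1 UNION SELECT password FROM users HTTP/1.1",
            "GET /../../etc/passwd HTTP/1.1",
        ]:
            print(scan(line) or "clean", "|", line)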

  5. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

    Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t...

  6. Evaluation of proxy-based millennial reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)

    2008-08-15

    A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper. (orig.)
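
    A minimal sketch of the kind of Kalman filter recursion that underlies a state-space reconstruction, run on a synthetic AR(1) "temperature" observed through one noisy proxy; the dynamics and noise levels are placeholders, not the paper's fitted model:

        # 1-D Kalman filter: recover a latent AR(1) signal from a noisy proxy.
        import numpy as np

        rng = np.random.default_rng(1)
        n, phi, q_var, r_var = 100, 0.9, 0.1, 0.5
        truth = np.zeros(n)
        for t in range(1, n):
            truth[t] = phi * truth[t - 1] + rng.normal(0, q_var ** 0.5)
        proxy = truth + rng.normal(0, r_var ** 0.5, n)   # noisy proxy observations

        x, P, est = 0.0, 1.0, []
        for z in proxy:
            x, P = phi * x, phi ** 2 * P + q_var         # predict step
            K = P / (P + r_var)                          # Kalman gain
            x, P = x + K * (z - x), (1 - K) * P          # update with proxy value
            est.append(x)

        print("proxy RMSE: ", np.sqrt(np.mean((proxy - truth) ** 2)).round(3))
        print("filter RMSE:", np.sqrt(np.mean((np.array(est) - truth) ** 2)).round(3))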

  7. A case report of Factitious Disorder (Munchausen Syndrome by proxy)

    Directory of Open Access Journals (Sweden)

    Mehdi Shirzadifar

    2014-09-01

    Background: In Factitious Disorder by proxy, one person (the perpetrator) induces a disease in another person, thereby seeking to meet emotional needs through the treatment process. Diagnosis of this disorder is very difficult and there is not much consensus over it among experts. Lack of timely diagnosis of this disorder may lead to serious harm to patients. Case presentation: We introduce a 19-year-old boy with mental retardation and a history of multiple admissions to psychiatric, internal, urology and surgery wards. He has a 12-year-old sister and a 4-year-old brother, both with a history of multiple admissions to pediatrics and internal wards. The father of the family was 48 years old, with a chronic mental disorder, drug dependency and a history of multiple admissions to medical, psychiatry and neurology wards. The mother of this family was diagnosed with Munchausen syndrome by proxy.

  8. Munchausen syndrome and Munchausen syndrome by proxy in dermatology.

    Science.gov (United States)

    Boyd, Alan S; Ritchie, Coleman; Likhari, Sunaina

    2014-08-01

    Patients with Munchausen syndrome purposefully injure themselves, often with the injection of foreign materials, to gain hospital admission and the attention associated with having a difficult-to-identify condition. Munchausen syndrome by proxy occurs when a child's caregiver, typically the mother, injures the child for the same reasons. Cases of Munchausen syndrome and Munchausen syndrome by proxy with primary cutaneous involvement appear to be rarely described in the literature suggesting either that diagnosis is not made readily or that it is, in fact, an uncommon disorder. At the center of both conditions is significant psychological pathology and treatment is difficult as many patients with Munchausen syndrome when confronted with these diagnostic possibilities simply leave the hospital. Little is known about the long-term outcome or prognosis of these patients. Copyright © 2014 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Mobile Multicast in Hierarchical Proxy Mobile IPV6

    Science.gov (United States)

    Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran

    2013-12-01

    Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and their environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. These include the Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in the PMIPv6 domain.

  10. Accuracy of Caregiver Proxy Reports of Home Care Service Use.

    Science.gov (United States)

    Chappell, Neena L; Kadlec, Helena

    2016-12-01

    Although much of the research on service use by older adults with dementia relies on proxy reports by informal caregivers, little research assesses the accuracy of these reports, and that which does exist does not focus on home care services. This brief report compares family caregivers' proxy reports about those with dementia with provincial Ministry of Health records collected for payment and monitoring. The four home care services examined include home nursing care, adult day care, home support, and respite care. Data come from a province-wide study of caregivers in British Columbia, Canada. Caregiver reports are largely consistent with Ministry records, ranging from 81.0% agreement for home support to 96.6% for respite care. Spouses living with the care recipient (the vast majority of the sample) are the most accurate. Others, whether living with the care recipient or not, have only a 50-50 chance of being correct.

  11. A comparison of proxy performance in coral biodiversity monitoring

    Science.gov (United States)

    Richards, Zoe T.

    2013-03-01

    The productivity and health of coral reef habitat is diminishing worldwide; however, the effect that habitat declines have on coral reef biodiversity is not known. Logistical and financial constraints mean that surveys of hard coral communities rarely collect data at the species level; hence it is important to know if there are proxy metrics that can reliably predict biodiversity. Here, the performances of six proxy metrics are compared using regression analyses on survey data from a location in the northern Great Barrier Reef. Results suggest generic richness is a strong explanatory variable for spatial patterns in species richness (explaining 82 % of the variation when measured on a belt transect). The most commonly used metric of reef health, percentage live coral cover, is not positively or linearly related to hard coral species richness. This result raises doubt as to whether management actions based on such reefscape information will be effective for the conservation of coral biodiversity.

  12. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Science.gov (United States)

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications, where multiple-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data that may not be of interest to forthcoming queries are cached in the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing these two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
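
    A minimal sketch of the popularity-based placement decision, with invented attribute names, query history, and a sink capacity of one item; the real mechanism operates on sensory data in a two-tier network rather than on in-memory sets:

        # Rank sensed attributes by recent query popularity; place the hottest
        # at the sink and the rest at grid-cell head nodes.
        from collections import Counter

        recent_queries = [("temp",), ("temp", "humidity"), ("temp",), ("light",)]
        popularity = Counter(attr for q in recent_queries for attr in q)

        SINK_CAPACITY = 1
        ranked = [a for a, _ in popularity.most_common()]
        sink_cache = set(ranked[:SINK_CAPACITY])    # most popular data at the sink
        head_cache = set(ranked[SINK_CAPACITY:])    # the rest at cell heads

        def answer(query):
            """Report which tier serves each attribute of a multi-attribute query."""
            tiers = ["sink" if a in sink_cache else
                     "head" if a in head_cache else "sensors" for a in query]
            return dict(zip(query, tiers))

        print(answer(("temp", "humidity", "pressure")))
        # {'temp': 'sink', 'humidity': 'head', 'pressure': 'sensors'}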

  13. Fingerprinting Reverse Proxies Using Timing Analysis of TCP Flows

    Science.gov (United States)

    2013-09-01

    Unsuspecting users could be vulnerable to man-in-the-middle (MITM) attacks or web-based attacks [2] against web browsers and browser plug-ins. There is no

  14. Testing New Proxies for Photosymbiosis in the Fossil Record

    Science.gov (United States)

    Tornabene, C.; Martindale, R. C.; Schaller, M. F.

    2015-12-01

    Photosymbiosis is a mutualistic relationship that many corals have developed with dinoflagellates called zooxanthellae. The dinoflagellates, of the genus Symbiodinium, photosynthesize and provide corals with most of their energy, while in turn coral hosts live in waters where zooxanthellae have optimal exposure to sunlight. Thanks to this relationship, symbiotic corals calcify faster than non-symbiotic corals. Photosymbiosis is therefore considered the evolutionary innovation that allowed corals to become major reef-builders through geological time.This relationship is extremely difficult to study. Zooxanthellae, which are housed in the coral tissue, are not preserved in fossil coral skeletons, thus determining whether corals had symbionts requires a robust proxy. In order to address this critical question, the goal of this research is to test new proxies for ancient photosymbiosis. Currently the project is focused on assessing the nitrogen (δ15N) isotopes of corals' organic matrices, sensu Muscatine et al. (2005), as well as carbon and oxygen (δ13C, δ18O) isotopes of fossil coral skeletons. Samples from Modern, Pleistocene, Oligocene and Triassic coral skeletons were analyzed to test the validity of these proxies. Coral samples comprise both (interpreted) symbiotic and non-symbiotic fossil corals from the Oligocene and Triassic as well as symbiotic fossil corals from the Modern and Pleistocene to corroborate our findings with the results of Muscatine et al. (2005). Samples were tested for diagenesis through petrographic and scanning electron microscope (SEM) analyses to avoid contamination. Additionally, a novel technique that has not yet been applied to the fossil record was tested. The technique aims to recognize dinosterol, a dinoflagellate biomarker, in both modern and fossil coral samples. The premise of this proxy is that symbiotic corals should contain the dinoflagellate biomarker, whereas those lacking symbionts should lack dinosterol. Results from this

  15. Evaluation of Organic Proxies for Quantifying Past Primary Productivity

    Science.gov (United States)

    Raja, M.; Rosell-Melé, A.; Galbraith, E.

    2017-12-01

    Ocean primary productivity is a key element of the marine carbon cycle. However, its quantitative reconstruction in the past relies on the use of biogeochemical models, as the available proxy approaches are qualitative at best. Here, we present an approach that evaluates the use of phytoplanktonic biomarkers (i.e. chlorins and alkenones) as quantitative proxies to reconstruct past changes in marine productivity. We compare biomarker contents in a global suite of core-top sediments to sea-surface chlorophyll-a abundance estimated by satellites over the last 20 years, and the results are compared to total organic carbon (TOC). We also assess satellite data and detect satellite limitations and biases due to the complexity of optical properties and the actual defined algorithms. Our findings show that sedimentary chlorins can be used to track total sea-surface chlorophyll-a abundance as an indicator of past primary productivity. However, degradation processes restrict the application of this proxy to concentrations below a threshold value (1 µg/g). Below this threshold, chlorins are a useful tool to identify reducing conditions when used as part of a multiproxy approach to assess redox sedimentary conditions (e.g. using Re, U). This is based on the link between anoxic/dysoxic conditions and the flux of organic matter from the sea surface to the sediments. We also show that TOC is less accurate than chlorins for estimating sea-surface chlorophyll-a due to the contribution of terrigenous organic matter, and the different degradation pathways of all the organic compounds that TOC includes. Alkenone concentrations also relate to primary productivity, but they are constrained by different processes in different regions. In conclusion, as long as specific constraints are taken into account, our study supports the use of chlorins and alkenones as quantitative proxies of past primary productivity, with more accuracy than by using TOC.

  16. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and, notably, science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments to successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, the scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)

  17. Evaluation of low-temperature geothermal potential in Cache Valley, Utah. Report of investigation No. 174

    Energy Technology Data Exchange (ETDEWEB)

    de Vries, J.L.

    1982-11-01

    Field work consisted of locating 90 wells and springs throughout the study area, collecting water samples for later laboratory analyses, and field measurement of pH, temperature, bicarbonate alkalinity, and electrical conductivity. Na⁺, K⁺, Ca²⁺, Mg²⁺, SiO₂, Fe, SO₄²⁻, Cl⁻, F⁻, and total dissolved solids were determined in the laboratory. Temperature profiles were measured in 12 additional, unused wells. Thermal gradients calculated from the profiles were approximately the same as the average for the Basin and Range province, about 35 °C/km. One well produced a gradient of 297 °C/km, most probably as a result of a near-surface occurrence of warm water. Possible warm-water reservoir temperatures were calculated using both the silica and the Na-K-Ca geothermometers, with the results averaging about 50 to 100 °C. If mixing calculations were applied, taking into account the temperatures and silica contents of both warm springs or wells and the cold groundwater, reservoir temperatures up to about 200 °C were indicated. Considering measured surface water temperatures, calculated reservoir temperatures, thermal gradients, and the local geology, most of the Cache Valley, Utah area is unsuited for geothermal development. However, the areas of North Logan, Benson, and Trenton were found to have anomalously warm groundwater in comparison to the background temperature of 13.0 °C for the study area. The warm water has potential for isolated energy development but is not warm enough for major commercial development.

  18. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess the accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaicked into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees, and shrubs (excluding Tamarix)) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture was used as an additional feature along with color for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km² (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global positioning system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.

  19. Suspense, culpa y cintas de vídeo. Caché/Escondido de Michael Haneke

    Directory of Open Access Journals (Sweden)

    Miguel Martínez-Cabeza

    2011-12-01

    Caché/Escondido (2005) represents, within Michael Haneke's filmography, the most outstanding example of the synthesis of the Austrian filmmaker's formal and ideological approaches. This article analyzes the film as a cinematographic manifesto and as an exploitation of generic conventions to construct a model of the reflexive spectator. Examining how the director deploys and then abandons the techniques of suspense provides keys to explaining the film's almost unanimous critical success and the far less homogeneous response of audiences. The trigger of the plot, the videotapes the Laurents receive, is a direct allusion to David Lynch's Lost Highway (1997); nevertheless, the mystery surrounding the author of the video surveillance loses interest relative to the feeling of guilt it unleashes in the protagonist. The childhood episode of jealousy and revenge against an Algerian boy, and the adult Georges's attitude, represent an allegory of France's relationship with its colonial past that Haneke's narrative likewise leaves open. It is precisely the formal openness with which the film (de)structures current issues, such as the boundary between individual and collective responsibility, that shapes a spectator as distanced from the diegesis as he is aware of his own role as observer.

  20. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: the delay-dependent (E_A), or asynchronous, error and the delay-independent (E_S), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λ_c. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λ_c in order to improve the range of applicability of the proxy-equation approach.
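
    In symbols, the decomposition and the mitigation condition described above read:

        E_{\text{total}} = E_A + E_S, \qquad |E_A| < |E_S| \ \text{for wave numbers below } \lambda_c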

  1. Marine proxy evidence linking decadal North Pacific and Atlantic climate

    Energy Technology Data Exchange (ETDEWEB)

    Hetzinger, S. [University of Toronto Mississauga, CPS-Department, Mississauga, ON (Canada); Leibniz Institute of Marine Sciences, IFM-GEOMAR, Kiel (Germany); Halfar, J. [University of Toronto Mississauga, CPS-Department, Mississauga, ON (Canada); Mecking, J.V.; Keenlyside, N.S. [Leibniz Institute of Marine Sciences, IFM-GEOMAR, Kiel (Germany); University of Bergen, Geophysical Institute and Bjerknes Centre for Climate Research, Bergen (Norway); Kronz, A. [University of Goettingen, Geowissenschaftliches Zentrum, Goettingen (Germany); Steneck, R.S. [University of Maine, Darling Marine Center, Walpole, ME (United States); Adey, W.H. [Smithsonian Institution, Department of Botany, Washington, DC (United States); Lebednik, P.A. [ARCADIS U.S. Inc., Walnut Creek, CA (United States)

    2012-09-15

    Decadal- to multidecadal variability in the extra-tropical North Pacific is evident in 20th century instrumental records and has significant impacts on Northern Hemisphere climate and marine ecosystems. Several studies have discussed a potential linkage between North Pacific and Atlantic climate on various time scales. On decadal time scales no relationship could be confirmed, potentially due to sparse instrumental observations before 1950. Proxy data are limited and no multi-centennial high-resolution marine geochemical proxy records are available from the subarctic North Pacific. Here we present an annually-resolved record (1818-1967) of Mg/Ca variations from a North Pacific/Bering Sea coralline alga that extends our knowledge in this region beyond available data. It shows for the first time a statistically significant link between decadal fluctuations in sea-level pressure in the North Pacific and North Atlantic. The record is a lagged proxy for decadal-scale variations of the Aleutian Low. It is significantly related to regional sea surface temperature and the North Atlantic Oscillation (NAO) index in late boreal winter on these time scales. Our data show that on decadal time scales a weaker Aleutian Low precedes a negative NAO by several years. This atmospheric link can explain the coherence of decadal North Pacific and Atlantic Multidecadal Variability, as suggested by earlier studies using climate models and limited instrumental data. (orig.)

  2. Detecting instabilities in tree-ring proxy calibration

    Directory of Open Access Journals (Sweden)

    H. Visser

    2010-06-01

    Evidence has been found for reduced sensitivity of tree growth to temperature in a number of forests at high northern latitudes and alpine locations. Furthermore, at some of these sites, emergent subpopulations of trees show negative growth trends with rising temperature. These findings are typically referred to as the "Divergence Problem" (DP). Given the high relevance of paleoclimatic reconstructions for policy-related studies, it is important for dendrochronologists to address this issue of potential model uncertainties associated with the DP. Here we address this issue by proposing a calibration technique, termed "stochastic response function" (SRF), which allows the presence or absence of any instabilities in the growth response of trees (or any other climate proxy) to their calibration target to be visualized and detected. Since this framework estimates confidence limits and subsequently provides statistical significance tests, the approach is also very well suited for proxy screening prior to the generation of a climate-reconstruction network.

    Two examples of tree growth/climate relationships are provided, one from the North American Arctic treeline and the other from the upper treeline in the European Alps. Instabilities were found to be present where stabilities were reported in the literature, and vice versa, stabilities were found where instabilities were reported. We advise applying SRFs in future proxy-screening schemes, alongside the use of correlations and RE/CE statistics. It will improve the strength of reconstruction hindcasts.

  3. Heinrich event 4 characterized by terrestrial proxies in southwestern Europe

    Directory of Open Access Journals (Sweden)

    J. M. López-García

    2013-05-01

    Heinrich event 4 (H4) is well documented in the North Atlantic Ocean as a cooling event that occurred between 39 and 40 ka. Deep-sea cores around the Iberian Peninsula coastline have been analysed to characterize the H4 event, but there are no data on the terrestrial response to this event. Here we present for the first time an analysis of terrestrial proxies for characterizing the H4 event, using the small-vertebrate assemblage (comprising small mammals, squamates and amphibians) from Terrassa Riera dels Canyars, an archaeo-palaeontological deposit located on the seaboard of the northeastern Iberian Peninsula. This assemblage shows that the H4 event is characterized in northeastern Iberia by harsher and drier terrestrial conditions than today. Our results were compared with other proxies, such as pollen, charcoal, phytolith, avifauna and large-mammal data available for this site, as well as with the general H4 event fluctuations and with other sites where H4 and the previous and subsequent Heinrich events (H5 and H3) have been detected in the Mediterranean and Atlantic regions of the Iberian Peninsula. We conclude that the terrestrial proxies follow the same patterns as the climatic and environmental conditions detected by the deep-sea cores at the Iberian margins.

  4. Short-term indicators. Intensities as a proxy for savings

    Energy Technology Data Exchange (ETDEWEB)

    Boonekamp, P.G.M.; Gerdes, J. [ECN Policy Studies, Petten (Netherlands); Faberi, S. [Institute of Studies for the Integration of Systems ISIS, Rome (Italy)

    2013-12-15

    The ODYSSEE database on energy efficiency indicators (www.odyssee-indicators.org) has been set up to enable the monitoring and evaluation of realised energy efficiency improvements and related energy savings. The database covers the 27 EU countries as well as Norway and Croatia, and data are available from 1990 on. This work contributes to the growing need for quantitative monitoring and evaluation of the impacts of energy policies and measures, both at the EU and national level, e.g. due to the Energy Services Directive and the proposed Energy Efficiency Directive. Because the underlying data become available only after some time, the savings figures are not always available in a timely manner. This is especially true for the ODEX efficiency indices per sector, which rely on a number of indicators. Therefore, there is a need for so-called short-term indicators that become available shortly after the end of the year for which data are needed. The short-term indicators do not replace the savings indicators but function as a proxy for the savings in the most recent year. This proxy value is available sooner, but is less accurate than the savings indicators themselves. The short-term indicators have to be checked regularly against the ODEX indicators in order to see whether they can still function as a proxy.

  5. MUNCHAUSEN SYNDROME BY PROXY IN PEDIATRIC DENTISTRY: MYTH OR REALITY?

    Directory of Open Access Journals (Sweden)

    Veronica PINTILICIUC-ŞERBAN

    2017-06-01

    Background and aims: Munchausen syndrome by proxy is a condition traditionally comprising physical and mental abuse and medical neglect as a form of psychogenic maltreatment of the child, secondary to fabrication of a pediatric illness by the parent or guardian. The aim of our paper is to assess whether such a condition occurs in current pediatric dental practice and to evidence certain situations in which the pediatric dentist should suspect this form of child abuse. Problem statement: Munchausen syndrome by proxy in pediatric dentistry may lead to serious chronic disabilities of the abused or neglected child, being one of the causes of treatment failure. Discussion: Prompt detection of such a condition should be regarded as one of the duties of the practitioner, who should be trained to report suspected cases to the governmental child protective agencies. This should be regarded as a form of child abuse and neglect, and the responsible caregiver could be held liable when such wrongful actions cause harm or endanger the child's welfare. Conclusion: Munchausen syndrome by proxy should be regarded as a reality in current pediatric dental practice and dental teams should be trained to properly recognize, assess and manage such complex situations.

  6. Efficient Conditional Proxy Re-encryption with Chosen-Ciphertext Security

    NARCIS (Netherlands)

    Weng, Jiang; Yang, Yanjiang; Tang, Qiang; Deng, Robert H.; Bao, Feng

    Recently, a variant of proxy re-encryption, named conditional proxy re-encryption (C-PRE), has been introduced. Compared with traditional proxy re-encryption, C-PRE enables the delegator to implement fine-grained delegation of decryption rights, and thus is more useful in many applications. In this

  7. Evaluation of patient and proxy responses on the activity measure for postacute care.

    Science.gov (United States)

    Jette, Alan M; Ni, Pengsheng; Rasch, Elizabeth K; Appelman, Jed; Sandel, M Elizabeth; Terdiman, Joseph; Chan, Leighton

    2012-03-01

    Our objective was to examine the agreement between adult patients with stroke and family member or clinician proxies in activity measure for postacute care (AM-PAC) summary scores for daily activity, basic mobility, and applied cognitive function. This study involved 67 patients with stroke who were admitted to a hospital within the Kaiser Permanente of Northern California system and were participants in a parent study on stroke outcomes. Each participant and proxy respondent completed the AM-PAC by personal or telephone interview at the point of hospital discharge or during ≥1 transitions to different postacute care settings. The results suggest that for patients with stroke, AM-PAC data are robust for family or clinician proxy assessment of basic mobility function and clinician proxy assessment of daily activity function, but less robust for family proxy assessment of daily activity function and for all proxy groups' assessments of applied cognitive function. The pattern of disagreement between patient and proxy was, on average, relatively small and random. There was little evidence of systematic bias between proxy and patient reports of their functional status. The degree of concordance between patient and proxy was similar for those with moderate to severe strokes compared with mild strokes. Patient and proxy ratings on the AM-PAC achieved adequate agreement for use in stroke research when using proxy respondents could reduce sample selection bias. The AM-PAC data can be implemented across institutional as well as community care settings while achieving precision and reducing respondent burden.

  8. Analysis of Quality of Proxy Questions in Health Surveys by Behavior Coding

    NARCIS (Netherlands)

    Benitez, I.; Padilla, J.L.; Ongena, Yfke

    2012-01-01

    The aim of this study is to show how to analyze the quality of questions for proxy informants by means of behavior coding. Proxy questions can undermine survey data quality because proxies respond to questions on behalf of other people. Behavior coding can improve questions by

  9. The design and implementation about the project of optimizing proxy servers

    International Nuclear Information System (INIS)

    Wu Ling; Liu Baoxu

    2006-01-01

    A proxy server is an important facility in an organization's network, playing an important role in security, access control, and accelerating Internet access. This article introduces the role of proxy servers and describes the measures taken to optimize them at IHEP: integration, dynamic domain name resolution, and data synchronization. (authors)

  10. Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups

    OpenAIRE

    Lal, Sunder; Verma, Vandani

    2009-01-01

    Braid groups provide an alternative to number-theoretic public-key cryptography and can be implemented quite efficiently. The paper proposes five signature schemes based on braid groups: Proxy Signature, Designated Verifier, Bi-Designated Verifier, Designated Verifier Proxy Signature and Bi-Designated Verifier Proxy Signature. We also discuss the security aspects of each of the proposed schemes.

  11. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The Web could be much more useful if computers understood some of the information on Web pages. That is the goal of the "Semantic Web", a project in which, amongst others, Tim Berners-Lee, the inventor of the original Web, takes part.

  12. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach that teaches readers what responsive web design is and how it is used in designing a responsive web page. If you are a web designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  13. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    Science.gov (United States)

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

    Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize it, account for its impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ±12.84; 70% female) recruited from a HF clinic completed the CACHS in 2014, and results were evaluated using classical test theory and item response theory. Items would be deleted for very low (<.05) or very high (>.95) endorsement, low (<.3) or very high (>.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels (<.5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance eight items; monitoring seven items; and management five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05) and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.

  14. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to the Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for the Geospatial Semantic Web; VGI (Volunteered Geographic Information) and the Geospatial Semantic Web; challenges of the Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems, such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  15. Avaliação do compartilhamento das memórias cache no desempenho de arquiteturas multi-core

    OpenAIRE

    Marco Antonio Zanata Alves

    2009-01-01

    In the current context of multi-core innovation, in which new integration technologies are providing a growing number of transistors per chip, the study of techniques for increasing data throughput is of utmost importance for current and future multi-core and many-core processors. With the continuous demand for computational performance, cache memories have been widely adopted in the various types of computer architecture designs. The processors currently available on the market...

  16. A study on the effectiveness of lockup-free caches for a Reduced Instruction Set Computer (RISC) processor

    OpenAIRE

    Tharpe, Leonard.

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects of a lockup-free cache interface on RISC performance. RISC architectures achieve high performance by having a small, but sufficient, instruction set, with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 CPI. The goal of RISC is to attain a CPI of 1.0. The major hind...
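
    For reference, CPI enters performance through the standard execution-time relation (textbook background, not a result of the thesis):

        T_{\text{CPU}} = N_{\text{instr}} \times \text{CPI} \times t_{\text{clock}}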

  17. Virtual Web Services

    OpenAIRE

    Rykowski, Jarogniew

    2007-01-01

    In this paper we propose an application of software agents to provide Virtual Web Services. A Virtual Web Service is a linked collection of several real and/or virtual Web Services, and public and private agents, accessed by the user in the same way as a single real Web Service. A Virtual Web Service allows unrestricted comparison, information merging, pipelining, etc., of data coming from different sources and in different forms. Detailed architecture and functionality of a single Virtual We...

  18. The Semantic Web Revisited

    OpenAIRE

    Shadbolt, Nigel; Berners-Lee, Tim; Hall, Wendy

    2006-01-01

    The original Scientific American article on the Semantic Web appeared in 2001. It described the evolution of a Web that consisted largely of documents for humans to read to one that included data and information for computers to manipulate. The Semantic Web is a Web of actionable information--information derived from data through a semantic theory for interpreting the symbols. This simple idea, however, remains largely unrealized. Shopbots and auction bots abound on the Web, but these are esse...

  19. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss Cost Estimation Techniques based on Size Metrics. Though Web project development is similar to traditional software development applications, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...

  20. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the Web's dimensions, users easily get lost in its rich hyperstructure. The application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to Web mining categories and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationships between Web pages linked by information or by direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further web applications such as web search.
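
    A minimal hyperlink-structure-mining sketch, using PageRank from the networkx library on an invented four-page graph; the pages and links are purely illustrative:

        # Mine the link structure of a tiny web graph with PageRank.
        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([
            ("home", "about"), ("home", "products"),
            ("about", "home"), ("products", "home"),
            ("blog", "products"),   # blog links out, but nothing links to it
        ])

        ranks = nx.pagerank(g, alpha=0.85)
        for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
            print(f"{page:8s} {score:.3f}")   # "home" dominates, "blog" ranks last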

  1. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  2. Applying semantic web services to enterprise web

    OpenAIRE

    Hu, Y; Yang, Q P; Sun, X; Wei, P

    2008-01-01

    Enterprise Web provides a convenient, extendable, integrated platform for information sharing and knowledge management. However, it still has many drawbacks due to complexity and increasing information glut, as well as the heterogeneity of the information processed. Research in the field of Semantic Web Services has shown the possibility of adding higher level of semantic functionality onto the top of current Enterprise Web, enhancing usability and usefulness of resource, enabling decision su...

  3. Demonstration: a RESTful SOS proxy for linked sensor data

    NARCIS (Netherlands)

    Bröring, Arne; Janowicz, Krzysztof; Stasch, Christoph; Schade, Sven; Everding, Thomas; Llaves, Alejandro; Taylor, Kerry; Ayyagari, Arun; De Roure, David

    2011-01-01

    Next generations of spatial information infrastructures call for more dynamic service composition, more sources of information, as well as stronger capabilities for their integration. Sensor networks have been identified as a major data provider for such infrastructures, while Semantic Web

  4. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  5. Assimilation of a thermal remote sensing-based soil moisture proxy into a root-zone water balance model

    Science.gov (United States)

    Crow, W. T.; Kustas, W. P.

    2006-05-01

    Two types of Soil Vegetation Atmosphere Transfer (SVAT) modeling approaches are commonly applied to monitoring root-zone soil water availability. Water and Energy Balance (WEB) SVAT modeling is based on forcing a prognostic water balance model with precipitation observations. In contrast, thermal Remote Sensing (RS) observations of canopy radiometric temperatures can be integrated into purely diagnostic SVAT models to predict the onset of vegetation water stress due to low root-zone soil water availability. Unlike WEB-SVAT models, RS-SVAT models do not require observed precipitation. Using four growing seasons (2001 to 2004) of profile soil moisture, micro-meteorology, and surface radiometric temperature observations at the USDA's OPE3 site, root-zone soil moisture predictions made by both WEB- and RS-SVAT modeling approaches are intercompared with each other and with available root-zone soil moisture observations. Results indicate that root-zone soil moisture estimates derived from a WEB-SVAT model have slightly more skill in detecting soil moisture anomalies at the site than comparable predictions from a competing RS-SVAT modeling approach. However, the relative advantage of the WEB-SVAT model disappears when it is forced with lower-quality rainfall information typical of continental- and global-scale rainfall data sets. Most critically, root-zone soil moisture errors associated with the two modeling approaches are sufficiently independent that merging information from both proxies - using either simple linear averaging or an Ensemble Kalman filter - creates a merged soil moisture estimate that is more accurate than either of its parent components.
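
    A minimal sketch of the simple linear-averaging merger mentioned above, implemented as an inverse-error-variance weighting of the two estimates; the series and variances are invented placeholders, not the study's data:

        # Merge WEB- and RS-model soil moisture by inverse-variance weighting.
        import numpy as np

        web = np.array([0.21, 0.24, 0.30, 0.27])   # WEB-SVAT root-zone estimates
        rs  = np.array([0.25, 0.22, 0.33, 0.24])   # RS-SVAT estimates
        var_web, var_rs = 0.004, 0.006             # assumed independent error variances

        w = (1 / var_web) / (1 / var_web + 1 / var_rs)
        merged = w * web + (1 - w) * rs
        merged_var = 1 / (1 / var_web + 1 / var_rs)  # lower than either input variance
        print(merged.round(3), merged_var)

    Because the two error sources are (assumed) independent, the merged variance is strictly smaller than either input's, which is exactly why the combination outperforms its parents.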

  6. Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series

    Science.gov (United States)

    Li, M.; Kump, L. R.; Hinnov, L.

    2017-12-01

    This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations; this hypothesis is itself tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, the H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev. 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., 2017. Earth Planet. Sci. Lett. doi:10.1016/j.epsl.2017.07.015
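
    A rough sketch of the rate-scanning idea (not the authors' CHO implementation): for each test sedimentation rate, convert the depth series to time, compute its power spectrum, and correlate it with a template peaked at orbital frequencies. All numbers are placeholders:

        # Scan test sedimentation rates, correlating the proxy spectrum with a
        # template built from orbital frequencies.
        import numpy as np

        dz = 0.01                                    # sample spacing in metres
        depth_series = np.sin(2 * np.pi * np.arange(500) * dz / 0.4)  # toy proxy
        orbital_periods = np.array([405e3, 125e3, 95e3, 41e3, 23e3])  # years

        def spectrum_correlation(rate_cm_per_kyr):
            dt = dz * 100 / rate_cm_per_kyr * 1e3    # metres -> years per sample
            freqs = np.fft.rfftfreq(len(depth_series), d=dt)
            power = np.abs(np.fft.rfft(depth_series - depth_series.mean())) ** 2
            template = sum(np.exp(-((freqs - 1/p) / (0.1/p)) ** 2)
                           for p in orbital_periods)  # peaks at orbital frequencies
            return np.corrcoef(power, template)[0, 1]

        rates = np.linspace(0.5, 5.0, 10)            # cm/kyr test grid
        best = max(rates, key=spectrum_correlation)
        print("best-fitting sedimentation rate:", round(best, 2), "cm/kyr")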

  7. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  8. Efficiently GPU-accelerating long kernel convolutions in 3-D DIRECT TOF PET reconstruction via memory cache optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sungsoo; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing]; Matej, Samuel [Pennsylvania Univ., Philadelphia, PA (United States). Dept. of Radiology]

    2011-07-01

    DIRECT represents a novel approach to 3-D Time-of-Flight (TOF) PET reconstruction. Its novelty stems from the fact that it performs all iterative predictor-corrector operations directly in image space. The projection operations now amount to convolutions in image space, using long TOF (resolution) kernels. While for spatially invariant kernels the computational complexity can be overcome algorithmically by replacing spatial convolution with multiplication in Fourier space, spatially variant kernels cannot use this shortcut. Therefore, in this paper we describe a GPU-accelerated approach to this task. The intricate parallel architecture of GPUs poses its own challenges, however, and careful memory and thread management is the key to obtaining optimal results. As convolution is mainly memory-bound, we focus on memory management, proposing two types of memory caching schemes that ensure the best cache memory re-use by the parallel threads. In contrast to our previous two-stage algorithm, the schemes presented here are both single-stage, which is more accurate. (orig.)
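
    To illustrate the complexity argument rather than the paper's GPU kernels, a 1-D numpy sketch (all names illustrative) contrasts the Fourier shortcut available to spatially invariant kernels with the direct summation forced by spatially variant TOF-like kernels:

        import numpy as np

        def convolve_invariant(signal, kernel):
            """Spatially invariant kernel: circular convolution collapses to a
            single multiplication in Fourier space, O(N log N)."""
            n = len(signal)
            return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n)))

        def convolve_variant(signal, kernel_at):
            """Spatially variant kernel: each output position has its own kernel,
            so there is no single Fourier multiplier; sum directly, O(N * K)."""
            n = len(signal)
            out = np.zeros(n)
            for i in range(n):
                k = kernel_at(i)              # kernel for this position
                half = len(k) // 2
                for j, w in enumerate(k):
                    out[i] += w * signal[(i + j - half) % n]
            return out

        signal = np.random.default_rng(0).normal(size=256)

        # A TOF-like Gaussian kernel whose width grows with position
        def kernel_at(i):
            x = np.arange(-15, 16)
            sigma = 2.0 + 3.0 * i / 256.0
            k = np.exp(-0.5 * (x / sigma) ** 2)
            return k / k.sum()

        blurred = convolve_variant(signal, kernel_at)

    In the direct case every thread repeatedly re-reads neighbouring signal values, which is exactly the access pattern the paper's caching schemes target.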

  9. The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications

    Science.gov (United States)

    2015-10-16

    …tion environment represented a typical web browsing session, with common applications, such as an email client, calendar, and even a music player … switches in the middle of the measurement process. Such events are easy to detect since the time that is returned is more than the OS multitasking …
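
    The fragment describes discarding timing samples inflated by OS context switches. A hedged Python illustration of that filtering idea (not the JavaScript cache attack itself; the cutoff value is an assumption):

        import time

        def filtered_timings(probe, n=1000, cutoff_ns=100_000):
            """Time `probe` n times and drop samples above cutoff_ns, on the
            assumption that an OS context switch interrupted the measurement
            and inflated the returned time."""
            kept, dropped = [], 0
            for _ in range(n):
                t0 = time.perf_counter_ns()
                probe()
                dt = time.perf_counter_ns() - t0
                if dt > cutoff_ns:
                    dropped += 1     # likely preempted mid-measurement
                else:
                    kept.append(dt)
            return kept, dropped

        data = bytearray(1 << 20)
        kept, dropped = filtered_timings(lambda: data[4096], n=10_000)
        print(f"kept {len(kept)} samples, dropped {dropped} outliers")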

  10. 350 Year Cloud Reconstruction Deduced from Northeast Caribbean Coral Proxies

    Science.gov (United States)

    Winter, A.; Sammarco, P. W.; Mikolajewicz, U.; Jury, M.; Zanchettin, D.

    2014-12-01

    Clouds are a major factor influencing the global climate and its response to external forcing through their implications for the global hydrological cycle, and hence for the planetary radiative budget. Clouds also contribute to regional climates and their variability through, e.g., the changes they induce in regional precipitation patterns. There have been very few studies of decadal and longer-term changes in cloud cover in the tropics and sub-tropics, both over land and over the ocean. In the tropics, there is great uncertainty regarding how global warming will affect cloud cover. Observational satellite records are too short to unambiguously discern any temporal trends in cloud cover. Corals generally live in well-mixed coastal regions and can often record environmental conditions of large areas of the upper ocean. This is particularly the case at low latitudes. Scleractinian corals are sessile, epibenthic fauna, and the type of environmental information recorded at the location where the coral has been living depends upon the species of coral considered and the proxy index of interest. Skeletons of scleractinian corals are considered to provide among the best records of high-resolution (sub-annual) environmental variability in the tropical and sub-tropical oceans. Zooxanthellate hermatypic corals in tropical and sub-tropical seas precipitate CaCO3 skeletons as they grow. This growth is made possible through the manufacture of CaCO3 crystals, facilitated by the zooxanthellae. During the process of crystallization, the holobiont binds carbon of different isotopes into the crystals. Stable carbon isotope concentrations vary with a variety of environmental conditions. In the Caribbean, δ¹³C in corals of the species Montastraea faveolata can be used as a proxy for changes in cloud cover. In this contribution, we will demonstrate that the stable isotope ¹³C varies concomitantly with cloud cover for the northeastern Caribbean region. Using this proxy we have been able to

  11. Polar cap index as a proxy for hemispheric Joule heating

    DEFF Research Database (Denmark)

    Chun, F.K.; Knipp, D.J.; McHarg, M.G.

    1999-01-01

    The polar cap (PC) index measures the level of geomagnetic activity in the polar cap based on magnetic perturbations from overhead ionospheric currents and distant field-aligned currents on the poleward edge of the nightside auroral oval. Because PC essentially measures the main sources of energy...... input into the polar cap, we propose to use PC as a proxy for the hemispheric Joule heat production rate (JH). In this study, JH is estimated from the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) procedure. We fit hourly PC values to hourly averages of JH. Using a data base approximately...
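
    A minimal sketch of the kind of fit described, hourly PC against hourly Joule heating, using synthetic stand-in data since the study's values are not reproduced here:

        import numpy as np

        rng = np.random.default_rng(42)
        pc = rng.gamma(2.0, 1.0, size=24 * 365)                  # stand-in hourly PC index
        jh = 25.0 * pc + 15.0 + rng.normal(0.0, 10.0, pc.size)   # stand-in hourly JH, GW

        # Least-squares fit of JH on PC, giving a proxy relation JH ~ a*PC + b
        a, b = np.polyfit(pc, jh, 1)
        print(f"JH ~ {a:.1f} * PC + {b:.1f} (GW); r = {np.corrcoef(pc, jh)[0, 1]:.2f}")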

  12. Improvement of a Quantum Proxy Blind Signature Scheme

    Science.gov (United States)

    Zhang, Jia-Lei; Zhang, Jian-Zhong; Xie, Shu-Cui

    2018-06-01

    Improvement of a quantum proxy blind signature scheme is proposed in this paper. A six-qubit entangled state serves as the quantum channel. In our scheme, a trusted party, Trent, is introduced so as to guard against David's potential dishonest behavior; the receiver, David, verifies the signature with Trent's help. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, delegation, signature and verification. Security analysis proves that our scheme has the properties of undeniability, unforgeability and anonymity, and can resist some common attacks.

  13. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    … WMS GetMap requests by setting additional TIME parameter values in the request. The values for the parameter represent an interval defined by its lower and upper bounds. As the WMS time standard only supports one time variable, only the start times of the images are considered. If no time values are submitted with the request, the full time range of all images is assumed as the default.

    Dynamic single image WMS: To compare images from different acquisition times at sites of multiple coverage, we have to load every image as a single WMS layer. Due to the vast number of single images, we need a way to set up the layers dynamically: the map server does not know the images to be served beforehand. We use the MapScript interface to access MapServer's objects dynamically and configure the file name and path of the requested image in the map configuration. The layers are created on the fly, each representing a single image. On the frontend side, the vendor-specific WMS request parameter (PRODUCTID) has to be appended to the regular set of WMS parameters. The request is then passed on to the MapScript instance.

    Web Map Tile Cache: In order to speed up the handling of WMS requests, a MapCache instance has been integrated into the pipeline. As it is not aware of the available PDS product IDs which will be queried, the PRODUCTID parameter is configured as an additional dimension of the cache. The WMS request is received by the Apache webserver configured with the MapCache module. If the tile is available in the tile cache, it is immediately committed to the client. If not, the tile request is forwarded to Apache and the MapScript module. The Python script intercepts the WMS request and extracts the product ID from the parameter chain. It loads the layer object from the map file and appends the file name and path of the requested image. After some possible further image processing inside the script (stretching, color matching), the request is submitted to the Map
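
    As a sketch of the kind of request this pipeline serves (the TIME and PRODUCTID parameters are from the text; the endpoint, layer name and product ID are hypothetical):

        from urllib.parse import urlencode

        BASE = "https://example.org/mapserv"   # hypothetical WMS endpoint

        def getmap_url(layer, bbox, start, end, product_id=None):
            """Build a WMS 1.3.0 GetMap request; TIME carries the acquisition
            interval, and PRODUCTID is the vendor-specific parameter the
            MapScript layer uses to select a single image."""
            params = {
                "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
                "LAYERS": layer, "CRS": "EPSG:4326",
                "BBOX": ",".join(map(str, bbox)),
                "WIDTH": 1024, "HEIGHT": 512, "FORMAT": "image/png",
                "TIME": f"{start}/{end}",
            }
            if product_id:
                params["PRODUCTID"] = product_id   # extra cache dimension
            return BASE + "?" + urlencode(params)

        print(getmap_url("ctx_single", (-180, -90, 180, 90),
                         "2008-01-01", "2010-12-31",
                         product_id="P01_001234_1654_XI_14S056W"))

    Configuring PRODUCTID as a MapCache dimension means each product gets its own tile set, so cached tiles of one image are never returned for another.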

  14. Penguin tissue as a proxy for relative krill abundance in East Antarctica during the Holocene.

    Science.gov (United States)

    Huang, Tao; Sun, Liguang; Long, Nanye; Wang, Yuhong; Huang, Wen

    2013-09-30

    Antarctic krill (Euphausia superba) is a key component of the Southern Ocean food web. It supports a large number of upper trophic-level predators, and is also a major fishery resource. Understanding changes in krill abundance has long been a priority for research and conservation in the Southern Ocean. In this study, we performed stable isotope analyses on ancient Adélie penguin tissues and inferred relative krill abundance during the Holocene epoch from the paleodiets of the Adélie penguin (Pygoscelis adeliae), using the inverse of the δ¹⁵N (¹⁵N/¹⁴N ratio) value as a proxy. We find that variations in krill abundance during the Holocene are in accord with episodes of regional climate change, showing greater krill abundance in cold periods. Moreover, the low δ¹⁵N values found in modern Adélie penguins indicate relatively high krill availability, which supports the hypothesis of a modern krill surplus resulting from the recent human hunting of krill-eating seals and whales.

  15. Building web information systems using web services

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Barna, P.; Vasilecas, O.; Eder, J.; Caplinskas, A.

    2006-01-01

    Hera is a model-driven methodology for designing Web information systems. In the past, a CASE tool for the Hera methodology was implemented. This software had different components that together formed one centralized application. In this paper, we present a distributed Web service-oriented architecture

  16. Simulation of Wake Vortex Radiometric Detection via Jet Exhaust Proxy

    Science.gov (United States)

    Daniels, Taumi S.

    2015-01-01

    This paper describes an analysis of the potential of an airborne hyperspectral imaging IR instrument to infer wake vortices via turbine jet exhaust as a proxy. The goal was to determine the requirements for an imaging spectrometer or radiometer to effectively detect the exhaust plume and, by inference, the location of the wake vortices. The effort examines the gas spectroscopy of the major constituents of turbine jet exhaust and their contributions to the modeled detectable radiance. Initially, a theoretical analysis of wake vortex proxy detection by thermal radiation was carried out in a series of simulations. The first stage used the SLAB plume model to simulate turbine jet exhaust plume characteristics, including exhaust gas transport dynamics and concentrations. The second stage used these plume characteristics as input to the Line By Line Radiative Transfer Model (LBLRTM) to simulate responses from either an imaging IR hyperspectral spectrometer or a radiometer. These numerical simulations generated thermal imagery that was compared with previously reported wake vortex temperature data. This research is a continuation of an effort to specify the requirements for an imaging IR spectrometer or radiometer to make wake vortex measurements. Results of the two-stage simulation will be reported, including instrument specifications for wake vortex thermal detection. These results will be compared with previously reported results for IR imaging spectrometer performance.

  17. Starspot variability as an X-ray radiation proxy

    Science.gov (United States)

    Arkhypov, Oleksiy V.; Khodachenko, Maxim L.; Lammer, Helmut; Güdel, Manuel; Lüftinger, Teresa; Johnstone, Colin P.

    2018-05-01

    Stellar X-ray emission plays an important role in the study of exoplanets as a proxy for stellar winds and as a basis for the prediction of extreme ultraviolet (EUV) flux, which is unavailable for direct measurement; both in turn are important factors for the mass-loss of planetary atmospheres. Unfortunately, detection thresholds limit the number of stars with directly measured X-ray fluxes. At the same time, the known connection between sunspots and X-ray sources allows the use of starspot variability as an accessible proxy for stellar X-ray emission. To realize this approach, we analysed the light curves of 1729 main-sequence stars with rotation periods 0.5 … X-ray to bolometric luminosity ratio Rx. As a result, regressions for the stellar X-ray luminosity Lx(P, Teff) and its related EUV analogue LEUV were obtained for main-sequence stars. It was shown that these regressions allow prediction of average (over the considered stars) values of log(Lx) and log(LEUV) with typical errors of 0.26 and 0.22 dex, respectively. This, however, does not include the activity variations in particular stars related to their individual magnetic activity cycles.

  18. Attribute-Based Proxy Re-Encryption with Keyword Search

    Science.gov (United States)

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue the search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate the keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions for ABRKS: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as an integration of technologies in the fields of attribute-based cryptography and proxy re-encryption cryptography. PMID:25549257

  19. Attribute-based proxy re-encryption with keyword search.

    Science.gov (United States)

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue the search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate the keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions for ABRKS: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as an integration of technologies in the fields of attribute-based cryptography and proxy re-encryption cryptography.
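
    As a toy illustration of the roles and data flow only, with hashing standing in for the attribute-based and proxy re-encryption primitives (so none of the scheme's security properties hold, and all names are hypothetical):

        import hashlib, secrets

        def h(*parts):
            return hashlib.sha256("|".join(parts).encode()).hexdigest()

        # Data owner: builds a searchable index of keyword tags under his key.
        owner_key = secrets.token_hex(16)
        def build_index(keywords):
            return {h(owner_key, w) for w in keywords}

        # Data owner: issues a re-encryption key only if the user's attributes
        # satisfy the access policy (toy "key-policy" check: set inclusion).
        def issue_reenc_key(policy, user_attrs):
            return secrets.token_hex(16) if policy <= user_attrs else None

        # Cloud: re-encrypts the stored index under the re-encryption key.
        def reencrypt(index, rk):
            return {h(rk, tag) for tag in index}

        # Data user: search token for one keyword (in a real ABRKS scheme the
        # user derives this from attribute keys, without the owner's key).
        def token(word, rk):
            return h(rk, h(owner_key, word))

        index = build_index({"oncology", "radiology"})
        rk = issue_reenc_key({"doctor"}, {"doctor", "cardiology"})
        cloud_index = reencrypt(index, rk)
        print(token("oncology", rk) in cloud_index)   # True: keyword match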

  20. Wordpress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2015-01-01

    This book is intended for WordPress developers and designers who want to develop quality web applications within a limited time frame and for maximum profit. Prior knowledge of basic web development and design is assumed.

  1. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  2. Practical web development

    CERN Document Server

    Wellens, Paul

    2015-01-01

    This book is perfect for beginners who want to get started and learn the web development basics, but also offers experienced developers a web development roadmap that will help them to extend their capabilities.

  3. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  4. Chemical Search Web Utility

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical Search Web Utility is an intuitive web application that allows the public to easily find the chemical that they are interested in using, and which...

  5. Analyzing differences between patient and proxy on Patient Reported Outcomes in multiple sclerosis.

    Science.gov (United States)

    Sonder, Judith M; Holman, Rebecca; Knol, Dirk L; Bosma, Libertje V A E; Polman, Chris H; Uitdehaag, Bernard M J

    2013-11-15

    Proxy respondents, partners of multiple sclerosis (MS) patients, can provide valuable information on the MS patients' disease. In an earlier publication we found relatively good agreement on patient-reported outcomes (PROs) measuring physical impact and functioning, but we found large differences on (neuro)psychological scales. We aim to identify patient- and proxy-related variables explaining differences between patients' and proxies' ratings on five PROs. We report on data from 175 MS patients and proxy respondents. Regression analyses were performed, using as the dependent variable the mean difference on each of five scales: the Physical and Psychological scales of the Multiple Sclerosis Impact Scale (MSIS-29), the Multiple Sclerosis Walking Scale (MSWS), Guy's Neurological Disability Scale (GNDS) and the Multiple Sclerosis Neuropsychological Screening Questionnaire (MSNQ). The independent variables were patient-, proxy- and disease-related variables. Caregiver strain was significantly related to differences between patient and proxy scores for all five PROs. A higher level of patient anxiety on the HADS was linked to larger differences on all PROs except the GNDS. In addition, cognitive functioning, proxy depression, walking ability, proxy gender and MS-related disability contributed to the discrepancies. We found several patient and proxy factors that may contribute to discrepancies between patient and proxy scores on MS PROs. The most important factor is caregiver burden. © 2013 Elsevier B.V. All rights reserved.

  6. Developing a proxy version of the Adult social care outcome toolkit (ASCOT).

    Science.gov (United States)

    Rand, Stacey; Caiels, James; Collins, Grace; Forder, Julien

    2017-05-19

    Social care-related quality of life is a key outcome indicator used in the evaluation of social care interventions and policy. It is not, however, always possible to collect quality of life data by self-report, even with adaptations for people with cognitive or communication impairments. A new proxy-report version of the Adult Social Care Outcomes Toolkit (ASCOT) measure of social care-related quality of life was developed to address the issue of wider inclusion of people with cognitive or communication difficulties who may otherwise be systematically excluded. The development of the proxy-report ASCOT questionnaire was informed by a literature review and earlier work that identified the key issues and challenges associated with proxy-reported outcomes. To evaluate the acceptability and content validity of the ASCOT-Proxy, qualitative cognitive interviews were conducted with unpaid carers or care workers of people with cognitive or communication impairments. The proxy respondents were invited to 'think aloud' while completing the questionnaire. Follow-up probes were asked to elicit further detail of the respondents' comprehension of the format, layout and content of each item, and also of how they weighed up the options to formulate a response. A total of 25 unpaid carers and care workers participated in three iterative rounds of cognitive interviews. The findings indicate that the items were well understood and the concepts were consistent with the item definitions for the standard self-completion version of ASCOT, with minor modifications to the draft ASCOT-Proxy. The ASCOT-Proxy allows respondents to rate the proxy-proxy and proxy-patient perspectives, which improved the acceptability of proxy report. A new proxy-report version of ASCOT was developed, with evidence of its qualitative content validity and acceptability. The ASCOT-Proxy is ready for empirical testing of its suitability for data collection as a self-completion and/or interview questionnaire, and also

  7. Web Application Vulnerabilities

    OpenAIRE

    Yadav, Bhanu

    2014-01-01

    Web application security has been a major issue in information technology since the advent of dynamic web applications. The main objective of this project was to carry out a detailed study of the top three web application vulnerabilities (injection; cross-site scripting; broken authentication and session management), to present situations in which an application can be vulnerable to these web threats, and finally to provide preventative measures against them. ...

  8. Reactivity on the Web

    OpenAIRE

    Bailey, James; Bry, François; Eckert, Michael; Patrânjan, Paula Lavinia

    2005-01-01

    Reactivity, the ability to detect simple and composite events and respond in a timely manner, is an essential requirement in many present-day information systems. With the emergence of new, dynamic Web applications, reactivity on the Web is receiving increasing attention. Reactive Web-based systems need to detect and react not only to simple events but also to complex, real-life situations. This paper introduces XChange, a language for programming reactive behaviour on the Web,...

  9. Population genetic structure and its implications for adaptive variation in memory and the hippocampus on a continental scale in food-caching black-capped chickadees.

    Science.gov (United States)

    Pravosudov, V V; Roth, T C; Forister, M L; Ladage, L D; Burg, T M; Braun, M J; Davidson, B S

    2012-09-01

    Food-caching birds rely on stored food to survive the winter, and spatial memory has been shown to be critical in successful cache recovery. Both spatial memory and the hippocampus, an area of the brain involved in spatial memory, exhibit significant geographic variation linked to climate-based environmental harshness and the potential reliance on food caches for survival. Such geographic variation has been suggested to have a heritable basis associated with differential selection. Here, we ask whether population genetic differentiation and potential isolation among multiple populations of food-caching black-capped chickadees is associated with differences in memory and hippocampal morphology by exploring population genetic structure within and among groups of populations that are divergent to different degrees in hippocampal morphology. Using mitochondrial DNA and 583 AFLP loci, we found that population divergence in hippocampal morphology is not significantly associated with neutral genetic divergence or geographic distance, but instead is significantly associated with differences in winter climate. These results are consistent with variation in a history of natural selection on memory and hippocampal morphology that creates and maintains differences in these traits regardless of population genetic structure and likely associated gene flow. Published 2012. This article is a US Government work and is in the public domain in the USA.

  10. Architecture and the Web.

    Science.gov (United States)

    Money, William H.

    Instructors should be concerned with how to incorporate the World Wide Web into an information systems (IS) curriculum organized across three areas of knowledge: information technology, organizational and management concepts, and theory and development of systems. The Web fits broadly into the information technology component. For the Web to be…

  11. Semantic Web Primer

    NARCIS (Netherlands)

    Antoniou, Grigoris; Harmelen, Frank van

    2004-01-01

    The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this still emerging field, describing its key ideas, languages, and technologies. Suitable for use as a

  12. Evaluating Web Usability

    Science.gov (United States)

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, peer university review and marketing focus group and demographic data were utilized to conduct usability evaluation. The results indicated that…

  13. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  14. Semantic Web status model

    CSIR Research Space (South Africa)

    Gerber, AJ

    2006-06-01

    Full Text Available Semantic Web application areas are experiencing intensified interest due to the rapid growth in the use of the Web, together with the innovation and renovation of information content technologies. The Semantic Web is regarded as an integrator across...

  15. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...... that call for inquiries into the theoretical foundation of bibliographic classification theory....

  16. OpenCache : a content delivery platform for the modern internet

    OpenAIRE

    Broadbent, Matthew Harold; Race, Nicholas

    2016-01-01

    Since its inception, the World Wide Web has revolutionised the way we share information, keep in touch with each other and consume content. In the latter case, it is now used by thousands of simultaneous users to consume video, surpassing physical media as the primary means of distribution. With the rise of on-demand services and more recently, high-definition media, this popularity has not waned. To support this consumption, the underlying infrastructure has been forced to evolve at a rapid ...

  17. Proxy and patient reports of health-related quality of life in a national cancer survey.

    Science.gov (United States)

    Roydhouse, Jessica K; Gutman, Roee; Keating, Nancy L; Mor, Vincent; Wilson, Ira B

    2018-01-05

    Proxy respondents are frequently used in surveys, including those assessing health-related quality of life (HRQOL). In cancer, most research involving proxies has been undertaken with paired proxy-patient populations, where proxy responses are compared to patient responses for the same individual. In these populations, proxy-patient differences are small and suggest proxy underestimation of patient HRQOL. In practice, however, proxy responses will only be used when patient responses are not available. The difference between proxy and patient reports of patient HRQOL where patients are not able to report for themselves in cancer is not known. The objective of this study was to evaluate the difference between patient and proxy reports of patient HRQOL in a large national cancer survey, and to determine whether this difference could be mitigated by adjusting for clinical and sociodemographic information about patients. Data were from the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Patients or their proxies were recruited within 3-6 months of diagnosis with lung or colorectal cancer. HRQOL was measured using the SF-12 mental and physical composite scales. Differences of ½ SD (=5 points) were considered clinically significant. The primary independent variable was proxy status. Linear regression models were used to adjust for patient sociodemographic and clinical covariates, including cancer stage, patient age and education, and patient co-morbidities. Of 6471 respondents, 1011 (16%) were proxies. Before adjustment, average proxy-reported scores were lower for both physical (-6.7 points, 95% CI -7.4 to -5.9) and mental (-6 points, 95% CI -6.7 to -5.2) health. Proxy-reported scores remained lower after adjustment (physical: -5.8 points, -6.6 to -5.0; mental: -5.8 points, -6.6 to -5.0). Proxy-patient score differences remained clinically and statistically significant, even after adjustment for sociodemographic and clinical variables. Proxy-reported outcome scores
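
    A minimal sketch of this kind of covariate adjustment, on synthetic data with an assumed built-in proxy gap (the study's actual model and variables are richer):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 6471
        is_proxy = rng.random(n) < 0.16              # ~16% proxy respondents
        age = rng.normal(68.0, 10.0, n)
        stage = rng.integers(1, 5, n)                # stand-in cancer stage

        # Synthetic SF-12 physical score with a built-in -6 point proxy gap
        score = (45.0 - 6.0 * is_proxy - 0.1 * (age - 68.0)
                 - 1.5 * (stage - 1) + rng.normal(0.0, 8.0, n))

        # Linear model: score ~ proxy status + covariates; the coefficient on
        # proxy status is the adjusted proxy-patient difference.
        X = np.column_stack([np.ones(n), is_proxy, age, stage])
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        print(f"adjusted proxy-patient difference: {beta[1]:.1f} points")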

  18. Carbohydrates and phenols as quantitative molecular vegetation proxies in peats

    Science.gov (United States)

    Kaiser, K.; Benner, R. H.

    2012-12-01

    Vegetation in peatlands is intricately linked to local environmental conditions and climate. Here we use chemical analyses of carbohydrates and phenols to reconstruct paleovegetation in peat cores collected from 56.8°N (SIB04), 58.4°N (SIB06), 63.8°N (G137) and 66.5°N (E113) in the Western Siberian Lowland. Lignin phenols (vanillyl and syringyl phenols) were sensitive biomarkers for vascular plant contributions and provided additional information on the relative contributions of angiosperm and gymnosperm plants. Specific neutral sugar compositions allowed identification of sphagnum mosses, sedges (Cyperaceae) and lichens. Hydroxyphenols released by CuO oxidation were useful tracers of sphagnum moss contributions. The three independent molecular proxies were calibrated with a diverse group of peat-forming plants to yield quantitative estimates (%C) of vascular plant, sphagnum moss and lichen contributions in peat core samples. Correlation analysis indicated that the three molecular proxies produced fairly similar results for paleovegetation composition, generally within the error interval of each approach (≤26%). The lignin-based method generally led to higher estimates of vascular plant vegetation. Several significant deviations were also observed, due to the different reactivities of carbohydrate and phenolic polymers during peat decomposition. Rapid vegetation changes on timescales of 50-200 years were observed in the southern cores SIB04 and SIB06 over the last 2000 years. Vanillyl and syringyl phenol ratios indicated that these vegetation changes were largely due to varying inputs of angiosperm and gymnosperm plants. The northern permafrost cores G137 and E113 showed a more stable development. Lichens briefly replaced sphagnum mosses and vascular plants in both of these cores. Shifts in vegetation did not correlate well with Northern Hemisphere climate variability over the last 2000 years. This suggested that direct climate forcing of peatland dynamics was overridden

  19. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet.Web Services Foundations is the first installment of a two-book collection coverin

  20. Web Security, Privacy & Commerce

    CERN Document Server

    Garfinkel, Simson

    2011-01-01

    Since the first edition of this classic reference was published, World Wide Web use has exploded and e-commerce has become a daily part of business and personal life. As Web use has grown, so have the threats to our security and privacy--from credit card fraud to routine invasions of privacy by marketers to web site defacements to attacks that shut down popular web sites. Web Security, Privacy & Commerce goes behind the headlines, examines the major security risks facing us today, and explains how we can minimize them. It describes risks for Windows and Unix, Microsoft Internet Exp