WorldWideScience

Sample records for underlying cache algorithm

  1. Optimal file-bundle caching algorithms for data-grids

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru

    2004-04-24

The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk, or disk cache, for servicing jobs of file requests. In this environment, a job can be serviced only if all its requested files are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e., sets of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments, since they are not sensitive to inter-file dependencies and may hold irrelevant combinations of files in the cache. We present and analyze a new caching algorithm for maximizing job throughput and minimizing data replacement costs at such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions, such as file request distributions, relative cache size, and file size distribution. In all these cases, the results show significant improvement compared with traditional caching algorithms.
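As an illustrative sketch of why admission and eviction must operate at bundle granularity (this is not the authors' algorithm; the `BundleCache` name and the caller-supplied file sizes are invented for the example), the toy cache below admits and evicts whole bundles under an LRU order, so it never holds a partial, unusable set of files:

```python
from collections import OrderedDict

class BundleCache:
    """Toy bundle-aware cache: files are admitted and evicted in whole
    bundles, so the cache never holds a partial (useless) set."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.bundles = OrderedDict()  # bundle (frozenset) -> total size

    def request(self, bundle, sizes):
        b = frozenset(bundle)
        if b in self.bundles:              # whole bundle present: job serviced
            self.bundles.move_to_end(b)
            return True
        need = sum(sizes[f] for f in b)
        if need > self.capacity:
            return False                   # bundle can never fit
        used = sum(self.bundles.values())
        while used + need > self.capacity: # evict least-recently-used bundles
            _, freed = self.bundles.popitem(last=False)
            used -= freed
        self.bundles[b] = need
        return False                       # miss: bundle staged for next time

sizes = {"a": 1, "b": 1, "c": 1, "d": 1}
cache = BundleCache(capacity=2)
assert cache.request(["a", "b"], sizes) is False  # cold miss, bundle staged
assert cache.request(["a", "b"], sizes) is True   # whole bundle now cached
cache.request(["c", "d"], sizes)                  # evicts bundle {a, b}
assert cache.request(["a", "b"], sizes) is False
```

A popularity-based cache could instead keep, say, `a` and `c` (the two most popular files) and service no job at all, which is the failure mode the abstract describes.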

  2. Engineering a cache-oblivious sorting algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  3. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks…

  4. Efficient cache oblivious algorithms for randomized divide-and-conquer on the multicore model

    OpenAIRE

    Sharma, Neeraj; Sen, Sandeep

    2012-01-01

In this paper we present randomized algorithms for sorting and convex hull that achieve optimal performance (for speed-up and cache misses) on the multicore model with private caches. Our algorithms are cache oblivious and generalize the randomized divide-and-conquer strategy given by Reischuk and by Reif and Sen. Although the approach yielded optimal speed-up in the PRAM model, we require additional techniques to optimize cache misses in an oblivious setting. Under a mild assumption on in…

  5. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

Cache-oblivious algorithms are formulated as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  6. C-Aware: A Cache Management Algorithm Considering Cache Media Access Characteristic in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhu Xudong

    2013-01-01

Data congestion and network delay are important factors that affect the performance of cloud computing systems. Using the local disk of a computing node as a cache can sometimes give better performance than accessing data through the network. This paper presents a storage cache placement algorithm, C-Aware, which traces the history of accesses to the cache and the data source, adaptively decides whether to cache data according to the characteristics of the cache media and the current access environment, and achieves good performance under different workloads on the storage server. We implement this algorithm in both simulated and real environments. Our simulation results using OLTP and WebSearch traces demonstrate that C-Aware adapts better to changes in server workload. Our benchmark results in a real system show that, in the scenario where the local cache is half the size of the data set, C-Aware achieves nearly 80% improvement over traditional methods when the server is not busy, and still delivers comparable performance under high server-side workload.
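A toy decision rule in the spirit of C-Aware's adaptive choice (cache locally only when the local medium beats the load-dependent remote path). All inputs, thresholds, and the doubling penalty for a busy server are invented for illustration, not taken from the paper:

```python
def should_cache(local_read_ms, remote_read_ms, server_busy):
    """Decide whether to place data in the local disk cache.

    Invented heuristic: a busy storage server is modeled as doubling
    the effective remote access latency; caching pays off only when
    the local medium is faster than that effective remote cost.
    """
    expected_remote_ms = remote_read_ms * (2.0 if server_busy else 1.0)
    return local_read_ms < expected_remote_ms

# When the server is loaded, even a mediocre local disk wins;
# when the server is idle and faster, skip the local cache.
assert should_cache(10.0, 8.0, server_busy=True) is True
assert should_cache(10.0, 8.0, server_busy=False) is False
```

A real implementation would estimate these latencies from the access history the abstract mentions, rather than taking them as parameters.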

  7. A Cache-Optimal Alternative to the Unidirectional Hierarchization Algorithm

    DEFF Research Database (Denmark)

    Hupp, Philipp; Jacob, Riko

    2016-01-01

The new algorithm reduces the leading term of the cache misses by a factor of d compared to the unidirectional algorithm, which has been the common standard up to now. The new algorithm is also optimal in the sense that the leading term of the cache misses is reduced to scanning complexity, i.e., every degree of freedom has to be touched once. We also present…

  8. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, whose contents are then compared with x_t. If x_t is not present in the cache at the t-th instant, it is said to be a miss and is loaded into the cache set, possibly forcing the replacement of some other memory line and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
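The sequential baseline that such parallel methods speed up rests on the classic stack-distance characterization of LRU: x_t is a hit iff fewer than C distinct lines were referenced since the last reference to x_t. A minimal sketch (the function name is invented; this O(N*C) version favors clarity over speed):

```python
def lru_misses(trace, C):
    """Return a per-reference miss list for an LRU set of C lines.

    Maintains the LRU stack explicitly: a reference is a hit iff it
    sits within the top C entries of the stack (its stack distance
    is less than C), which is the property the parallel algorithms
    exploit to decide hits for all t at once.
    """
    stack = []              # most recently used line at the front
    misses = []
    for x in trace:
        if x in stack:
            d = stack.index(x)        # stack distance (0-based)
            stack.pop(d)
            misses.append(d >= C)     # hit iff within the top C entries
        else:
            misses.append(True)       # cold miss
        stack.insert(0, x)
    return misses

# One pass of stack distances answers the hit/miss question for
# every cache size C simultaneously, a property central to fast
# trace-driven simulation.
trace = ["a", "b", "a", "c", "b", "a"]
assert lru_misses(trace, 2) == [True, True, False, True, True, True]
```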

  9. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
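For reference, the classical Nussinov recurrence that these cache-efficient variants reorganize can be sketched as follows. This is the textbook O(n^3) baseline with Watson-Crick pairs only (no wobble pairs, a simplifying assumption), traversed in the cache-unfriendly diagonal order the paper improves on:

```python
def nussinov(seq, min_loop=0):
    """Classical Nussinov DP: N[i][j] = max number of complementary
    base pairs in seq[i..j]. Filled by increasing span j - i."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = N[i][j - 1]                # base j left unpaired
            for k in range(i, j - min_loop):  # base j paired with base k
                if (seq[k], seq[j]) in pairs:
                    left = N[i][k - 1] if k > i else 0
                    best = max(best, left + N[k + 1][j - 1] + 1)
            N[i][j] = best
    return N[0][n - 1] if n else 0

assert nussinov("GCAU") == 2   # G-C and A-U can both pair
```

The cache behavior the paper analyzes comes from how `N[i][j]` accesses row `N[i][..]` and column `N[..][j]` with very different strides; the ByRow/ByBox variants reorder the computation so both access streams stay cache-resident.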

  10. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution…

  11. Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, Rolf; Meyer, U.

    2004-01-01

We present improved cache-oblivious data structures and algorithms for breadth-first search and the single-source shortest path problem on undirected graphs with non-negative edge weights. Our results remove the performance gap between the currently best cache-aware algorithms for these problems…

  12. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

In this study, we developed hybrid control algorithms in smart base stations (SBSs) along with devised communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload computation from mobile user equipment and to cache data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  13. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O-optimal cache-oblivious comparison-based sorting is not possible without a tall-cache assumption, and (2) there does not exist an I/O-optimal cache-oblivious algorithm for permuting…

  14. A Class-Based Least-Recently-Used Caching Algorithm for WWW Proxies Proceedings

    NARCIS (Netherlands)

    Khayari el Abdouni, Rachid; Sadre, R.; Haverkort, Boudewijn R.H.M.; Kemper, P.; Sanders, W.H.

    2003-01-01

In this paper we study and analyze the influence of caching strategies on the performance of WWW proxies. We propose a new strategy, class-based LRU, that is both recency-based and size-based, with the ultimate aim of obtaining a well-balanced mixture of large and small documents in the cache.
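A minimal sketch of the class-based idea, assuming invented size-class boundaries and per-class byte budgets (the paper's actual parameters differ): each size class gets its own LRU list and budget, so a burst of large documents can only evict other large documents, never flush out the small ones:

```python
from collections import OrderedDict

class ClassBasedLRU:
    """Toy class-based LRU: documents are partitioned by size into
    classes, each with its own LRU order and byte budget. Eviction
    happens only within the class of the incoming document."""

    def __init__(self, class_bounds, class_budgets):
        self.bounds = class_bounds    # upper size bound per class
        self.budgets = class_budgets  # byte budget per class
        self.classes = [OrderedDict() for _ in class_bounds]

    def _class_of(self, size):
        return next(i for i, b in enumerate(self.bounds) if size <= b)

    def access(self, doc, size):
        c = self._class_of(size)
        lru = self.classes[c]
        if doc in lru:
            lru.move_to_end(doc)
            return True                   # hit
        while lru and sum(lru.values()) + size > self.budgets[c]:
            lru.popitem(last=False)       # evict within the class only
        if size <= self.budgets[c]:
            lru[doc] = size
        return False                      # miss

cache = ClassBasedLRU(class_bounds=[10, 10**6], class_budgets=[20, 100])
cache.access("small.html", 5)
cache.access("big.iso", 90)                     # lands in the large class...
assert cache.access("small.html", 5) is True    # ...so small.html survives
```

Under plain LRU with a shared budget, `big.iso` could have evicted `small.html`; the per-class budgets are what keep the size mixture balanced.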

  15. Test data generation for LRU cache-memory testing

    OpenAIRE

    Evgeni, Kornikhin

    2009-01-01

System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents for a given behavior of an assembly program (with cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes it in detail only for the basic types of cache memory: fully associative caches and direct-mapped caches.

  16. Cooperative Proxy Caching for Wireless Base Stations

    Directory of Open Access Journals (Sweden)

    James Z. Wang

    2007-01-01

This paper proposes a mobile cache model to facilitate cooperative proxy caching in wireless base stations. This mobile cache model uses a network cache line to record the caching state information about a web document for effective data search and cache space management. Based on the proposed mobile cache model, a P2P cooperative proxy caching scheme is proposed that uses a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to network and geographic environment changes, to achieve efficient data search, data caching and data replication. Based on demand, the aggregate effect of the data caching, searching and replicating actions of individual proxy servers automatically migrates the cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to move and replicate the heads of the network cache lines of web documents associated with a moving mobile host to the new base station during the mobile host handoff. These replicated cache line heads provide direct links to the cached web documents accessed by the moving mobile host in the previous base station, thus improving mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.

  17. Web Caching

    Indian Academy of Sciences (India)

    The user may never realize that the cache is between the client and server except in special circumstances. It is important to distinguish between Web cache and a proxy server as their functions are often misunderstood. Proxy servers serve as an intermediary to place a firewall between network users and the outside world.

  18. Web Caching

    Indian Academy of Sciences (India)

Web Caching – A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. General Article, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World wide web; data caching; internet traffic; web page access.

  19. Web Caching

    Indian Academy of Sciences (India)

The World Wide Web has been growing in leaps and bounds. Studies have indicated that this massive distributed system can benefit greatly by making use of appropriate caching methods. Intelligent Web caching can lessen the burden …

  20. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.

  1. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorithms…

  2. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase the load on the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories are still a controversial issue: in some studies strong coherency is far too expensive to be used in the Web context, while in other studies it can be maintained at low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show differences in the outcome of the simulation of a Web cache depending on the workload being used and on the probability distribution used to approximate updates to the cached documents. Each experiment presents two case studies that outline the impact of the considered parameter on the performance of the cache.
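Weak coherency is commonly approximated by TTL-style freshness checks: serve the cached copy while its age is below a freshness lifetime, and revalidate with the origin afterwards. The sketch below is a generic illustration of that mechanism (field names and the default lifetime are invented), not one of the specific methods evaluated in the study:

```python
def is_fresh(entry, now, default_ttl=60.0):
    """Weak-coherency freshness check: a cached document is served
    without contacting the origin while its age is below its TTL.
    Real caches derive the lifetime from Expires/Cache-Control
    headers or heuristically from the Last-Modified age."""
    ttl = entry.get("ttl", default_ttl)
    return (now - entry["fetched_at"]) < ttl

entry = {"fetched_at": 100.0, "ttl": 30.0}
assert is_fresh(entry, now=120.0) is True   # 20s old: serve from cache
assert is_fresh(entry, now=140.0) is False  # stale: revalidate with origin
```

Strong coherency, by contrast, would require an origin round-trip or server-driven invalidation on every access, which is exactly the cost trade-off the abstract describes.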

  3. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, the use of proxy caches and replication provides a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching, giving an overview of existing research from an operational research point of view and putting forward avenues for possible further research. This area of research is in its infancy, and the emphasis is on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.

  4. Mobility- Aware Cache Management in Wireless Environment

    Science.gov (United States)

    Kaur, Gagandeep; Saini, J. S.

    2010-11-01

In infrastructure wireless environments, a base station provides communication links between mobile clients and remote servers. Placing a proxy cache at the base station is an effective way of managing the wireless Internet bandwidth efficiently. However, under non-uniform heavy traffic, the requests of all the mobile clients in the service area of the base station may overload the cache: overload occurs when the proxy cache has to release cache space to serve a new mobile client in the environment. In this paper, we propose a novel cache management strategy to decrease the penalty of overloaded traffic on the proxy and to reduce the number of remote accesses by increasing the cache hit ratio. We predict the number of overloads ahead of time based on their history and adapt the cache to the heavy traffic so as to provide continuous and fair service to current and incoming mobile clients. We have tested the algorithms over a real implementation of the cache management system in the presence of fault tolerance and security. Our cache replacement algorithm jointly considers client mobility, the predicted number of overloads, the sizes of the cached packets and their access frequencies. Performance results show that our cache management strategy outperforms existing policies, with fewer overloads and a higher cache hit ratio.

  5. FY08 LDRD Final Report LOCAL: Locality-Optimizing Caching Algorithms and Layouts

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P

    2009-02-27

    This project investigated layout and compression techniques for large, unstructured simulation data to reduce bandwidth requirements and latency in simulation I/O and subsequent post-processing, e.g. data analysis and visualization. The main goal was to eliminate the data-transfer bottleneck - for example, from disk to memory and from central processing unit to graphics processing unit - through coherent data access and by trading underutilized compute power for effective bandwidth and storage. This was accomplished by (1) designing algorithms that both enforce and exploit compactness and locality in unstructured data, and (2) adapting offline computations to a novel stream processing framework that supports pipelining and low-latency sequential access to compressed data. This report summarizes the techniques developed and results achieved, and includes references to publications that elaborate on the technical details of these methods.

  6. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory ...

  7. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many…

  8. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    Science.gov (United States)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate, backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours, and service them to the edge at peak periods. To intelligently prefetch, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allow for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
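The Q-learning update at the core of such a caching policy is standard; below is a tabular sketch in which the states and actions are toy stand-ins (a real SB state would encode cache contents and popularity estimates, and an action would select which files to prefetch), so everything beyond the update rule itself is an invented illustration:

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy environment: caching the popular file (action 1) earns reward 1
# only when demand is "high"; the agent should learn to cache then.
random.seed(0)
Q = {s: {0: 0.0, 1: 0.0} for s in ("low", "high")}
for _ in range(200):
    s = random.choice(("low", "high"))
    a = random.choice((0, 1))
    r = 1.0 if (s == "high" and a == 1) else 0.0
    q_update(Q, s, a, r, s)
assert Q["high"][1] > Q["high"][0]   # learned: prefetch when demand is high
```

The linear function approximation mentioned in the abstract replaces the table `Q` with a weight vector over state-action features, which is what makes the approach scale to the massive content catalogs of real SBs.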

  9. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications

  10. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke

    2014-01-01

The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key takes only an expected average of t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works… We first show that linear probing can be easily made cache-oblivious, but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two…

  11. Stack Caching Using Split Data Caches

    DEFF Research Database (Denmark)

    Nielsen, Carsten; Schoeberl, Martin

    2015-01-01

In most embedded and general purpose architectures, stack data and non-stack data is cached together, meaning that writing to or loading from the stack may expel non-stack data from the data cache. Manipulation of the stack has a different memory access pattern than that of non-stack data, showing higher temporal and spatial locality. We propose caching stack and non-stack data separately and develop four different stack caches that allow this separation without requiring compiler support. These are the simple, window, and prefilling with and without tag stack caches. The performance of the stack…

  12. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application by avoiding multiple read operations for the same data. This paper addresses caching problems from a pattern perspective. Both caching and caching strategies, like primed and on-demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a way to avoid repeatedly reacquiring expensive resources. The Primed Cache pattern is applied in situations in which the set of required resources, or at least a part of it, can be predicted, while the Demand Cache pattern is applied whenever the required resource set cannot be predicted or is unfeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.

  13. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

A big data platform for equipment condition assessment is built for comprehensive analysis. The platform serves various applications, which can be divided by response time into offline, interactive and real-time types. For real-time applications, data processing efficiency is important. In general, data caching is one of the most efficient ways to improve query time. However, big data caching differs from traditional data caching. In this paper we propose a distributed cache management framework for big data in equipment condition assessment. It consists of three parts: the cache structure, the cache replacement algorithm and the cache placement algorithm. The cache structure is the basis of the latter two algorithms. Based on the framework and algorithms, we exploit the fact that only some valuable data is accessed during any given period of time, and place related data on neighboring nodes, which largely reduces network transmission cost. We also validate the performance of our proposed approaches through extensive experiments, which demonstrate that the proposed cache replacement algorithm and cache management framework achieve a higher hit rate and lower query time than the LRU and round-robin algorithms.

  14. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state...... exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states and unmet states)....

  15. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response times. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries, called WATCHMAN, which is particularly well suited for data warehousing environments. Our cache manager employs two novel, complementary algorithms for cache replacement and cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
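A profit metric of this kind can be sketched as follows; the exact formula (reference rate times query cost divided by size) is an assumption inferred from the abstract, not the paper's definition.

```python
# Hedged sketch of a WATCHMAN-style profit metric for cache replacement:
# each cached retrieved set is scored by (reference rate * query cost /
# size), and the entry with the lowest profit is evicted first. The formula
# is an illustrative assumption, not the paper's exact definition.

def profit(entry):
    return entry["rate"] * entry["cost"] / entry["size"]

def evict_victim(cache_entries):
    # Swap out the entire retrieved set with the least profit.
    return min(cache_entries, key=profit)

entries = [
    {"name": "q1", "rate": 0.5, "cost": 100, "size": 50},   # profit 1.0
    {"name": "q2", "rate": 0.1, "cost": 300, "size": 10},   # profit 3.0
    {"name": "q3", "rate": 0.9, "cost": 20,  "size": 40},   # profit 0.45
]
victim = evict_victim(entries)
```

Note how the metric keeps a rarely referenced set (q2) because re-executing its expensive query would cost more than re-reading a cheap, frequently referenced one.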

  16. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in a Mobile Ad hoc Network (MANET). Maintaining the cache is a challenging issue in a large MANET due to mobility, cache size and power constraints. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offer low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate better replaceable data, based on neighbours' interest and the fitness value of cached data, to store newly arrived data. This work also elects an ideal cluster head (CH) using the meta-heuristic Ant Colony Optimization (ACO) search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and their speed increase.

  17. Simulation of flow and habitat conditions under ice, Cache la Poudre River - January 2006

    Science.gov (United States)

    Waddle, Terry

    2007-01-01

    The U.S. Forest Service authorizes the occupancy and use of Forest Service lands by various projects, including water storage facilities, under the Federal Land Policy and Management Act. Federal Land Policy and Management Act permits can be renewed at the end of their term. The U.S. Forest Service analyzes the environmental effects for the initial issuance or renewal of a permit and the terms and conditions (for example, mitigation plans) contained in the permit for the facilities. The U.S. Forest Service is preparing an environmental impact statement (EIS) to determine the conditions for the occupancy and use for Long Draw Reservoir on National Forest System administered lands. The scope of the EIS includes evaluating current operations and effects to fish habitat of an ongoing winter release of 0.283 m3/s (10 ft3/s) from headwater reservoirs as part of a previously issued permit. The field conditions observed during this study included this release.

  18. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    newcomers and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the ``food for thought'' presented in this book will inspire the readers to develop even better ideas for designing ``green'' processors of tomorrow.

  19. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can be analyzed statically. We present algorithms that derive worst-case bounds on the latency-inducing operations of the stack cache. Their results can be used by a static WCET tool. By breaking the analysis down into subproblems that solve intra-procedural data-flow analysis and path searches on the call-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  20. Cache-Cache Comparison for Supporting Meaningful Learning

    Science.gov (United States)

    Wang, Jingyun; Fujino, Seiji

    2015-01-01

    The paper presents a meaningful discovery learning environment called "cache-cache comparison" for a personalized learning support system. The processing of seeking hidden relations or concepts in "cache-cache comparison" is intended to encourage learners to actively locate new knowledge in their knowledge framework and check…

  1. Tag-Split Cache for Efficient GPGPU Cache Utilization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lingda; Hayes, Ari; Song, Shuaiwen; Zhang, Eddy

    2016-06-01

    Modern GPUs employ caches to improve memory system efficiency. However, a large amount of cache space is underutilized due to the irregular memory accesses and poor spatial locality commonly exhibited in GPU applications. Our experiments show that using smaller cache lines could improve cache space utilization, but it also frequently suffers significant performance loss by introducing a large number of extra cache requests. In this work, we propose a novel cache design named tag-split cache (TSC) that enables fine-grained cache storage to address the problem of cache space underutilization while keeping the number of memory requests unchanged. TSC divides the tag into two parts to reduce storage overhead, and it supports multiple cache line replacements in one cycle.

  2. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in distributed environments has found its application in many fields of science (Nuclear and Particle Physics (NPP), astronomy and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001-90% of the complete data-set and a low-watermark within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss the different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the choice of the best algorithm in the context of physics data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storages with partial replicas of the entire data-set
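The core of such a trace-driven simulation is small: replay an access trace against a fixed-size cache under a given policy and count hits. The trace and cache size below are illustrative, not taken from the study.

```python
# Sketch of a trace-driven cache simulation of the kind described above:
# replay an access trace against an LRU cache of fixed capacity and
# measure the hit rate.

from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    cache = OrderedDict()          # key -> None, ordered by recency
    hits = 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[item] = None
    return hits / len(trace)

# A trace with strong temporal locality favors LRU.
trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "c"]
rate = lru_hit_rate(trace, capacity=3)
```

Sweeping `capacity` over a range of values (as the study does with cache sizes from 0.001% to 90% of the data-set) yields the hit-rate curves used to compare algorithms.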

  3. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    Science.gov (United States)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also stem from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, under which cache lines reside in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required. Therefore, a fast and reliable shared memory management system is needed to execute algorithms for processing vast amounts of specimen images. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near-distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
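A middle-insertion, limited-promotion policy of this general shape can be sketched as follows. The specific insertion point (midway) and promotion distance (2 positions) are assumptions for illustration, not the paper's exact parameters.

```python
# Hedged sketch of a middle-insertion / limited-promotion replacement
# policy in the spirit of MI2PP: new lines enter at a fixed middle position
# of the recency stack instead of the MRU end, and a hit promotes a line by
# a fixed number of positions rather than straight to the front.

class MiddleInsertPromote:
    def __init__(self, ways, insert_pos=None, promote_by=2):
        self.ways = ways
        self.stack = []                       # index 0 = MRU end
        self.insert_pos = ways // 2 if insert_pos is None else insert_pos
        self.promote_by = promote_by

    def access(self, tag):
        if tag in self.stack:                 # hit: promote a few positions
            i = self.stack.index(tag)
            j = max(0, i - self.promote_by)
            self.stack.insert(j, self.stack.pop(i))
            return True
        if len(self.stack) >= self.ways:      # miss: evict from LRU end
            self.stack.pop()
        pos = min(self.insert_pos, len(self.stack))
        self.stack.insert(pos, tag)           # insert mid-stack, not at MRU
        return False

s = MiddleInsertPromote(ways=4)
for t in ["a", "b", "c", "d"]:
    s.access(t)
hit = s.access("a")
```

Because a newly inserted line starts mid-stack, a line that is never re-referenced reaches the eviction end quickly, which is what limits the thrashing LRU suffers under streaming access patterns.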

  4. The Effect of Garbage Collection on Cache Performance

    National Research Council Canada - National Science Library

    Zorn, Benjamin

    1991-01-01

    .... This paper describes the use of trace-driven simulation to estimate the effect of garbage collection algorithms on cache performance. Traces from four large Common Lisp programs have been collected...

  5. Improving data caching for software MPEG video decompression

    Science.gov (United States)

    Feng, Wu-chi; Sechrest, Stuart

    1996-03-01

    Software implementations of MPEG decompression provide flexibility at low cost but suffer performance problems, including poor cache behavior. For MPEG video, decompressing the video in the implied order does not take advantage of the coherence generated by dependent macroblocks and, therefore, undermines the effectiveness of processor caching. In this paper, we investigate the caching performance gains available to algorithms that use different traversal orders to decompress these MPEG streams. We have found that the total cache miss rate can be reduced considerably at the expense of a small increase in instructions. To show the potential gains available, we have implemented the different traversal orders using the standard Berkeley MPEG player. Without optimizing the MPEG decompression code itself, we are able to obtain better cache performance for the traversal orders examined. In one case, faster decompression rates are achieved by making better use of processor caching, even though additional overhead is introduced to implement the different traversal algorithm. With better instruction-level support in future architectures, low cache miss rates will be crucial for the overall performance of software MPEG video decompression.

  6. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    computational algorithms employed at NASA Ames require different programming styles on vector machines and cache-based machines, respectively, neither architecture class appeared to be favored by particular algorithms in principle. Practice tells us that the situation is more complicated. This report presents observations and some analysis of performance tuning for cache-based systems. We point out several counterintuitive results that serve as a cautionary reminder that memory accesses are not the only factors that determine performance, and that within the class of cache-based systems, significant differences exist.

  7. Enabling Efficient Dynamic Resizing of Large DRAM Caches via A Hardware Consistent Hashing Mechanism

    OpenAIRE

    Chang, Kevin K.; Loh, Gabriel H.; Thottethodi, Mithuna; Eckert, Yasuko; O'Connor, Mike; Manne, Srilatha; Hsu, Lisa; Subramanian, Lavanya; Mutlu, Onur

    2016-01-01

    Die-stacked DRAM has been proposed for use as a large, high-bandwidth, last-level cache with hundreds or thousands of megabytes of capacity. Not all workloads (or phases) can productively utilize this much cache space, however. Unfortunately, the unused (or under-used) cache continues to consume power due to leakage in the peripheral circuitry and periodic DRAM refresh. Dynamically adjusting the available DRAM cache capacity could largely eliminate this energy overhead. However, the current p...
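The consistent-hashing idea invoked in the title can be illustrated in software: keys map onto a hash ring of slots, so adding or removing slots (here, resizing the cache) remaps only the keys that belonged to the affected slots. This is a generic sketch of the technique, not the paper's hardware mechanism.

```python
# Generic consistent-hashing sketch: removing a slot from the ring remaps
# only the keys that were on that slot; everything else keeps its mapping.
# Slot names and virtual-node count are illustrative.

import hashlib
from bisect import bisect

def _h(value):
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, slots, vnodes=64):
        # Each slot owns many points on the ring to balance load.
        self.ring = sorted(
            (_h(f"{s}#{v}"), s) for s in slots for v in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    def lookup(self, key):
        # A key belongs to the first slot point at or after its hash.
        i = bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[i][1]

keys = [f"block{i}" for i in range(1000)]
before = ConsistentHashRing(["s0", "s1", "s2", "s3"])
after = ConsistentHashRing(["s0", "s1", "s2"])       # shrink: drop s3
# Keys that were NOT on the removed slot must keep their mapping.
moved = sum(1 for k in keys
            if before.lookup(k) != after.lookup(k)
            and before.lookup(k) != "s3")
```

With a naive modulo mapping (`hash % num_slots`), shrinking from 4 to 3 slots would remap roughly three quarters of all keys; consistent hashing remaps only the removed slot's share.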

  8. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System.

    Science.gov (United States)

    Xiong, Lian; Yang, Liu; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-14

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay.

  9. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    Science.gov (United States)

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897

  10. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca

  11. Random Fill Cache Architecture (Preprint)

    Science.gov (United States)

    2014-10-01


  12. An investigation of DUA caching strategies for public key certificates

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Terry Ching [Univ. of California, Davis, CA (United States)

    1993-11-01

    Internet Privacy Enhanced Mail (PEM) provides security services to users of Internet electronic mail. PEM is designed with the intention that it will eventually obtain public key certificates from the X.500 directory service. However, such a capability is not present in most PEM implementations today. While the prevalent PEM implementation uses a public key certificate-based strategy, certificates are mostly distributed via e-mail exchanges, which raises several security and performance issues. In this thesis research, we changed the reference PEM implementation to make use of the X.500 directory service instead of local databases for public key certificate management. The thesis discusses some problems with using the X.500 directory service, explores the relevant issues, and develops an approach to address them. The approach makes use of a memory cache to store public key certificates. We implemented a centralized cache server and addressed the denial-of-service security problem that is present in the server. In designing the cache, we investigated several cache management strategies. One result of our study is that the use of a cache significantly improves performance. Our research also indicates that security incurs extra performance cost. Different cache replacement algorithms do not seem to yield significant performance differences, while delaying dirty-writes to the backing store does improve performance over immediate writes.
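The delayed dirty-write strategy the thesis found beneficial can be sketched as a write-back cache: writes mark entries dirty in memory and are persisted in batches rather than immediately. Class and method names are illustrative.

```python
# Sketch of delayed dirty-writes (write-back caching): writes only mark
# cache entries dirty; the backing store is updated in a batched flush
# pass instead of on every write.

class WriteBackCache:
    def __init__(self, backing):
        self.backing = backing     # dict standing in for the backing store
        self.entries = {}          # key -> cached value
        self.dirty = set()         # keys written but not yet persisted

    def read(self, key):
        if key not in self.entries:
            self.entries[key] = self.backing[key]   # demand fill
        return self.entries[key]

    def write(self, key, value):
        self.entries[key] = value
        self.dirty.add(key)        # delay the backing-store write

    def flush(self):
        for key in self.dirty:     # one batched write-back pass
            self.backing[key] = self.entries[key]
        self.dirty.clear()

store = {"cert1": "v1"}
cache = WriteBackCache(store)
cache.write("cert1", "v2")
stale = store["cert1"]     # backing store not yet updated
cache.flush()
fresh = store["cert1"]     # updated only after the flush
```

The trade-off matches the thesis's observation: immediate writes keep the store consistent at all times, while delayed writes amortize store traffic at the cost of a window where the store is stale.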

  13. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more...... addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while...... keeping the timepredictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses. Furthermore...

  14. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

    Full Text Available In multitasking real-time systems, the choice of scheduling algorithm is an important factor to ensure that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which then affect the schedulability of the task. In this paper we compare the FP and EDF scheduling algorithms in the presence of CRPD using the state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task layout optimisation techniques can be applied to allow FP to schedule tasksets at a similar processor utilisation to EDF, thus making the choice of the task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than it is to switch the scheduling algorithm.
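How CRPD enters schedulability analysis can be shown with standard fixed-priority response-time analysis: each pre-emption by a higher-priority task costs its execution time plus a CRPD term. This is the textbook recurrence with a simplified per-pre-emption penalty, not the paper's state-of-the-art CRPD analysis; task parameters are illustrative.

```python
# Fixed-priority response-time analysis with a simple CRPD term: each
# release of a higher-priority task j adds its execution time C_j plus a
# cache-related pre-emption delay gamma_j. Iterate to a fixed point.

from math import ceil

def response_time(task, higher_prio, max_iter=100):
    C, T = task
    R = C
    for _ in range(max_iter):
        R_next = C + sum(ceil(R / Tj) * (Cj + gamma)
                         for Cj, Tj, gamma in higher_prio)
        if R_next == R:
            return R              # fixed point: worst-case response time
        R = R_next
    return None                   # no convergence within the bound

# Low-priority task (C=4, T=20) pre-empted by a task with C=2, T=10 that
# inflicts a CRPD of 1 per pre-emption.
R = response_time((4, 20), [(2, 10, 1)])
```

Setting `gamma` to 0 recovers the classical CRPD-free analysis, which is exactly the comparison the paper makes: the gap between the two responses is the schedulability cost of cache pre-emption effects.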

  15. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    String comparison such as sequence alignment, edit distance computation, longest common subsequence computation, and approximate string matching is a key task (and often computational bottleneck) in large-scale textual information retrieval. For instance, algorithms for sequence alignment......-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm....... Additionally, our new algorithm generalizes the best known theoretical complexity trade-offs for the problem....

  16. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  17. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
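The stack distances these algorithms compute are the key to LRU simulation: the stack distance of a reference is its position in the LRU stack at the moment of access, and a reference hits in an LRU cache of size C exactly when its distance is at most C. A serial sketch (the parallel versions in the paper distribute this computation):

```python
# Serial stack-distance computation for LRU simulation: one pass over the
# trace yields hit counts for every cache size simultaneously, since a
# reference with stack distance d hits in any LRU cache of size >= d.

def stack_distances(trace):
    stack = []                     # LRU stack, index 0 = most recent
    dists = []
    for item in trace:
        if item in stack:
            d = stack.index(item) + 1   # position from the top
            stack.remove(item)
        else:
            d = float("inf")            # cold miss in a cache of any size
        stack.insert(0, item)
        dists.append(d)
    return dists

trace = ["a", "b", "a", "c", "b", "a"]
dists = stack_distances(trace)
# Hit count for an LRU cache of size 2:
hits_c2 = sum(1 for d in dists if d <= 2)
```

This is why computing all stack distances is worth the extra complexity: one simulation run characterizes every cache size at once, instead of one run per size.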

  18. CALCULATION ALGORITHM TRUSS UNDER CRANE BEAMS

    Directory of Open Access Journals (Sweden)

    N. K. Akaev

    2016-01-01

    Full Text Available Aim. The tasks of reducing the deflection and increasing the rigidity of single-span beams are addressed, and a calculation algorithm for truss crane girders is presented. Methods. To determine the internal forces required for the selection of the cross sections of elements, the design uses the Green's function. Results. It was found that the simplest truss system reduces deflection and increases the strength of the design. The upper crossbar is subjected not only to bending and shear but also to compression due to the tie-rod tension. A preliminary determination of the geometrical characteristics of the elements of crane trusses is proposed by comparison with previously designed trusses of similar configuration, using simple approximate calculation methods. Conclusion. A method of sequentially moving (incrementally) the two bridge cranes along the length of the upper crossbar of the truss beam is suggested. The corresponding formulas and safety conditions are given.

  19. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    Caches are essential to bridge the gap between the high latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution times...... different data areas, such as stack, global data, and heap allocated data, share the same cache. Some addresses are known statically, other addresses are only known at runtime. With a standard cache organization all those different data areas must be considered by worst-case execution time analysis...

  20. Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Changwan; Bao, Wenlei; Cohen, Albert; Krishnamoorthy, Sriram; Pouchet, Louis-noel; Rastello, Fabrice; Ramanujam, J.; Sadayappan, Ponnuswamy

    2016-06-02

    Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than the cache capacity. Array padding (increasing the size of array dimensions) is a well known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
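Why padding helps can be shown by counting the cache sets a column walk touches. With a power-of-two row size, every element of a column maps to the same set; padding the row by one cache line spreads the accesses across sets. The cache geometry below (64 B lines, 256 sets, 8 B doubles) is illustrative.

```python
# Demonstration of conflict misses from power-of-two array strides, and how
# padding fixes them: count distinct cache sets touched when walking one
# column of a row-major 2-D array. Cache geometry is illustrative.

LINE = 64                 # bytes per cache line
SETS = 256                # number of sets in the cache
ELEM = 8                  # bytes per element (double)

def sets_touched(row_elems, rows):
    # Byte addresses of elements (i, 0) for each row i of the column walk.
    addrs = [i * row_elems * ELEM for i in range(rows)]
    return len({(a // LINE) % SETS for a in addrs})

unpadded = sets_touched(2048, rows=64)   # 16384-byte stride: one set only
padded = sets_touched(2048 + 8, rows=64) # pad the row by one cache line
```

With a 2048-element row every access lands in a single set, so even a 64-row working set thrashes a 256-set cache; the 8-element pad makes consecutive rows land in consecutive sets.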

  1. Cache-oblivious string dictionaries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2006-01-01

    We present static cache-oblivious dictionary structures for strings which provide analogues of tries and suffix trees in the cache-oblivious model. Our construction takes as input either a set of strings to store, a single string for which all suffixes are to be stored, a trie, a compressed trie, or a suffix tree, and creates a cache-oblivious data structure which performs prefix queries in O(log_B n + |P|/B) I/Os, where n is the number of leaves in the trie, P is the query string, and B is the block size. This query cost is optimal for unbounded alphabets. The data structure uses linear space.

  2. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    The popularity of cloud computing is increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  3. Temperature and leakage aware techniques to improve cache reliability

    Science.gov (United States)

    Akaaboune, Adil

    stored in the cache to reduce power consumption. The initial work done on this subject focuses on the type of data that increases leakage consumption and ways to manage without impacting the performance of the microprocessor. The second phase of the project focuses on managing the data storage in different blocks of the cache to smooth the leakage power as well as dynamic power consumption. The last technique is a voltage controlled cache to reduce the leakage consumption of the cache while in execution and even in idle state. Two blocks of the 4-way set associative cache go through a voltage regulator before getting to the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement algorithm information to increase or decrease voltage of the two blocks depending on the need of the information stored on them.

  4. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analyses of caches which are hard......

  5. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object to a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and for new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications

  6. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  7. BACKSTEPPING ALGORITHM FOR LINEAR SISO PLANTS UNDER STRUCTURAL UNCERTAINTIES

    Directory of Open Access Journals (Sweden)

    I. B. Furtat

    2016-01-01

    Full Text Available The robust algorithm is proposed for parametrically and structurally uncertain linear plants under external bounded disturbances. The structural uncertainty is an unknown dynamic order of the plant model. The developed algorithm provides plant output tracking of a smooth bounded reference signal with a required accuracy in finite time. It is assumed that only the scalar input and output of the plants are available for measurement, but not their derivatives. For the synthesis of the control algorithm we use a modified backstepping algorithm. The synthesis of the control algorithm is separated into r steps, where r is an upper bound of the relative degree of the control plant model. At each step we synthesize auxiliary controls that stabilize each subsystem about zero. At the last step we synthesize a basic control law, which provides output tracking of a smooth reference signal. It is shown that for the implementation of the algorithm we need to use only one filter of the control signal and the simplified control laws obtained by application of real-derivative elements. This significantly simplifies the calculation and implementation of the control system. Numerical examples and results of computer simulation are given, illustrating the operation of the proposed scheme.

  8. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  9. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows...

  10. Cache Complexity and Multicore Implementation for Univariate Real Root Isolation

    International Nuclear Information System (INIS)

    Chen Changbo; Moreno Maza, Marc; Xie Yuzhen

    2012-01-01

    We present parallel algorithms with optimal cache complexity for the kernel routine of many real root isolation algorithms, namely the Taylor shift by 1. We then report on multicore implementation for isolating the real roots of univariate polynomials with integer coefficients based on a classical algorithm due to Vincent, Collins and Akritas. For processing some well-known benchmark examples with sufficiently large size, our software tool reaches linear speedup on an 8-core machine. In addition, we show that our software is able to fully utilize the many cores and the memory space of a 32-core machine to tackle large problems that are out of reach for a desktop implementation.

  11. Client-Driven Joint Cache Management and Rate Adaptation for Dynamic Adaptive Streaming over HTTP

    Directory of Open Access Journals (Sweden)

    Chenghao Liu

    2013-01-01

    Full Text Available Due to the fact that proxy-driven proxy cache management and the client-driven streaming solution of Dynamic Adaptive Streaming over HTTP (DASH) are two independent processes, some difficulties and challenges arise in media data management at the proxy cache and rate adaptation at the DASH client. This paper presents a novel client-driven joint proxy cache management and DASH rate adaptation method, named CLICRA, which moves prefetching intelligence from the proxy cache to the client. Based on the philosophy of CLICRA, this paper proposes a rate adaptation algorithm, which selects bitrates for the next media segments to be requested by using the predicted buffered media time in the client. CLICRA is realized by conveying information on the segments that are likely to be fetched subsequently to the proxy cache so that it can use the information for prefetching. Simulation results show that the proposed method outperforms the conventional segment-fetch-time-based rate adaptation and the proxy-driven proxy cache management significantly, not only in streaming quality at the client but also in bandwidth and storage usage in proxy caches.
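The buffer-driven selection step can be sketched as follows. This is a toy stand-in, not the CLICRA algorithm itself: the function name and the 5 s / 15 s thresholds are invented for illustration; only the principle of choosing a bitrate from the predicted buffered media time comes from the abstract.

```python
def select_bitrate(predicted_buffer_s, bitrates_bps, low_s=5.0, high_s=15.0):
    """Pick a bitrate for the next segment from the predicted buffered
    media time: near-empty buffer -> lowest rate, comfortable buffer ->
    highest rate, otherwise scale linearly between the two extremes."""
    rates = sorted(bitrates_bps)
    if predicted_buffer_s <= low_s:
        return rates[0]                      # drain risk: be conservative
    if predicted_buffer_s >= high_s:
        return rates[-1]                     # plenty buffered: go high
    frac = (predicted_buffer_s - low_s) / (high_s - low_s)
    idx = int(frac * (len(rates) - 1))       # interpolate across the ladder
    return rates[idx]
```

With a four-rung ladder, a 2 s buffer maps to the lowest rung and a 20 s buffer to the highest; intermediate buffers pick intermediate rungs.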

  12. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. The findings show that security in the cloud can be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  13. A novel coordinated edge caching with request filtration in radio access network.

    Science.gov (United States)

    Li, Yang; Xu, Yuemei; Lin, Tao; Wang, Xiaohui; Ci, Song

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches in a coordinated manner, in order to increase the overall mobile network capacity to support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a RAN. Request filtration makes the best use of the limited bandwidth and in turn ensures the good performance of the coordinated caching. Moreover, the storage at mobile devices is also used to further reduce the backhaul traffic and improve the users' experience. In addition, we derive the optimal cache division with the aim of reducing the average latency perceived by users. The simulation results show that the proposed scheme outperforms existing algorithms.
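The request-filtration step can be illustrated with a minimal sketch: concurrent requests for the same content are collapsed so the backhaul fetches each item once, and all waiting clients share the single response. The class and method names are illustrative assumptions; the paper's scheme additionally coordinates multiple microcaches and uses asynchronous multicast.

```python
class RequestFilter:
    """Collapse concurrent requests for the same content so that the
    backhaul fetches each item only once; clients that arrive while a
    fetch is in flight simply join the waiting list."""
    def __init__(self):
        self.pending = {}          # content_id -> list of waiting clients

    def request(self, content_id, client):
        if content_id in self.pending:
            self.pending[content_id].append(client)
            return False           # filtered: no new backhaul fetch
        self.pending[content_id] = [client]
        return True                # first request triggers the fetch

    def complete(self, content_id):
        # Fetch finished: return every waiting client for one multicast.
        return self.pending.pop(content_id, [])
```

Here only the first request for a given id goes upstream; completion hands back the full list of clients to serve in one transmission.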

  14. A Survey of Cache Bypassing Techniques

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-04-01

    Full Text Available With increasing core-count, the cache demand of modern processors has also increased. However, due to strict area/power budgets and the presence of poor data-locality workloads, blindly scaling cache capacity is both infeasible and ineffective. Cache bypassing is a promising technique to increase effective cache capacity without incurring the power/area costs of a larger sized cache. However, injudicious use of cache bypassing can lead to bandwidth congestion and increased miss-rate and hence, intelligent techniques are required to harness its full potential. This paper presents a survey of cache bypassing techniques for CPUs, GPUs and CPU-GPU heterogeneous systems, and for caches designed with SRAM, non-volatile memory (NVM) and die-stacked DRAM. By classifying the techniques based on key parameters, it underscores their differences and similarities. We hope that this paper will provide insights into cache bypassing techniques and associated tradeoffs and will be useful for computer architects, system designers and other researchers.
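One common family of bypassing techniques surveyed in such work bases the decision on a reuse predictor. The sketch below is a generic saturating-counter predictor, not any specific scheme from the survey; the names, threshold, and counter width are assumptions for illustration.

```python
class BypassPredictor:
    """Saturating-counter reuse predictor: address regions whose blocks
    historically see no reuse before eviction are inserted bypassing the
    cache, leaving capacity for blocks that do get reused."""
    def __init__(self, threshold=2, max_count=3):
        self.counts = {}           # region -> saturating reuse counter
        self.threshold = threshold
        self.max_count = max_count

    def on_reuse(self, region):
        # A cached block from this region was hit again: raise confidence.
        c = self.counts.get(region, 0)
        self.counts[region] = min(c + 1, self.max_count)

    def on_eviction_unused(self, region):
        # A block was evicted without reuse: lower confidence.
        c = self.counts.get(region, 0)
        self.counts[region] = max(c - 1, 0)

    def should_bypass(self, region):
        return self.counts.get(region, 0) < self.threshold
```

A region starts out bypassed; observed reuse earns it cache insertion, and dead-on-eviction blocks push it back toward bypassing.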

  15. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  16. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  17. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all... the cache-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache...

  18. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

    The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framework...

  19. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  20. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace the unused segments under interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
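The two-tier layout with admission control can be sketched as a toy model. The class name, the LRU eviction inside each tier, and the admit-on-second-request rule are simplifying assumptions for illustration, not the paper's exact mechanism (which also resizes the tiers dynamically).

```python
from collections import OrderedDict

class TwoTierSegmentCache:
    """Toy two-tier segment cache: tier 1 holds to-be-played segments,
    tier 2 possibly-played ones; a simple admission test keeps out
    segments that have only been requested once."""
    def __init__(self, tier1_cap=2, tier2_cap=2):
        self.tier1 = OrderedDict()   # to-be-played segments, LRU ordered
        self.tier2 = OrderedDict()   # possibly-played segments
        self.caps = (tier1_cap, tier2_cap)
        self.seen = {}               # admission control: request counts

    def request(self, seg):
        if seg in self.tier1 or seg in self.tier2:
            return True                          # hit
        self.seen[seg] = self.seen.get(seg, 0) + 1
        if self.seen[seg] >= 2:                  # admit only re-requested segments
            self._insert(self.tier1, seg, self.caps[0])
        return False

    def demote(self, seg):
        """After playback, move a segment from tier 1 to tier 2."""
        if seg in self.tier1:
            self.tier1.pop(seg)
            self._insert(self.tier2, seg, self.caps[1])

    @staticmethod
    def _insert(tier, seg, cap):
        tier[seg] = True
        tier.move_to_end(seg)
        while len(tier) > cap:
            tier.popitem(last=False)             # evict LRU entry
```

A segment's first request is a filtered miss; a repeat request admits it to tier 1, and after playback it is demoted to tier 2 rather than discarded.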

  1. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, the throughput of User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first-hop link) and the access link (the second-hop link). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop. But it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm by exploiting the relay cache for non-real-time data traffic. The evolved Node B (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of relays. Each relay allocates RBs for relay UEs based on the size of the relay UE’s Transport Block. We also design a relay UE’s ACK feedback mechanism to update the data at the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.
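The cache-driven RB split can be illustrated with a deliberately simple heuristic: the fuller the relay caches are relative to pending UE demand, the fewer RBs the backhaul needs. This is an invented stand-in for the idea, not the TBS algorithm; the function name and the proportional rule are assumptions.

```python
def split_rbs(total_rbs, relay_cache_bytes, relay_demand_bytes):
    """Toy split: allocate backhaul RBs in proportion to the cache
    deficit (demand not yet buffered at the relays); the remainder
    goes to access/direct links."""
    deficit = max(relay_demand_bytes - relay_cache_bytes, 0)
    frac = deficit / relay_demand_bytes if relay_demand_bytes else 0.0
    backhaul = round(total_rbs * frac)
    return backhaul, total_rbs - backhaul
```

An empty relay cache sends every RB to the backhaul; a fully stocked cache frees them all for the access links.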

  2. La honte qui cache la honte qui cache...

    OpenAIRE

    Dussy, Dorothée

    2004-01-01

    Summary: http://www.sigila.msh-paris.fr/la_honte.htm; International audience; This text explores the mechanisms by which Louise, a former nun and retired medical secretary, spent her whole life stringing together reasons for shame, each invariably tied to a violation of her intimacy. Amnesiac, Louise hid one shame behind another, with no memory of her original secret. Until the memory came back to her one morning, on the way to work, ...

  3. Simplifying and speeding the management of intra-node cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  4. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\frac{N}{B}\log_{M/B}\frac{N}{B}+T/B)$ memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.

  5. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    ...servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays in the L2 cache. Experiments show that under high concurrency, our optimizations improve the throughput of TUX by up to 40% and the number of requests serviced at the time of failure by 21%.

  6. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page-group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait to group more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases and has a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for mapping table storage in RAM.
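The threshold-based placement can be sketched as a toy layer. The class name and the migrate-when-a-block's-worth-accumulates rule are illustrative assumptions; a real FTL also manages garbage collection and the mapping tables themselves.

```python
class CachFtlSketch:
    """Toy placement after the CACH-FTL idea: page groups at or above the
    threshold go straight to the block-mapped region (BMR); smaller groups
    are buffered in the page-mapped region (PMR) and migrated once enough
    pages of the same data block have accumulated."""
    def __init__(self, threshold, pages_per_block):
        self.threshold = threshold
        self.pages_per_block = pages_per_block
        self.pmr = {}     # block_id -> set of buffered page numbers
        self.bmr = set()  # block_ids written block-mapped

    def flush(self, block_id, pages):
        if len(pages) >= self.threshold:
            self.bmr.add(block_id)            # cheap whole-group write
            self.pmr.pop(block_id, None)
            return "BMR"
        buf = self.pmr.setdefault(block_id, set())
        buf.update(pages)
        if len(buf) >= self.pages_per_block:  # enough pages gathered: migrate
            self.bmr.add(block_id)
            del self.pmr[block_id]
            return "PMR->BMR"
        return "PMR"
```

Large flushes land in the BMR immediately; small flushes park in the PMR until the block fills, mirroring the cost trade-off the threshold encodes.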

  7. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  8. Cone Algorithm of Spinning Vehicles under Dynamic Coning Environment

    Directory of Open Access Journals (Sweden)

    Shuang-biao Zhang

    2015-01-01

    Full Text Available Because the attitude error of spinning vehicles tends to diverge sharply under a worsening coning environment, in this paper the model of the dynamic coning environment is derived first. Then, through investigation of its effect on the Euler attitude algorithm, taken as equivalent to traditional attitude algorithms, it is found that the attitude error is actually the roll angle error, comprising drifting error and oscillating error, which is induced directly by the dynamic coning environment and further affects the pitch angle and yaw angle through transfer. Based on the definition of the cone frame and cone attitude, a cone algorithm is proposed that uses the rotation relationship to calculate the cone attitude, and the relationship between the cone attitude and the Euler attitude of a spinning vehicle is established. Numerical simulations under different conditions of the dynamic coning environment show that the induced error of the Euler attitude fluctuates with the variation of precession and nutation, especially that of nutation, and that the oscillating frequency of the roll angle error is twice that of the pitch angle error and yaw angle error. In addition, the rotation angle is better suited than the Euler angle gamma to describing the spinning process of vehicles under a coning environment, and the real pitch angle and yaw angle are calculated finally.

  9. Improving Internet Archive Service through Proxy Cache.

    Science.gov (United States)

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext transfer protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  10. A Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    In this paper, we describe a cache-timing attack against the stream cipher HC-256, which is the strong version of eStream winner HC-128. The attack is based on an abstract model of cache timing attacks that can also be used for designing stream ciphers. From the observations made in our analysis,...

  11. Cache as ca$h can

    NARCIS (Netherlands)

    Grootjans, W.J.; Hochstenbach, M.; Hurink, Johann L.; Kern, Walter; Luczak, M.; Puite, Q.; Resing, J.; Spieksma, F.

    2000-01-01

    In this paper we consider the problem of placing proxy caches in a network to get a better performance of the net. We develop a heuristic method to decide in which nodes of the network proxies should be installed and what the sizes of these caches should be. The heuristic attempts to minimize a

  12. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e · log_B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes...

  13. A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-01-01

    Full Text Available While non-volatile memories (NVMs) provide high density and low leakage, they also have low write-endurance. This, along with the write-variation introduced by cache management policies, can lead to a very small cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called the HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with smaller latency and energy. This also reduces the number of writes to the NVM cache, which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from the SPEC2006 suite. We observe that ENLIVE provides higher improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime for two-, four- and eight-core systems, respectively. In addition, it works well for a range of system and algorithm parameters and incurs only small overhead.
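The HotStore idea can be sketched as a write-count promoter. The class name, threshold, and capacity policy are illustrative assumptions; ENLIVE's actual microarchitectural schemes (detection, replacement within the HotStore) are more elaborate.

```python
class HotStoreSketch:
    """Toy model of the write-minimization idea: count writes per cache
    block; once a block crosses a hotness threshold, absorb its writes in
    a small SRAM HotStore instead of the NVM cache, cutting NVM writes."""
    def __init__(self, threshold=3, capacity=8):
        self.threshold = threshold
        self.capacity = capacity
        self.write_counts = {}
        self.hot = set()               # blocks currently in the HotStore
        self.nvm_writes = 0            # wear metric for the NVM array

    def write(self, block):
        if block in self.hot:
            return "SRAM"              # absorbed by the HotStore
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        self.nvm_writes += 1
        if (self.write_counts[block] >= self.threshold
                and len(self.hot) < self.capacity):
            self.hot.add(block)        # promote the frequently written block
        return "NVM"
```

Once a block is promoted, its subsequent writes never touch the NVM counter, which is exactly the lifetime benefit the abstract describes.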

  14. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most-visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users’ location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proven to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.

  15. The dCache scientific storage cloud

    CERN Document Server

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  16. Efficient Mobile Client Caching Supporting Transaction Semantics

    Directory of Open Access Journals (Sweden)

    IlYoung Chung

    2000-05-01

    Full Text Available In mobile client-server database systems, caching of frequently accessed data is an important technique that reduces contention on the narrow-bandwidth wireless channel. As the server in mobile environments may not have any information about the state of its clients' caches (stateless server), using a broadcasting approach to transmit updated data lists to numerous concurrent mobile clients is attractive. In this paper, a caching policy is proposed to maintain cache consistency for mobile computers. The proposed protocol adopts asynchronous (non-periodic) broadcasting as the cache invalidation scheme, and supports transaction semantics in mobile environments. With the asynchronous broadcasting approach, the proposed protocol can improve throughput by reducing the number of aborted transactions with low communication costs. We study the performance of the protocol by means of simulation experiments.
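The client side of invalidation broadcasting can be sketched minimally: the stateless server broadcasts the ids of updated items, and each client drops matching cache entries so later reads fetch fresh copies. The names are illustrative; the paper's protocol additionally handles transaction semantics.

```python
class MobileClientCache:
    """Toy client cache under asynchronous invalidation broadcasts: the
    server need not track client state; it only announces updated keys."""
    def __init__(self):
        self.cache = {}

    def read(self, key, fetch):
        # Serve from cache; on a miss, fetch over the wireless channel.
        if key not in self.cache:
            self.cache[key] = fetch(key)
        return self.cache[key]

    def on_invalidation_broadcast(self, updated_keys):
        # Drop stale entries named in the (non-periodic) broadcast.
        for k in updated_keys:
            self.cache.pop(k, None)
```

A read after an invalidation broadcast re-fetches, while unaffected keys stay cached, which is what keeps the narrow channel lightly loaded.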

  17. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    Full Text Available With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users’ virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver’s write operations by combining capture of the driver’s write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver’s write operations through page faults, which adversely affects the performance of the driver. Based on delaying setting frequently used shadow pages’ write permissions to read-only, this paper proposes an algorithm using a shadow page cache to improve the performance of isolated drivers and carefully studies the relationship between the performance of drivers and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot’s reliability too much.

  18. Using shadow page cache to improve isolated drivers performance.

    Science.gov (United States)

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    Thanks to the reusability afforded by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine to customize their application environment. To prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the driver's performance. By delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  19. Blind signal processing algorithms under DC biased Gaussian noise

    Science.gov (United States)

    Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Distortions caused by a DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems operating under channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. In simulations under channel models with fading and DC bias noise abruptly added to background Gaussian noise, the proposed algorithm converges rapidly even after the onset of the DC bias, proving that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.

  20. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    … cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that will allow both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues would be more automated, hiding most if not all of the performance-engineer expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that it makes performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult …

  1. New distributive web-caching technique for VOD services

    Science.gov (United States)

    Kim, Iksoo; Woo, Yoseop; Hwang, Taejune; Choi, Jintak; Kim, Youngjune

    2002-12-01

    At present, some of the most popular services on the Internet are on-demand services, including VOD, EOD and NOD. The main problems for on-demand service are the excessive load on the server and the insufficiency of network resources. Service providers therefore require powerful, expensive servers, and clients are faced with long end-to-end delays and network congestion. This paper presents a new distributive web-caching technique for fluent VOD services using distributed proxies in a Head-end Network (HNET). The HNET consists of a Switching-Agent (SA) as a control node, some Head-end Nodes (HEN) as proxies, and clients connected to the HENs, where each HEN forms a LAN. Clients request VOD services from the server through a HEN and the SA. The SA operates as the heart of the HNET; all operations of the proposed distributive caching technique are performed under its control. The technique stores parts of a requested video on the corresponding HENs when clients connected to each HEN request an identical video. Clients then access those HENs (proxies) alternately to acquire the video streams, which eventually leads to equally loaded proxies (HENs). We adopt a cache replacement strategy that combines LRU and LFU, removes streams from other HENs before server streams, and replaces the first block of a video last to reduce end-to-end delay.
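
The replacement preference described above can be sketched as an eviction policy (the class and field names are illustrative, not from the paper): blocks fetched from other HENs are evicted before blocks fetched from the server, with LFU and then LRU as tie-breakers.

```python
# A rough sketch of the combined LRU/LFU replacement idea with
# proxy-before-server eviction preference. Structure is invented.
import itertools

class HENCache:
    _clock = itertools.count()   # global logical clock for recency

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}   # block id -> [from_server, freq, last_used]

    def access(self, block_id, from_server):
        if block_id in self.blocks:
            entry = self.blocks[block_id]
            entry[1] += 1                   # LFU counter
            entry[2] = next(self._clock)    # LRU timestamp
            return
        if len(self.blocks) >= self.capacity:
            # evict HEN-sourced blocks first (False < True), then least
            # frequently used, then least recently used
            victim = min(self.blocks,
                         key=lambda b: (self.blocks[b][0],
                                        self.blocks[b][1],
                                        self.blocks[b][2]))
            del self.blocks[victim]
        self.blocks[block_id] = [from_server, 1, next(self._clock)]

cache = HENCache(3)
for block, from_server in [("s1", True), ("p1", False),
                           ("s2", True), ("p2", False)]:
    cache.access(block, from_server)
print(sorted(cache.blocks))   # ['p2', 's1', 's2'] -- proxy block 'p1' went first
```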

  2. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Due to the limited size and excessive cost of cache compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. The conventional replacement algorithms, such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU) and Randomized Policy, may discard important pages just before they are used; moreover, a conventional algorithm cannot be well optimized, since intelligently evicting a page before replacement requires some decision-making. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifiers' prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded NB and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over using NB with all features, using BFS and using IWSS-embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over using the J48 classifier with all features, using IWSS-embedded NB and using BFS, respectively. Meanwhile, IWSS-embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO do, reducing the computation time of NB by 0.1383 and of J48 by 2.998.

  3. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  4. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and the global dynamic routing algorithms, as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution that overcomes this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeds in eradicating the virus better than a purely random intervention under the performant routing algorithm strategies.

  5. Evaluation of Underwater Image Enhancement Algorithms under Different Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Marino Mangeruga

    2018-01-01

    Full Text Available Underwater images usually suffer from poor visibility, lack of contrast and colour casting, mainly due to light absorption and scattering. In the literature, there are many algorithms aimed at enhancing the quality of underwater images through different approaches. Our purpose was to identify an algorithm that performs well in different environmental conditions. We selected some algorithms from the state of the art and employed them to enhance a dataset of images produced in various underwater sites, representing different environmental and illumination conditions. These enhanced images were then evaluated through some quantitative metrics. By analysing the results of these metrics, we tried to understand which of the selected algorithms performed better than the others. Another purpose of our research was to establish whether a quantitative metric is enough to judge the behaviour of an underwater image enhancement algorithm. We aim to demonstrate that, even if the metrics can provide an indicative estimation of image quality, they can lead to inconsistent or erroneous evaluations.

  6. Cache-aware network-on-chip for chip multiprocessors

    Science.gov (United States)

    Tatas, Konstantinos; Kyriacou, Costas; Dekoulis, George; Demetriou, Demetris; Avraam, Costas; Christou, Anastasia

    2009-05-01

    This paper presents the hardware prototype of a Network-on-Chip (NoC) for a chip multiprocessor that provides support for cache coherence, cache prefetching and cache-aware thread scheduling. A NoC with support for these cache-related mechanisms can assist in improving system performance by reducing the cache miss ratio. The presented multi-core system employs the Data-Driven Multithreading (DDM) model of execution. In DDM, thread scheduling is done according to data availability; thus the system is aware of the threads to be executed in the near future. This characteristic of the DDM model allows for cache-aware thread scheduling and cache prefetching. The NoC prototype is a crossbar switch with output buffering that can support a cache-aware 4-node chip multiprocessor. The prototype is built on the Xilinx ML506 board equipped with a Xilinx Virtex-5 FPGA.

  7. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related to locality, lifetime, and static analyzability of access addresses compared to static or heap allocated data. Therefore, caching of stack allocated data benefits from having its own cache. In this paper we present a cache architecture optimized for stack allocated data. This cache is additional to the normal …

  8. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al. at S&P 2015), which is a cache timing attack against … engineered as part of this work. This is the first time CSSAs for the Skylake architecture are reported. Our attacks demonstrate that cryptographic applications in cloud computing environments using key-dependent tables for acceleration are still vulnerable even on recent architectures, including Skylake …

  9. Concurrent Evaluation of Web Cache Replacement and Coherence Strategies

    NARCIS (Netherlands)

    Belloum, A.S.Z.; Hertzberger, L.O.

    2002-01-01

    When studying Web cache replacement strategies, it is often assumed that documents are static. Such an assumption may not be realistic, especially when large-size caches are considered. Because of the strong correlation between the efficiency of the cache replacement strategy and the real state of …

  10. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache trashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis...

  11. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    … the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved …

  12. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) regardless of changes in environmental conditions. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions over longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms at a time in a synchronized manner. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
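
The Incremental Conductance rule that this study ranks best can be stated compactly: at the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V, so the controller compares the incremental conductance with the instantaneous conductance and nudges the operating voltage accordingly. A simplified sketch (variable names and the step size are illustrative, not from the paper):

```python
# One iteration of the Incremental Conductance (IC) MPPT rule.
# v, i: current panel voltage and current; v_prev, i_prev: previous sample.

def ic_step(v, i, v_prev, i_prev, step=0.1):
    """Return the voltage-reference adjustment for one IC iteration."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                       # voltage unchanged: watch the current
        if di == 0:
            return 0.0                # at the MPP, hold
        return step if di > 0 else -step
    if di / dv == -i / v:
        return 0.0                    # incremental == instantaneous conductance: MPP
    if di / dv > -i / v:
        return step                   # left of the MPP: raise the voltage
    return -step                      # right of the MPP: lower the voltage

print(ic_step(10.0, 5.0, 9.0, 5.2))  # 0.1 -- operating left of the MPP, step up
```

In practice the adjustment is applied each sampling period to the DC-DC converter's voltage reference, with a small tolerance band instead of exact equality tests.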

  13. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Full Text Available Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit-cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as flash memory stores more bits per cell, its performance and reliability degrade substantially. To address this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the cache's utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with high probability, while infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme.
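
The probability-based admission idea is simple enough to sketch (class name, admission probability, and the LRU eviction choice are illustrative assumptions, not details from the paper): each write to a page not yet cached is admitted only with probability p, so pages written many times almost surely enter the cache while one-off cold writes mostly bypass it.

```python
# A minimal sketch of probability-based cache admission in the spirit of
# ProCache. The eviction policy (LRU) and parameters are assumptions.
import random
from collections import OrderedDict

class ProbabilityCache:
    def __init__(self, capacity, admit_prob=0.25, rng=random.random):
        self.capacity = capacity
        self.admit_prob = admit_prob
        self.rng = rng
        self.cache = OrderedDict()    # page id -> data, in LRU order

    def write(self, page, data):
        """Return True if the write was absorbed by the NVM cache."""
        if page in self.cache:            # already cached: update in place
            self.cache.move_to_end(page)
            self.cache[page] = data
            return True
        if self.rng() >= self.admit_prob:
            return False                  # bypass the cache (write to flash)
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)    # evict the LRU page
        self.cache[page] = data
        return True

cache = ProbabilityCache(2, admit_prob=1.0)   # always admit, for illustration
for page, data in [("p1", "a"), ("p2", "b"), ("p3", "c")]:
    cache.write(page, data)
print(list(cache.cache))   # ['p2', 'p3'] -- 'p1' was the LRU victim
```

After k writes to the same cold page, the chance it is still outside the cache is (1-p)^k, which is the sense in which hot data "eventually enters with a high probability."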

  14. Corvid caching : insights from a cognitive model

    NARCIS (Netherlands)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K.

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our …

  15. Integration of recommender system for Web cache management

    Directory of Open Access Journals (Sweden)

    Pattarasinee Bhattarakosol

    2013-06-01

    Full Text Available Web caching is widely recognised as an effective technique that improves the quality of service over the Internet, such as reduction of user latency and network bandwidth usage. However, this method has limitations due to hardware and management policies of caches. The Behaviour-Based Cache Management Model (BBCMM is therefore proposed as an alternative caching architecture model with the integration of a recommender system. This architecture is a cache grouping mechanism where browsing characteristics are applied to improve the performance of the Internet services. The results indicate that the byte hit rate of the new architecture increases by more than 18% and the delay measurement drops by more than 56%. In addition, a theoretical comparison between the proposed model and the traditional cooperative caching models shows a performance improvement of the proposed model in the cache system.

  16. Multi-level Hybrid Cache: Impact and Feasibility

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhe [ORNL; Kim, Youngjae [ORNL; Ma, Xiaosong [ORNL; Shipman, Galen M [ORNL; Zhou, Yuanyuan [University of California, San Diego

    2012-02-01

    Storage class memories, including flash, have been attracting much attention as promising candidates for today's enterprise storage systems. In particular, since the cost and performance characteristics of flash are in between those of DRAM and hard disks, many studies have considered it as a secondary caching layer underneath the main memory cache. However, there has been a lack of studies of the correlation and interdependency between DRAM and flash caching. This paper views this problem as a special form of multi-level caching and tries to understand the benefits of this multi-level hybrid cache hierarchy. We reveal that significant costs could be saved by using flash to reduce the size of the DRAM cache while maintaining the same performance. We also discuss design challenges of using flash in the caching hierarchy and present potential solutions.
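
A toy two-level lookup (the structure and the demote-on-eviction policy are illustrative assumptions, not the paper's design) shows the interdependency the paper studies: misses in the small DRAM tier fall through to the larger flash tier, and DRAM victims are demoted into flash rather than discarded.

```python
# A toy DRAM + flash two-level cache with LRU per tier and demotion of
# DRAM victims into flash. All names are invented for illustration.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, dram_size, flash_size):
        self.dram = OrderedDict()
        self.flash = OrderedDict()
        self.dram_size, self.flash_size = dram_size, flash_size

    def _insert(self, tier, size, key, value, demote=None):
        tier[key] = value
        tier.move_to_end(key)
        if len(tier) > size:
            old_key, old_val = tier.popitem(last=False)   # LRU victim
            if demote is not None:
                demote(old_key, old_val)

    def get(self, key, load_from_disk):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.flash.pop(key, None)       # promote on a flash hit
        if value is None:
            value = load_from_disk(key)         # miss in both tiers
        self._insert(self.dram, self.dram_size, key, value,
                     demote=lambda k, v: self._insert(self.flash,
                                                      self.flash_size, k, v))
        return value

loads = []
def load(key):
    loads.append(key)
    return key.upper()

c = TwoLevelCache(dram_size=1, flash_size=2)
c.get("a", load)
c.get("b", load)                 # "a" is demoted from DRAM into flash
print(c.get("a", load), loads)   # A ['a', 'b'] -- served from flash, no reload
```

Shrinking `dram_size` while growing `flash_size` in such a model is one way to explore the cost/performance trade-off the paper describes.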

  17. A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network

    Directory of Open Access Journals (Sweden)

    Yang Li

    2013-01-01

    Full Text Available Content caching at the base stations of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches to store contents in a coordinated manner, in order to increase the overall mobile network capacity and support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a RAN. Request filtration makes the best use of the limited bandwidth and in turn ensures the good performance of the coordinated caching. Moreover, the storage at mobile devices is also used to further reduce the backhaul traffic and improve the users’ experience. In addition, we derive the optimal cache division with the aim of reducing the average latency perceived by users. The simulation results show that the proposed scheme outperforms existing algorithms.

  18. Algorithms

    Indian Academy of Sciences (India)

    … have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  19. Accelerating Convolutional Neural Networks for Continuous Mobile Vision via Cache Reuse

    OpenAIRE

    Xu, Mengwei; Liu, Xuanzhe; Liu, Yunxin; Lin, Felix Xiaozhu

    2017-01-01

    Convolutional Neural Networks (CNNs) are the state-of-the-art algorithms in many mobile vision fields and are applied in many vision tasks such as face detection and augmented reality on mobile devices. Though they benefit from the high accuracy achieved via deep CNN models, today's commercial mobile devices are often short on processing capacity and battery to continuously carry out such CNN-driven vision applications. In this paper, we propose a transparent caching mechanism, named CNNCache, …

  20. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm whose performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by the threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes at a low image resolution. Through a series of micro-benchmarks and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  1. Seedling Establishment of Coast Live Oak in Relation to Seed Caching by Jays

    Science.gov (United States)

    Joe R. McBride; Ed Norberg; Sheauchi Cheng; Ahmad Mossadegh

    1991-01-01

    The purpose of this study was to simulate the caching of acorns by jays and rodents to see if less costly procedures could be developed for the establishment of coast live oak (Quercus agrifolia). Four treatments [(1) random - single acorn cache, (2) regular - single acorn cache, (3) regular - 5 acorn cache, (4) regular - 10 acorn cache] were planted...

  2. EM algorithm for one-shot device testing with competing risks under exponential distribution

    International Nuclear Information System (INIS)

    Balakrishnan, N.; So, H.Y.; Ling, M.H.

    2015-01-01

    This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm, and the obtained results are compared with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to the ED01 clinical data to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • The EM algorithm is developed for the determination of the MLEs. • Estimates of lifetime under normal operating conditions are presented. • The EM algorithm improves the convergence rate.

  3. ENHANCE PERFORMANCE OF WEB PROXY CACHE CLUSTER USING CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Najat O. Alsaiari

    2013-12-01

    Full Text Available Web caching is a crucial technology on the Internet because it represents an effective means of reducing bandwidth demands, improving web server availability and reducing network latencies. However, a Web cache cluster, which is a potent solution to enhance a web cache system’s capability, still has limited capacity and cannot handle a tremendously high workload. Maximizing resource utilization and system capability is a very important problem in Web cache clusters, and it cannot be solved efficiently by merely using load balancing strategies. Thus, with the advent of cloud computing, we can use cloud-based proxies to achieve outstanding performance and higher resource efficiency compared to traditional Web proxy cache clusters. In this paper, we propose an architecture for a cloud-based Web proxy cache cluster (CBWPCC) and test the effectiveness of the proposed architecture, compared with the traditional one, in terms of response time and resource utilization using the CloudSim tool.

  4. Algorithmic mechanisms for reliable crowdsourcing computation under collusion.

    Science.gov (United States)

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel

    2015-01-01

    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language … [Figure 2: Symbols used in the flowchart language to represent Assignment (e.g. x := sin(theta)), Read (e.g. Read A, B, C) and Print (e.g. Print x, y, z).]

  6. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  7. Greatly improved cache update times for conditions data with Frontier/Squid

    Science.gov (United States)

    Dykstra, Dave; Lueking, Lee

    2010-04-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
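
The If-Modified-Since mechanism the paper relies on can be sketched from the server's point of view (function and variable names are illustrative, not Frontier's actual code): the server attaches a Last-Modified timestamp to each reply, and a later request that replays this timestamp gets a tiny 304 Not Modified instead of the full payload whenever the tracked table-modification time has not advanced.

```python
# A sketch of HTTP If-Modified-Since / Last-Modified handling in the
# style the Frontier/Squid system uses. Names are invented.
from email.utils import format_datetime, parsedate_to_datetime
from datetime import datetime, timezone

def handle_query(table_mtime, payload, if_modified_since=None):
    """Return (status, body, last_modified_header) for one cached query."""
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if table_mtime <= client_time:
            return 304, None, if_modified_since     # cached copy still valid
    return 200, payload, format_datetime(table_mtime, usegmt=True)

mtime = datetime(2009, 5, 1, 12, 0, tzinfo=timezone.utc)
status, body, stamp = handle_query(mtime, b"conditions blob")
print(status)     # 200 -- full payload plus a Last-Modified header
status, body, _ = handle_query(mtime, b"conditions blob",
                               if_modified_since=stamp)
print(status)     # 304 -- nothing resent, the Squid/client cache is reused
```

The PL/SQL side mentioned in the abstract supplies `table_mtime` here, since Oracle does not track table modification times natively.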

  8. Greatly improved cache update times for conditions data with Frontier/Squid

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave; Lueking, Lee; /Fermilab

    2009-05-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.

  9. Algorithm for Extracting Digital Terrain Models under Forest Canopy from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Almasi S. Maguya

    2014-07-01

    Full Text Available Extracting digital elevation models (DTMs) from LiDAR data under forest canopy is a challenging task, because the forest canopy blocks a portion of the LiDAR pulses from reaching the ground and thereby introduces gaps in the data. This paper presents an algorithm for DTM extraction from LiDAR data under forest canopy. The algorithm copes with the challenge of low data density by generating a series of coarse DTMs from the few ground points available and using trend surfaces to interpolate missing elevation values in the vicinity of those points. This process generates a cloud of ground points from which the final DTM is generated. The algorithm has been compared to two other algorithms proposed in the literature on three test sites with varying degrees of difficulty. Results show that the algorithm presented in this paper is more tolerant of low data density than the other two algorithms. The results further show that, with decreasing point density, the differences between the three algorithms increased dramatically, from about 0.5 m to over 10 m.
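The trend-surface step can be illustrated with a first-order (planar) least-squares fit to sparse ground points; this is a generic sketch, not the paper's algorithm:

```python
import numpy as np

def planar_trend_surface(x, y, z):
    """Least-squares fit of a first-order trend surface
    z ≈ a + b*x + c*y to sparse ground points."""
    design = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs

def interpolate(coeffs, x, y):
    """Predict an elevation at (x, y) from the fitted surface."""
    a, b, c = coeffs
    return a + b * x + c * y

# Sparse ground returns lying exactly on the plane z = 1 + 2x + 3y
x = np.array([0.0, 1.0, 0.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
z = 1 + 2 * x + 3 * y
coeffs = planar_trend_surface(x, y, z)
# interpolate(coeffs, 0.5, 0.5) ≈ 3.5
```

Higher-order surfaces follow the same pattern with extra polynomial columns in the design matrix.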

  10. Clark’s Nutcrackers (Nucifraga columbiana) Flexibly Adapt Caching Behaviour to a Cooperative Context

    Directory of Open Access Journals (Sweden)

    Dawson Clary

    2016-10-01

    Full Text Available Corvids recognize when their caches are at risk of being stolen by others and have developed strategies to protect these caches from pilferage. For instance, Clark’s nutcrackers will suppress the number of caches they make if being observed by a potential thief. However, cache protection has most often been studied using competitive contexts, so it is unclear whether corvids can adjust their caching in beneficial ways to accommodate non-competitive situations. Therefore, we examined whether Clark’s nutcrackers, a non-social corvid, would flexibly adapt their caching behaviours to a cooperative context. To do so, birds were given a caching task during which caches made by one individual were reciprocally exchanged for the caches of a partner bird over repeated trials. In this scenario, if caching behaviours can be flexibly deployed, then the birds should recognize the cooperative nature of the task and maintain or increase caching levels over time. However, if cache protection strategies are applied independent of social context and simply in response to cache theft, then cache suppression should occur. In the current experiment, we found that the birds maintained caching throughout the experiment. We report that males increased caching in response to a manipulation in which caches were artificially added, suggesting the birds could adapt to the cooperative nature of the task. Additionally, we show that caching decisions were not solely due to motivational factors, instead showing an additional influence attributed to the behaviour of the partner bird.

  11. A survey of checkpointing algorithms for parallel and distributed ...

    Indian Academy of Sciences (India)

    Checkpointing algorithms for shared-memory systems primarily extend cache coherence protocols to maintain a consistent memory. All of them assume that the main memory is safe for storing the context. Recently, algorithms have been published for distributed shared memory systems, which extend the cache coherence protocols used ...

  12. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes which will get you started with Varnish Cache. Practical examples will help you to get set up quickly and easily. This book is aimed at system administrators and web developers who need to scale websites without throwing money at a large and costly infrastructure. It is assumed that you have some knowledge of the HTTP protocol, of how browsers and servers communicate with each other, and of basic Linux systems.

  13. Compiler-directed cache management in multiprocessors

    Science.gov (United States)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  14. Best practice for caching of single-path code

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Cilku, Bekim; Prokesch, Daniel

    2017-01-01

    Single-path code has some unique properties that make it interesting to explore different caching and prefetching alternatives for the stream of instructions. In this paper, we explore different cache organizations and how they perform with single-path code....

  15. Best Practice for Caching of Single-Path Code

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Cilku, Bekim; Prokesch, Daniel

    2017-01-01

    Single-path code has some unique properties that make it interesting to explore different caching and prefetching alternatives for the stream of instructions. In this paper, we explore different cache organizations and how they perform with single-path code....

  16. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    primitives. In this paper, we give a cache timing cryptanalysis of stream ciphers using word-based linear feedback shift registers (LFSRs), such as Snow, Sober, Turing, or Sosemanuk. Fast implementations of such ciphers use tables that can be the target for a cache timing attack. Assuming that a small number...

  17. Enable Cache Effect on Forwarding Table in Metro-Ethernet

    Science.gov (United States)

    Sun, Xiaocui; Wang, Zhijun

    Broadcast based Address Resolution Protocol (ARP) is a major challenge for deploying Ethernet in Metropolitan Area Networks (MAN). This paper proposes to enable Cache effect on Forwarding Table (CFT) in Metro Ethernet. CFT can reduce numerous broadcast messages by solving the address through cached entries. The simulation results show that the proposed scheme can significantly decrease communication messages for address resolution in Metro Ethernet.
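The cache effect the abstract describes can be illustrated with a toy forwarding-table cache that answers repeat resolutions locally and broadcasts only on a miss (names and data are hypothetical):

```python
class ForwardingTableCache:
    """Toy model of enabling a cache effect on the forwarding table:
    resolve addresses from cached entries, broadcast only on a miss."""

    def __init__(self):
        self.entries = {}      # IP -> MAC binding learned so far
        self.broadcasts = 0    # ARP broadcasts actually sent

    def resolve(self, ip, network):
        if ip in self.entries:         # hit: answered locally, no broadcast
            return self.entries[ip]
        self.broadcasts += 1           # miss: broadcast an ARP query
        mac = network[ip]              # stand-in for the ARP reply
        self.entries[ip] = mac         # cache the learned binding
        return mac

network = {"10.0.0.1": "aa:bb", "10.0.0.2": "cc:dd"}
cft = ForwardingTableCache()
for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.1"]:
    cft.resolve(ip, network)
# four resolutions, but only two broadcasts
```

In a metro-scale Ethernet the saved broadcasts multiply across every switch in the broadcast domain, which is the effect the simulations measure.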

  18. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans have been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  19. Smart Caching for Efficient Information Sharing in Distributed Information Systems

    Science.gov (United States)

    2008-09-01

    Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy (1997), “Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot...Danzig, Chuck Neerdaels, Michael Schwartz and Kurt Worrell (1996), “A Hierarchical Internet Object Cache,” in USENIX Proceedings, 1996.

  20. Cache-mesh, a Dynamics Data Structure for Performance Optimization

    DEFF Research Database (Denmark)

    Nguyen, Tuan T.; Dahl, Vedrana Andersen; Bærentzen, J. Andreas

    2017-01-01

    This paper proposes the cache-mesh, a dynamic mesh data structure in 3D that allows modifications of stored topological relations effortlessly. The cache-mesh can adapt to arbitrary problems and provide fast retrieval to the most-referred-to topological relations. This adaptation requires trivial...

  1. INTELLIGENT CACHE FARMING ARCHITECTURE WITH THE RECOMMENDER SYSTEM

    Directory of Open Access Journals (Sweden)

    S. HIRANPONGSIN

    2009-06-01

    Full Text Available The Quality of Service (QoS) guaranteed by Internet Service Providers (ISPs) is an important factor in users’ satisfaction with the Internet. Web proxy caching has been implemented to support this objective and also to support the security procedures of organizations. However, success in guaranteeing the QoS of each ISP depends on the cache size and an efficient caching policy. This paper proposes a new architecture of cache farming that uses the recommender-system concept to manage users’ requirements. This solution helps reduce retrieval time and also increases the hit rate as the number of users grows, without expanding the size of the caches in the farm.

  2. Experimental Results of Rover-Based Coring and Caching

    Science.gov (United States)

    Backes, Paul G.; Younse, Paulo; DiCicco, Matthew; Hudson, Nicolas; Collins, Curtis; Allwood, Abigail; Paolini, Robert; Male, Cason; Ma, Jeremy; Steele, Andrew; hide

    2011-01-01

    Experimental results are presented for experiments performed using a prototype rover-based sample coring and caching system. The system consists of a rotary percussive coring tool on a five degree-of-freedom manipulator arm mounted on a FIDO-class rover and a sample caching subsystem mounted on the rover. Coring and caching experiments were performed in a laboratory setting and in a field test at Mono Lake, California. Rock abrasion experiments using an abrading bit on the coring tool were also performed. The experiments indicate that the sample acquisition and caching architecture is viable for use in a 2018 timeframe Mars caching mission and that rock abrasion using an abrading bit may be feasible in place of a dedicated rock abrasion tool.

  3. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention, increase execution performance, and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  4. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    Science.gov (United States)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of iteration process to reduce corruption from experimental noise to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.

  5. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations, the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators, assuming the binary point to be at some point other ...

  7. A formally verified algorithm for interactive consistency under a hybrid fault model

    Science.gov (United States)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
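The stated fault-tolerance condition can be captured in a one-line checker (a restatement of the bound quoted in the abstract, not the OM algorithm itself):

```python
def om_hybrid_tolerates(n, m, a, s, b):
    """Check the condition quoted in the abstract: n channels running
    m+1 rounds withstand a asymmetric, s symmetric and b benign faults
    provided n > 2a + 2s + b + m and m >= a."""
    return n > 2 * a + 2 * s + b + m and m >= a

# With s = b = 0 and m = a this recovers the classical n > 3m bound
# for m Byzantine faults:
ok = om_hybrid_tolerates(n=4, m=1, a=1, s=0, b=0)
```

Benign faults are cheapest here: each costs only one unit of replication, versus two for symmetric and asymmetric faults plus the extra round requirement for the latter.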

  8. MPPT Control Strategy of PV Based on Improved Shuffled Frog Leaping Algorithm under Complex Environments

    Directory of Open Access Journals (Sweden)

    Xiaohua Nie

    2017-01-01

    Full Text Available This work presents a maximum power point tracking (MPPT) method based on a particle swarm optimization (PSO)-improved shuffled frog leaping algorithm (PSFLA). Swarm intelligence algorithms (SIAs) have vast computing ability, and MPPT control strategies for PV arrays based on SIAs are attracting considerable interest. Firstly, the PSFLA was proposed by adding the inertia weight factor w of PSO to the standard SFLA to overcome the defects of falling into partial optimal solutions and slow convergence speed; the proposed PSFLA increases the calculation speed and global search capability of MPPT. The PSFLA was then applied to MPPT to solve the multiple-extreme-point problems of nonlinear optimization. Secondly, for MPPT under complex environments, a new MPPT strategy combining the PSFLA with recursive least squares filtering was proposed to overcome the effect of measurement noise on MPPT accuracy. Finally, simulation comparisons between the PSFLA and SFLA algorithms were developed, and an experiment comparing the PSFLA and PSO algorithms under a complex environment was executed. The simulation and experimental results indicate that the proposed MPPT control strategy based on the PSFLA can suppress measurement noise effects effectively and improve PV array efficiency.

  9. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  10. Interleaved sectored caches: reconciling low tag volume and low miss ratio

    OpenAIRE

    Seznec , André

    1993-01-01

    Sectored caches have been used for many years to reduce the tag volume needed in a cache. In a sectored cache, a single address tag is associated with a sector consisting of several cache lines, while validity, dirty and coherency tags are associated with each of the inner cache lines. Using a sectored cache is a design trade-off between the low volume of cache tags allowed by a large line size and the low memory traffic induced by a small line size. This technique has been used i...
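The tag-volume trade-off can be made concrete with a back-of-the-envelope calculation for a direct-mapped cache (illustrative parameters, not from the paper):

```python
from math import log2

def tag_bits(cache_bytes, line_bytes, lines_per_sector, addr_bits=32):
    """Total address-tag storage (in bits) for a direct-mapped cache.

    One address tag covers a sector of `lines_per_sector` lines; a
    conventional cache is the special case lines_per_sector == 1.
    Per-line valid/dirty/coherency bits are deliberately ignored.
    """
    sector_bytes = line_bytes * lines_per_sector
    n_sectors = cache_bytes // sector_bytes
    offset_bits = int(log2(sector_bytes))
    index_bits = int(log2(n_sectors))
    return n_sectors * (addr_bits - offset_bits - index_bits)

conventional = tag_bits(64 * 1024, 32, 1)  # one tag per 32-byte line
sectored = tag_bits(64 * 1024, 32, 4)      # one tag per 4-line sector
# the sectored layout needs 4x fewer tag bits in this configuration
```

The saving comes purely from amortizing one tag over several lines, while memory traffic per miss is still governed by the small inner line size.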

  11. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in the users' logs. The static cache holds the highest-ranked queries of users, i.e. the most popular queries; the dynamic cache serves as an auxiliary to optimize the distribution of the cached data. We also propose a distribution strategy for the cache data. The experiments show that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages over other cache structures.
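A minimal sketch of such a static-plus-dynamic structure, assuming a popularity-ranked static level and an LRU dynamic level (hypothetical API, not the paper's implementation):

```python
from collections import Counter, OrderedDict

class TwoLevelCache:
    """Static level preloaded with the top-ranked queries from a log,
    plus a small LRU dynamic level as an auxiliary."""

    def __init__(self, query_log, static_size, dynamic_size):
        top = Counter(query_log).most_common(static_size)
        self.static = {q for q, _ in top}   # fixed for the whole run
        self.dynamic = OrderedDict()        # LRU order: oldest first
        self.dynamic_size = dynamic_size

    def lookup(self, query):
        """Return True on a cache hit at either level."""
        if query in self.static:
            return True
        if query in self.dynamic:
            self.dynamic.move_to_end(query)
            return True
        self.dynamic[query] = True          # miss: admit into dynamic level
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)
        return False

log = ["grid", "grid", "grid", "cache", "cache", "lhc"]
cache = TwoLevelCache(log, static_size=1, dynamic_size=2)
```

The static level absorbs the stable head of the query distribution, leaving the small LRU level to track the churning tail.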

  12. Cache-Oblivious R-trees

    DEFF Research Database (Denmark)

    Arge, Lars; de Berg, Mark; Haverkort, Herman

    2009-01-01

    -oblivious R-tree with provable performance guarantees. If no point in the plane is contained in B or more rectangles in S, the structure answers a rectangle query using O(√(N/B) + T/B) memory transfers and a point query using O((N/B)^ε) memory transfers for any ε > 0, where B is the block size of memory...... transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure....

  13. Cache-Oblivious R-trees

    DEFF Research Database (Denmark)

    Arge, Lars; de Berg, Mark; Haverkort, Herman

    2005-01-01

    -oblivious R-tree with provable performance guarantees. If no point in the plane is contained in B or more rectangles in S, the structure answers a rectangle query using O(√(N/B) + T/B) memory transfers and a point query using O((N/B)^ε) memory transfers for any ε > 0, where B is the block size of memory...... transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure....

  14. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number...... of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  15. Método y sistema de modelado de memoria cache [Method and system for cache memory modeling]

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2010-01-01

    A method for modeling a data cache memory of a target processor, in order to simulate the behavior of that data cache during the execution of software code on a platform comprising the target processor, where the simulation is carried out on a native platform whose processor differs from the target processor containing the data cache to be modeled, and where the modeling is performed by execution on the native platform...

  16. Genetic Algorithm for Multiuser Discrete Network Design Problem under Demand Uncertainty

    Directory of Open Access Journals (Sweden)

    Wu Juan

    2012-01-01

    Full Text Available Discrete network design is an important part of urban transportation planning. The purpose of this paper is to present a bilevel model for discrete network design. The upper-level model designs a discrete network by minimizing the total travel time under a stochastic demand. In the lower-level model, demands are assigned to the network through a multiuser traffic equilibrium assignment. Generally, the discrete network affects the path selections of demands, while the results of the multiuser traffic equilibrium assignment in turn require reconstructing the discrete network. An iterative approach combining an improved genetic algorithm and the Frank-Wolfe algorithm is used to solve the bilevel model. The numerical results on the Nguyen-Dupuis network show that the model and the related algorithms are effective for discrete network design.

  17. Enhanced Grey Wolf Optimizer based MPPT Algorithm of PV system under Partial Shaded Condition

    Directory of Open Access Journals (Sweden)

    Santhan Kumar Cherukuri

    2017-11-01

    Full Text Available Partial shading is one of the adverse phenomena that affect the power output of photovoltaic (PV) systems through inaccurate tracking of the global maximum power point. Conventional maximum power point tracking (MPPT) techniques such as Perturb and Observe, Incremental Conductance and Hill Climbing can track the maximum power point effectively under uniform shading, but fail under partial shading. An attractive solution under partial shading is the application of meta-heuristic algorithms to operate at the global maximum power point. Hence, in this paper an Enhanced Grey Wolf Optimizer (EGWO) based maximum power point tracking algorithm is proposed to track the global maximum power point of a PV system under partial shading. A mathematical model of the PV system is developed under partial shading using the single-diode model, and EGWO is applied to track the global maximum power point. The proposed method is programmed in the MATLAB environment, and simulations are carried out on 4S and 2S2P PV configurations for dynamically changing shading patterns. The results of the proposed method are analyzed and compared with the GWO and PSO algorithms. It is observed that the proposed method tracks the global maximum power point more accurately and in less computation time than the other methods. How to Cite This Article: Kumar, C.H.S. and Rao, R.S. (2017) Enhanced Grey Wolf Optimizer Based MPPT Algorithm of PV System Under Partial Shaded Condition. Int. Journal of Renewable Energy Development, 6(3), 203-212. https://doi.org/10.14710/ijred.6.3.203-212

  18. The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications

    Science.gov (United States)

    2015-10-16

    The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications. Yossef Oren, Vasileios P. Kemerlis, Simha Sethumadhavan, Angelos D... Keywords: side-channel attacks; cache-timing attacks; JavaScript-based cache attacks; covert... The attacker, as described in more detail in Section 3, executes a JavaScript-based cache attack, which lets the attacker track accesses to the victim’s last-level cache over

  19. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data result in the requirement of high scalability. Our current priority is to move towards a distributed and properly load balanced set of services based on containers. The aim of this project is to implement the generic caching mechanism applicable to our services and chosen architecture. The project will at first require research about the different aspects of distributed caching (persistence, no gc-caching, cache consistency etc.) and the available technologies followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  20. Co-Designed Cache Coherency Architecture for Embedded Multicore Systems

    OpenAIRE

    Marandola, Jussara; Cudennec, Loïc

    2011-01-01

    International audience; One of the key challenges in chip multi-processing is to provide a programming model that manages cache coherency in a transparent and efficient way. A large number of applications designed for embedded systems are known to read and write data following memory access patterns. Memory access patterns can be used to optimize cache consistency by prefetching data and reducing the number of memory transactions. In this paper, we present the round-robin method applied to ba...

  1. Effects of Cache Valley Particulate Matter on Human Lung Cells

    OpenAIRE

    Watterson, Todd L.

    2012-01-01

    During wintertime temperature inversion episodes the concentrations of particulate air pollution, also defined as particulate matter (PM), in Utah’s Cache Valley have often been highest in the nation, with concentrations surpassing more populated and industrial areas. This has attracted much local and national attention to the area and its pollution. The Cache Valley has recently been declared to be in non-attainment of provisions of Federal law bringing to bear Federal regulatory attention a...

  2. Improved cache performance in Monte Carlo transport calculations using energy banding

    Science.gov (United States)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations that depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
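The core idea — grouping lookups by energy band so each region of the large table stays cache-resident while it is processed, instead of being hit at random — can be sketched as follows (toy table layout, not the paper's code):

```python
from bisect import bisect_right

def banded_lookup(particle_energies, xs_table, band_edges):
    """Group cross-section lookups by energy band, then serve each
    band in one pass so its slice of the table is reused while hot."""
    bands = {}
    for i, energy in enumerate(particle_energies):
        bands.setdefault(bisect_right(band_edges, energy), []).append(i)
    results = [None] * len(particle_energies)
    for band in sorted(bands):          # one pass per band: maximal reuse
        for i in bands[band]:
            results[i] = xs_table[particle_energies[i]]
    return results

# Toy table keyed directly by energy (a stand-in for the real lookup)
xs = {0.1: 5.0, 5.0: 2.0, 14.0: 1.0}
res = banded_lookup([14.0, 0.1, 5.0, 0.1], xs, band_edges=[1.0, 10.0])
```

The band width is the tuning knob: narrow bands fit in cache but need more passes over the particle population, wide bands do the opposite.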

  3. Algorithms

    Indian Academy of Sciences (India)

    It must be noted that if the input assertion is not satisfied at this point, then any output assertion holds due to the classical implication operator. ..... on our intuitive knowledge about the underlying theory. The above processes can be formalised in a logical framework without relying on the intuitive deductions we have used.

  4. Word-length algorithm for language identification of under-resourced languages

    Directory of Open Access Journals (Sweden)

    Ali Selamat

    2016-10-01

    Full Text Available Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world’s languages. The primary objective of this study is to propose a lexicon-based algorithm which is able to perform language identification using minimal training data. Because language identification is often the first step in many natural language processing tasks, it is necessary to explore techniques that will perform language identification in the shortest possible time. Hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good accuracy of language identification at the document level and at the sentence level on the available dataset. The improved algorithm also showed a significant improvement in run-time performance compared with the spelling-checker approach.
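A word-length classifier in this spirit might look like the following; this is an illustrative reconstruction with toy data, not the authors' algorithm:

```python
from collections import Counter

def length_profile(text, max_len=12):
    """Normalized distribution of word lengths in a text (lengths are
    capped at max_len so rare very long words share one bin)."""
    lengths = Counter(min(len(w), max_len) for w in text.split())
    total = sum(lengths.values())
    return {k: v / total for k, v in lengths.items()}

def identify(text, profiles):
    """Pick the training language whose profile is closest (L1 distance)."""
    p = length_profile(text)
    def dist(q):
        return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))
    return min(profiles, key=lambda lang: dist(profiles[lang]))

# Tiny, hypothetical training samples
profiles = {
    "en": length_profile("the cat sat on the mat"),
    "de": length_profile("Donaudampfschifffahrt und Rechtsschutzversicherung"),
}
guess = identify("a dog ran to me", profiles)   # -> "en"
```

Because the feature is just a length histogram, a handful of sentences per language suffices as "training data", which is exactly the appeal for under-resourced languages.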

  5. Performance Analysis of Maximum Power Point Tracking Algorithms Under Varying Irradiation

    Directory of Open Access Journals (Sweden)

    Bhukya Krishna Naick

    2017-03-01

    Full Text Available Photovoltaic (PV) system is one of the reliable alternative sources of energy and its contribution to the energy sector is growing rapidly. The performance of a PV system depends upon the solar insolation, which varies throughout the day, season, and year. The biggest challenge is to obtain the maximum power from a PV array at varying insolation levels. The maximum power point tracking (MPPT) controller, in association with a tracking algorithm, acts as the principal element in driving the PV system at the maximum power point (MPP). In this paper, a simulation model has been developed and the results were compared for perturb and observe, incremental conductance, extremum seeking control and fuzzy logic controller based MPPT algorithms at different irradiation levels on a 10 kW PV array. The results obtained were analysed in terms of convergence rate and efficiency in tracking the MPP. Keywords: Photovoltaic system, MPPT algorithms, perturb and observe, incremental conductance, scalar gradient extremum seeking control, fuzzy logic controller. Article History: Received 3rd Oct 2016; Received in revised form 6th January 2017; Accepted 10th February 2017; Available online. How to Cite This Article: Naick, B. K., Chatterjee, T. K. & Chatterjee, K. (2017) Performance Analysis of Maximum Power Point Tracking Algorithms Under Varying Irradiation. Int. Journal of Renewable Energy Development, 6(1), 65-74. http://dx.doi.org/10.14710/ijred.6.1.65-74
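
    Of the algorithms compared, perturb and observe is the simplest to sketch; the toy single-peak power curve and step size below are assumptions for illustration, not the paper's simulation model:

```python
def pv_power(v):
    """Toy single-peak P-V curve with its maximum at v = 17 V."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.5, iterations=100):
    """Nudge the operating voltage each step; if power drops,
    reverse the direction of the perturbation."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(12.0)
```

    The characteristic P&O behavior is visible here: the tracker converges to the MPP and then oscillates one step around it, which is exactly the steady-state ripple the more elaborate algorithms in the paper try to avoid.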

  6. Greatly improved cache update times for conditions data with Frontier/Squid

    CERN Document Server

    Dykstra, Dave

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally trac...
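
    The conditional-request mechanism described here is generic HTTP; a minimal sketch (with timestamps simplified to integers, and names that are assumptions rather than Frontier's code) is:

```python
def serve(request, db_last_modified, cached_payload):
    """Return (status, body). A client revalidating a cached copy sends
    If-Modified-Since; if the data has not changed since that time, the
    server answers 304 and no payload travels over the network."""
    ims = request.get("If-Modified-Since")
    if ims is not None and db_last_modified <= ims:
        return 304, None               # cache is still fresh
    return 200, cached_payload         # send full data + new Last-Modified

# timestamps as plain integers for simplicity
status1, _ = serve({"If-Modified-Since": 1000},
                   db_last_modified=900, cached_payload=b"rows")
status2, body2 = serve({"If-Modified-Since": 1000},
                       db_last_modified=1500, cached_payload=b"rows")
```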

  7. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Full Text Available Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions.

  8. Megafloods and Clovis cache at Wenatchee, Washington

    Science.gov (United States)

    Waitt, Richard B.

    2016-05-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westernmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia for centuries more, until it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  9. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multi-petabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  10. Novel stable structure of Li3PS4 predicted by evolutionary algorithm under high-pressure

    Directory of Open Access Journals (Sweden)

    S. Iikubo

    2018-01-01

    Full Text Available By combining theoretical predictions and in-situ X-ray diffraction under high pressure, we found a novel stable crystal structure of Li3PS4 under high pressures. At ambient pressure, Li3PS4 shows successive structural transitions from γ-type to β-type and from β-type to α-type with increasing temperature, as is well established. In this study, an evolutionary algorithm successfully predicted the γ-type crystal structure at ambient pressure and further predicted a possible stable δ-type crystal structure under high pressure. The stability of the obtained structures is examined in terms of both static and dynamic stability by first-principles calculations. In situ X-ray diffraction using synchrotron radiation revealed that the high-pressure phase is the predicted δ-Li3PS4 phase.

  11. Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Feng Li

    2018-01-01

    Full Text Available Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop scope of caching information table for caching replacement in cases when there is not enough cache resource available within their own space. Upon receiving a caching request, every caching node determines the weight of the required contents and provides a response according to the availability of its own caching space. Furthermore, to increase the caching efficiency from a practical perspective, we introduce the concept of quality of experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different caching allocation strategies are devised to be adopted to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.
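
    The weight-driven replacement described above might be sketched as follows; the weight values and admission rule are illustrative stand-ins, not the paper's exact scheme:

```python
def admit(cache, capacity, content, weight):
    """cache: {content_id: weight}. Admit `content` if there is room, or if
    it outweighs the least valuable cached item. Returns True when cached."""
    if content in cache:
        cache[content] = max(cache[content], weight)
        return True
    if len(cache) < capacity:
        cache[content] = weight
        return True
    victim = min(cache, key=cache.get)     # lowest-weight cached item
    if cache[victim] < weight:
        del cache[victim]
        cache[content] = weight
        return True
    return False                           # not valuable enough to cache

c = {}
admit(c, 2, "a", 5)
admit(c, 2, "b", 3)
accepted = admit(c, 2, "c", 4)             # evicts "b" (weight 3)
```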

  12. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform[1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  13. A Scalable proxy cache for Grid Data Access

    Science.gov (United States)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  14. High Performance Analytics with the R3-Cache

    Science.gov (United States)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  15. Unfolding an under-determined neutron spectrum using genetic algorithm based Monte Carlo

    International Nuclear Information System (INIS)

    Suman, V.; Sarkar, P.K.

    2011-01-01

    Spallation, in addition to the other photon-neutron reactions in target materials and different components in accelerators, may result in the production of huge amounts of energetic protons, which further leads to the production of neutrons and contributes the main component of the total dose. For dosimetric purposes in accelerator facilities, detector measurements do not directly provide the actual neutron flux values but rather a cumulative picture. To obtain the neutron spectrum from the measured data, response functions of the measuring instrument together with the measurements are fed into one of the many unfolding techniques frequently used for recovering the hidden spectral information. Here we discuss a genetic algorithm based unfolding technique which is in the process of development. A genetic algorithm is a stochastic method based on natural selection, which mimics the Darwinian theory of survival of the fittest. The method has been tested by unfolding the neutron spectra obtained from a reaction carried out at an accelerator facility, with energetic carbon ions on a thick silver target, along with the corresponding neutron response of a BC501A liquid scintillation detector. The problem dealt with here is under-determined, where the number of measurements is less than the required energy bin information. The results so obtained were compared with those obtained using the established unfolding code FERDOR, which unfolds data for completely determined problems. It is seen that the genetic algorithm based solution has a reasonable match with the results of FERDOR, when the smoothening carried out by Monte Carlo is taken into consideration. The method appears to be a promising candidate for unfolding neutron spectra in both the under-determined case and the over-determined case, where there are more measurements. The method also has the advantages of flexibility, computational simplicity, and working well without the need of any initial guess spectrum. (author)
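
    The unfolding approach can be illustrated with a toy genetic algorithm on a deliberately under-determined system (fewer measurements than energy bins); all GA parameters here (population size, mutation scale, selection rule) are assumptions for illustration, not the paper's implementation:

```python
import random

def respond(R, phi):
    """Fold a candidate spectrum phi through the response matrix R."""
    return [sum(r[j] * phi[j] for j in range(len(phi))) for r in R]

def fitness(R, m, phi):
    """Negative sum-of-squares misfit to the measurements m."""
    return -sum((a - b) ** 2 for a, b in zip(respond(R, phi), m))

def ga_unfold(R, m, bins, pop=40, gens=200, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(0, 2) for _ in range(bins)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(R, m, p), reverse=True)
        survivors = population[: pop // 2]          # elitist truncation
        children = []
        for _ in range(pop - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(bins)
            child = a[:cut] + b[cut:]               # one-point crossover
            k = rng.randrange(bins)
            child[k] = max(0.0, child[k] + rng.gauss(0, 0.1))  # mutate, keep >= 0
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: fitness(R, m, p))

# 2 measurements, 4 energy bins: under-determined on purpose
R = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
m = [2.0, 1.0]
phi = ga_unfold(R, m, bins=4)
residual = -fitness(R, m, phi)
```

    Note that many spectra satisfy these two constraints; as the abstract says, regularization (here only the non-negativity clamp) and Monte Carlo smoothening are what make the under-determined answer physically sensible.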

  16. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
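
    The C-SCAN ordering that the paper adapts can be sketched as follows (positions are abstract block numbers; the CD-ROM-specific seek-time relations are not reproduced here):

```python
def c_scan_order(head, requests):
    """Return the service order for `requests` (track/block positions) when
    the head at position `head` sweeps upward and then wraps circularly to
    the lowest pending position, always serving in one direction."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind      # sweep up, jump back, sweep up again

order = c_scan_order(50, [10, 95, 60, 20, 75])
```

    Serving in a single sweep direction is what gives C-SCAN its uniform per-stream service interval, which is why it pairs naturally with the buffer-size guarantees discussed in the abstract.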

  17. Caching at the Mobile Edge: a Practical Implementation

    DEFF Research Database (Denmark)

    Poderys, Justas; Artuso, Matteo; Lensbøl, Claus Michael Oest

    2018-01-01

    Thanks to recent advances in mobile networks, it is becoming increasingly popular to access heterogeneous content from mobile terminals. There are, however, unique challenges in mobile networks that affect the perceived quality of experience (QoE) at the user end. One such challenge is the higher latency that users typically experience in mobile networks compared to wired ones. Cloud-based radio access networks with content caches at the base stations are seen as a key contributor in reducing the latency required to access content and thus improve the QoE at the mobile user terminal. In this paper … for the mobile user obtained by caching content at the base stations. This is quantified with a comparison to non-cached content by means of ping tests (10–11% shorter times), a higher response rate for web traffic (1.73–3.6 times higher), and an improvement in the jitter (6% reduction).

  18. Decentralized Caching for Content Delivery Based on Blockchain: A Game Theoretic Perspective

    OpenAIRE

    Wang, Wenbo; Niyato, Dusit; Wang, Ping; Leshem, Amir

    2018-01-01

    Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statisti...

  19. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    Science.gov (United States)

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
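
    The Cache Common Segments idea can be sketched generically: verification results are memoized per signed path segment, so updates sharing leading segments skip repeated cryptographic work. The `verify` stand-in below is just a counter, not real signature verification, and all names are assumptions for illustration:

```python
verifications = 0

def verify(segment):
    """Stand-in for an expensive per-segment signature verification."""
    global verifications
    verifications += 1
    return True

def verify_update(path_segments, cache):
    """Verify each signed segment of a BGPSEC-style update, consulting a
    cache keyed by segment so shared segments are verified only once."""
    for seg in path_segments:
        if seg not in cache:
            cache[seg] = verify(seg)
        if not cache[seg]:
            return False
    return True

cache = {}
verify_update(("AS3:sig3", "AS2:sig2", "AS1:sig1"), cache)
verify_update(("AS4:sig4", "AS2:sig2", "AS1:sig1"), cache)  # shares 2 segments
```

    With the cache, the two updates above cost four verifications instead of six; the cache management schemes the paper studies bound how large such a segment cache may grow.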

  20. A Primer on Memory Consistency and Cache Coherence

    CERN Document Server

    Sorin, Daniel; Wood, David

    2011-01-01

    Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached

  1. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

    offers a tremendous functional advantage to a user, the fulltext download delays caused by the network and queuing in servers make the user-perceived interactive performance poor. This paper studies how effective caching of articles at the client level can be achieved as well as at intermediate points...... as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can...

  2. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo-positioning....

  3. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o

  4. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

    of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk to introduce complexity to the otherwise simple WCET analysis. In this work, we investigate three...

  5. Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Jordan, Alexander; Brandner, Florian

    2014-01-01

    of the cache content to main memory, if the content was not modified in the meantime. At first sight, this appears to be an average-case optimization. Indeed, measurements show that the number of cache blocks spilled is reduced to about 17% and 30% in the mean, depending on the stack cache size. Furthermore...

  6. Benchmarking the Algorithms to Detect Seasonal Signals Under Different Noise Conditions

    Science.gov (United States)

    Klos, A.; Bogusz, J.; Bos, M. S.

    2017-12-01

    Global Positioning System (GPS) position time series contain seasonal signals. Among others, the annual and semi-annual ones are the most powerful. Widely, these oscillations are modelled as curves with constant amplitudes, using the Weighted Least-Squares (WLS) algorithm. However, in reality, the seasonal signatures vary over time, as their geophysical causes are not constant. Different algorithms have already been used to capture this time-variability, such as Wavelet Decomposition (WD), Singular Spectrum Analysis (SSA), Chebyshev Polynomials (CP) or the Kalman Filter (KF). In this research, we employed 376 globally distributed GPS stations whose time series contributed to the newest International Terrestrial Reference Frame (ITRF2014). We show that for ca. 20% of the stations the amplitude of the seasonal signal varies over time by more than 1.0 mm. Then, we compare the WD, SSA, CP and KF algorithms on a set of synthetic time series to quantify them under different noise conditions. We show that when variations of seasonal signals are ignored, the power-law character is biased towards flicker noise. The most reliable estimates of the variations were found to be given by SSA and KF. These methods also perform the best for other noise levels, while WD, and to a lesser extent also CP, have trouble separating the seasonal signal from the noise, which leads to an underestimation in the spectral index of power-law noise of around 0.1. For real ITRF2014 GPS data we discovered that SSA and KF are capable of modelling 49-84% and 77-90%, respectively, of the variance of the true varying seasonal signals.

  7. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. In stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back end of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  8. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    Science.gov (United States)

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated the proposed model could effectively communicate uncertainties into the optimization process and generate solutions that contained a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.

  9. Designing of Computer Vision Algorithm to Detect Sweet Pepper for Robotic Harvesting Under Natural Light

    Directory of Open Access Journals (Sweden)

    A Moghimi

    2015-03-01

    Full Text Available In recent years, automation in the agricultural field has attracted more attention from researchers and greenhouse producers. The main reasons are to reduce costs, including labor cost, and to reduce the hard working conditions in the greenhouse. In the present research, a vision system for a harvesting robot was developed for the recognition of green sweet pepper on the plant under natural light. The major challenge of this study was the noticeable color similarity between sweet pepper and plant leaves. To overcome this challenge, a new texture index based on edge density approximation (EDA) was defined and utilized in combination with color indices such as hue, saturation and the excessive green index (EGI). Fifty images were captured from fifty sweet pepper plants to evaluate the algorithm. The algorithm could recognize 92 out of 107 (i.e., a detection accuracy of 86%) sweet peppers located within the workspace of the robot. The error of the system in recognizing background, mostly leaves, as green sweet pepper decreased by 92.98% when using the newly defined texture index in comparison with color analysis alone. This shows the importance of integrating texture with color features when recognizing sweet peppers. The main reasons for errors, besides color similarity, were the waxy and rough surface of the sweet pepper, which cause higher reflectance and non-uniform lighting on the surface, respectively.
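
    The combination of a color index with an edge-density texture index might be sketched as follows; the 3x3 intensity patches, thresholds, and function names are toy values and assumptions, not the paper's parameters:

```python
def edge_density(patch):
    """Fraction of horizontally adjacent pixel pairs whose intensity jump
    exceeds a small threshold: leaves are rougher, fruit is waxy-smooth."""
    edges = total = 0
    for row in patch:
        for a, b in zip(row, row[1:]):
            total += 1
            edges += abs(a - b) > 10
    return edges / total

def looks_like_pepper(patch, green_fraction):
    # a candidate region must be green AND smooth (low edge density)
    return green_fraction > 0.6 and edge_density(patch) < 0.3

smooth = [[100, 102, 101], [101, 100, 103], [102, 101, 100]]   # fruit-like
rough = [[100, 140, 90], [150, 80, 160], [70, 145, 95]]        # leaf-like
pepper = looks_like_pepper(smooth, green_fraction=0.8)
leaf = looks_like_pepper(rough, green_fraction=0.8)
```

    Both patches are equally green, so a color index alone cannot separate them; it is the texture term that rejects the rough, leaf-like patch.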

  10. Real-Time Attitude Control Algorithm for Fast Tumbling Objects under Torque Constraint

    Science.gov (United States)

    Tsuda, Yuichi; Nakasuka, Shinichi

    This paper describes a new control algorithm for achieving arbitrary attitude and angular velocity states of a rigid body, even fast and complicated tumbling rotations, under practical constraints. This technique is expected to be applied to attitude motion synchronization for capturing a non-cooperative, tumbling object in missions such as removal of debris from orbit, servicing broken-down satellites for repair or inspection, and rescue of manned vehicles. For this objective, we introduced a novel control algorithm called the Free Motion Path Method (FMPM) in a previous paper, formulated as an open-loop controller. The next step of this consecutive work is to derive a closed-loop FMPM controller, and as a preliminary step toward that objective, this paper derives a conservative state-variable representation of rigid-body dynamics. Six-dimensional conservative state variables are introduced in place of the general angular velocity-attitude angle representation, and the conversion between the two representations is shown.

  11. Detection of structural damage using novelty detection algorithm under variational environmental and operational conditions

    Science.gov (United States)

    El Mountassir, M.; Yaacoubi, S.; Dahmene, F.

    2015-07-01

    Novelty detection is a widely used algorithm in different fields of study due to its ability to recognize abnormalities in a specific process and thus ensure proper operation under normal conditions. In the context of Structural Health Monitoring (SHM), this method is utilized as a damage detection technique, because the presence of defects can be considered abnormal for the structure. Nevertheless, the performance of such a method can be jeopardized if the structure operates in harsh environmental and operational conditions (EOCs). In this paper, a novelty detection statistical technique is used to investigate the detection of damage under various EOCs. Experiments were conducted with different scenarios: damage sizes and shapes. EOC effects were simulated by adding stochastic noise to the collected experimental data. Different levels of noise were studied to determine the accuracy and performance of the proposed method.
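
    A minimal Python sketch of the approach described above: learn a baseline from healthy data, flag deviations beyond a threshold as novel, and add stochastic noise to simulate EOC effects. All distributions, the RMS-like feature, and the 3-sigma threshold are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# A scalar feature (e.g. signal RMS) extracted from the healthy structure.
baseline = rng.normal(1.0, 0.05, 200)
mu, sigma = baseline.mean(), baseline.std()

def is_novel(feature, k=3.0):
    """Flag a measurement as novel if it deviates more than k standard
    deviations from the healthy baseline."""
    return abs(feature - mu) / sigma > k

healthy = rng.normal(1.0, 0.05, 50)   # new measurements, undamaged state
damaged = rng.normal(1.4, 0.05, 50)   # damage shifts the feature
damaged = damaged + rng.normal(0.0, 0.05, 50)  # noise simulating EOC effects

false_alarms = int(sum(is_novel(x) for x in healthy))
detections = int(sum(is_novel(x) for x in damaged))
```

Raising the simulated noise level widens the damaged distribution toward the threshold, which is exactly the accuracy degradation the paper studies.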

  12. Detection of structural damage using novelty detection algorithm under variational environmental and operational conditions

    International Nuclear Information System (INIS)

    Mountassir, M El; Yaacoubi, S; Dahmene, F

    2015-01-01

    Novelty detection is a widely used algorithm in different fields of study due to its ability to recognize abnormalities in a specific process and thus ensure proper operation under normal conditions. In the context of Structural Health Monitoring (SHM), this method is utilized as a damage detection technique, because the presence of defects can be considered abnormal for the structure. Nevertheless, the performance of such a method can be jeopardized if the structure operates in harsh environmental and operational conditions (EOCs). In this paper, a novelty detection statistical technique is used to investigate the detection of damage under various EOCs. Experiments were conducted with different scenarios: damage sizes and shapes. EOC effects were simulated by adding stochastic noise to the collected experimental data. Different levels of noise were studied to determine the accuracy and performance of the proposed method. (paper)

  13. MPPT-Based Control Algorithm for PV System Using iteration-PSO under Irregular shadow Conditions

    Directory of Open Access Journals (Sweden)

    M. Abdulkadir

    2017-02-01

    Full Text Available Conventional maximum power point tracking (MPPT) techniques can hardly track the global maximum power point (GMPP) because the power-voltage characteristics of photovoltaic (PV) arrays exhibit multiple local peaks under irregular shadow, so they easily fall into a local maximum power point. To tackle this deficiency, an efficient Iteration Particle Swarm Optimization (IPSO) has been developed to improve the solution quality and convergence speed of traditional PSO, so that it can effectively track the GMPP under irregular shadow conditions. The proposed technique has such advantages as a simple structure, fast response, strong robustness, and convenient implementation. It is applied to MPPT control of a PV system under irregular shadow to solve the multi-peak optimization problem of partial shading. Recently, dynamic MPPT performance under varying irradiance conditions has received increasing attention in the PV community, and the European standard EN 50530, which defines recommended varying-irradiance profiles, was released lately, requiring researchers to improve dynamic MPPT performance. This paper therefore evaluates the dynamic MPPT performance using the EN 50530 standard. The simulation results show that the iteration-PSO method tracks the global MPP quickly and achieves higher tracking speed and dynamic MPPT efficiency under EN 50530 than conventional PSO.
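
    The global-peak-tracking idea can be sketched with a plain PSO on a synthetic two-peak P-V curve. The iteration-PSO refinements and the EN 50530 profiles are not modeled here; the curve shape and all swarm parameters below are invented for illustration.

```python
import random

random.seed(3)

def pv_power(v):
    """Synthetic P-V curve under partial shading: a local peak of 40 W
    near 15 V and the global peak of 90 W near 55 V."""
    local = 40.0 * max(0.0, 1 - ((v - 15.0) / 10.0) ** 2)
    global_ = 90.0 * max(0.0, 1 - ((v - 55.0) / 15.0) ** 2)
    return local + global_

def pso_mppt(n=24, iters=60, vmin=0.0, vmax=80.0, w=0.6, c1=1.8, c2=1.8):
    """Plain PSO search for the operating voltage maximising pv_power."""
    pos = [random.uniform(vmin, vmax) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = max(pos, key=pv_power)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(vmax, max(vmin, pos[i] + vel[i]))
            if pv_power(pos[i]) > pv_power(pbest[i]):
                pbest[i] = pos[i]
            if pv_power(pos[i]) > pv_power(gbest):
                gbest = pos[i]
    return gbest

v_mpp = pso_mppt()
```

Because particles are spread over the whole voltage range, the swarm escapes the local 40 W peak that would trap a hill-climbing tracker such as perturb-and-observe.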

  14. The impact of using combinatorial optimisation for static caching of posting lists

    DEFF Research Database (Denmark)

    Petersen, Casper; Simonsen, Jakob Grue; Lioma, Christina

    2015-01-01

    Caching posting lists can reduce the amount of disk I/O required to evaluate a query. Current methods use optimisation procedures for maximising the cache hit ratio. A recent method selects posting lists for static caching in a greedy manner and obtains higher hit rates than standard cache eviction policies such as LRU and LFU. However, a greedy method does not formally guarantee an optimal solution. We investigate whether the use of methods guaranteed, in theory, to find an approximately optimal solution would yield higher hit rates. Thus, we cast the selection of posting lists for caching...
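
    The greedy baseline the abstract refers to can be sketched as a knapsack-style heuristic: rank posting lists by query frequency per unit size and fill the cache in that order. The data below is invented, and the frequency/size ratio is one common ranking criterion, not necessarily the exact one used in the paper.

```python
def greedy_static_cache(posting_lists, capacity):
    """Fill a static cache greedily: rank posting lists by query
    frequency per unit size, then add them until capacity runs out."""
    ranked = sorted(posting_lists, key=lambda p: p["freq"] / p["size"],
                    reverse=True)
    cached, used = [], 0
    for p in ranked:
        if used + p["size"] <= capacity:
            cached.append(p["term"])
            used += p["size"]
    return cached

lists = [
    {"term": "the", "freq": 900, "size": 500},
    {"term": "cache", "freq": 300, "size": 60},
    {"term": "grid", "freq": 120, "size": 40},
    {"term": "squid", "freq": 10, "size": 5},
]
selection = greedy_static_cache(lists, capacity=100)
```

Note how the very frequent but very large list for "the" is skipped: per-byte value, not raw popularity, drives the selection, yet nothing guarantees this greedy packing is optimal, which motivates the approximation algorithms the paper investigates.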

  15. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
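
    The article's exact efficiency formula is not reproduced here, but the core of the phenomenon can be illustrated with a simpler, well-known relation: in a cache with a power-of-two number of sets, a constant stride of s cache lines cycles through only num_sets / gcd(num_sets, s) distinct sets, so innocent-looking power-of-two strides map all accesses onto a handful of sets.

```python
from math import gcd

def sets_touched(num_sets, stride_lines):
    """Number of distinct cache sets exercised by a constant stride of
    stride_lines cache lines in a cache with num_sets sets."""
    return num_sets // gcd(num_sets, stride_lines)

# A power-of-two stride collides onto few sets; an odd stride uses all.
unit = sets_touched(256, 1)    # stride 1: all 256 sets
bad = sets_touched(256, 64)    # stride 64: only 4 sets -> thrashing
odd = sets_touched(256, 63)    # stride 63: all 256 sets again
```

This is why padding an array dimension by one element (turning a power-of-two stride into an odd one) is a classic fix for the sharp efficiency drops the article analyzes.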

  16. Dynamic web cache publishing for IaaS clouds using Shoal

    Science.gov (United States)

    Gable, Ian; Chester, Michael; Armstrong, Patrick; Berghaus, Frank; Charbonneau, Andre; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan

    2014-06-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache.

  17. Dynamic web cache publishing for IaaS clouds using Shoal

    International Nuclear Information System (INIS)

    Gable, Ian; Chester, Michael; Berghaus, Frank; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan; Armstrong, Patrick; Charbonneau, Andre

    2014-01-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache

  18. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic pri...

  19. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  20. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    are composed of higher-order templates that are plugged together to construct complete documents. We show how to exploit this feature to provide an automatic fine-grained caching of document templates, based on the service source code. A service transmits not the full HTML document but instead a compact JavaScript...

  1. A distributed storage system with dCache

    Science.gov (United States)

    Behrmann, G.; Fuhrmann, P.; Grønager, M.; Kleist, J.

    2008-07-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is, in contrast to many other Tier 1 centers, distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but it also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  2. A distributed storage system with dCache

    International Nuclear Information System (INIS)

    Behrmann, G; Groenager, M; Fuhrmann, P; Kleist, J

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is, in contrast to many other Tier 1 centers, distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but it also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth

  3. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and thus the deciding factor for overall throughput. Second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.

  4. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    In a real-time system, the use of a scratchpad memory can mitigate the difficulties related to analyzing data caches, whose behavior is inherently hard to predict. We propose to use a scratchpad memory for stack allocated data. While statically allocating stack frames for individual functions to ...

  5. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and provide fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of hard disks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  6. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview over our analysis of the stream ciphers that were selected for phase 3...

  7. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
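
    A trace-driven cache simulation of this kind is easy to sketch. The toy LRU cache and the made-up lookup trace below only illustrate the methodology; they bear no relation to the Sprite traces or the hit rates reported above.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache for replaying a trace of directory lookups."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, key):
        """Return True on a hit; on a miss, insert the key, evicting the
        least recently used entry if the cache is full."""
        if key in self.entries:
            self.entries.move_to_end(key)
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[key] = True
        return False

trace = ["/usr", "/tmp", "/usr", "/home", "/usr", "/tmp", "/etc", "/usr"]
cache = LRUCache(capacity=3)
hits = sum(cache.lookup(path) for path in trace)
hit_rate = hits / len(trace)
```

Replaying a real trace through such a simulator, while varying the capacity and the cached unit (whole directories versus individual entries), yields hit-rate curves like those the paper reports.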

  8. Concrete Mix Design for Service Life of RC Structures under Carbonation Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Seung-Jun Kwon

    2014-01-01

    Full Text Available Steel corrosion in reinforced concrete (RC) structures is such a critical problem for structural safety that much research has been performed on maintaining the required performance during the intended service life. This paper presents a numerical technique for obtaining optimum concrete mix proportions through a genetic algorithm (GA) for RC structures under carbonation, which is considered a serious deterioration mechanism in underground sites and big cities. For this study, mix proportions and CO2 diffusion coefficients are analyzed from previous studies, and the fitness function for the CO2 diffusion coefficient is derived through regression analysis. The fitness function from 69 test results includes five mix-proportion variables: w/c (water-to-cement ratio), cement content, sand content percentage, coarse aggregate content, and R.H. (relative humidity). Through the GA technique, simulated mix proportions are obtained for 12 verification cases and show reasonable results with an average relative error of 4.6%. Assuming an intended service life and design parameters, intended CO2 diffusion coefficients and cement contents are determined and the related mix proportions are simulated. The proposed technique can provide initial concrete mix proportions that satisfy the service life under carbonation.

  9. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...

  10. Assessing the Stability and Robustness of Semantic Web Services Recommendation Algorithms Under Profile Injection Attacks

    Directory of Open Access Journals (Sweden)

    GRANDIN, P. H.

    2014-06-01

    Full Text Available Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web services recommendation when submitted to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.
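
    The two baseline predictors are simple enough to state directly: IMEAN averages the ratings received by the target item, UMEAN the ratings given by the target user. The rating matrix below is invented for illustration (0 marks a missing rating), and the k-NN and k-means variants are not sketched.

```python
import numpy as np

# Toy user-item rating matrix; rows are users, columns items, 0 = missing.
R = np.array([
    [5, 3, 0],
    [4, 0, 2],
    [0, 4, 3],
], dtype=float)

def imean(R, item):
    """IMEAN: predict the average of the ratings the item has received."""
    col = R[:, item]
    return col[col > 0].mean()

def umean(R, user):
    """UMEAN: predict the average of the ratings the user has given."""
    row = R[user, :]
    return row[row > 0].mean()

pred_item = imean(R, 2)   # item 2 was rated 2 and 3 -> 2.5
pred_user = umean(R, 0)   # user 0 rated 5 and 3 -> 4.0
```

The asymmetry to attacks follows from these definitions: injected profiles add new rows, so they shift the per-item column averages used by IMEAN but leave a genuine user's own row average, used by UMEAN, untouched.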

  11. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. Cache is responsible for a major part (approx. 50%) of processor energy consumption. This paper presents a high-level implementation of a micropipelined asynchronous architecture of an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-Elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instruction. The architecture is implemented in a Field Programmable Gate Array (FPGA) chip using Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL) along with advanced synthesis and place-and-route tools.

  12. Seed perishability determines the caching behaviour of a food-hoarding bird.

    Science.gov (United States)

    Neuschulz, Eike Lena; Mueller, Thomas; Bollmann, Kurt; Gugerli, Felix; Böhning-Gaese, Katrin

    2015-01-01

    Many animals hoard seeds for later consumption and establish seed caches that are often located at sites with specific environmental characteristics. One explanation for the selection of non-random caching locations is the avoidance of pilferage by other animals. Another possible hypothesis is that animals choose locations that reduce the perishability of stored food, allowing the consumption of unspoiled food items over long time periods. We examined seed perishability and pilferage avoidance as potential drivers of the caching behaviour of spotted nutcrackers (Nucifraga caryocatactes) in the Swiss Alps, where the birds are specialized on caching seeds of Swiss stone pine (Pinus cembra). We used seedling establishment as an inverse measure of seed perishability, as established seedlings can no longer be consumed by nutcrackers. We recorded the environmental conditions (i.e. canopy openness and soil moisture) of seed caching, seedling establishment and pilferage sites. Our results show that sites of seed caching and seedling establishment had opposed microenvironmental conditions. Canopy openness and soil moisture were negatively related to seed caching but positively related to seedling establishment, i.e. nutcrackers cached seeds preferentially at sites where seed perishability was low. We found no effects of environmental factors on cache pilferage, i.e. neither canopy openness nor soil moisture had significant effects on pilferage rates. We thus could not relate caching behaviour to pilferage avoidance. Our study highlights the importance of seed perishability as a mechanism for seed-caching behaviour, which should be considered in future studies. Our findings could have important implications for the regeneration of plants whose seeds are dispersed by seed-caching animals, as the potential of seedlings to establish may strongly decrease if animals cache seeds at sites that favour seed perishability rather than seedling establishment. © 2014 The Authors. Journal

  13. Food availability and animal space use both determine cache density of Eurasian red squirrels.

    Directory of Open Access Journals (Sweden)

    Ke Rong

    Full Text Available Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increase the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and main cache losses. We conducted systematic cache sampling investigations to estimate the effects of food availability on cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache pattern. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not in response to decreased food availability. The cache density declined with the hoarding distance. Cache density was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. The pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that 1) animal space use is an important factor that affects food hoarding distance and associated cache densities, 2) animals employ different hoarding strategies based on food availability, and 3) seed dispersal outside the original stand is stimulated in poor seed years.

  14. Food Availability and Animal Space Use Both Determine Cache Density of Eurasian Red Squirrels

    Science.gov (United States)

    Rong, Ke; Yang, Hui; Ma, Jianzhang; Zong, Cheng; Cai, Tijiu

    2013-01-01

    Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increase the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and main cache losses. We conducted systematic cache sampling investigations to estimate the effects of food availability on cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache pattern. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not in response to decreased food availability. The cache density declined with the hoarding distance. Cache density was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. The pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that 1) animal space use is an important factor that affects food hoarding distance and associated cache densities, 2) animals employ different hoarding strategies based on food availability, and 3) seed dispersal outside the original stand is stimulated in poor seed years. PMID:24265833

  15. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  16. I-Structure software cache for distributed applications

    Directory of Open Access Journals (Sweden)

    Alfredo Cristóbal Salas

    2004-01-01

    Full Text Available In this article, we describe the I-Structure software cache for distributed-memory environments (D-ISSC), which takes advantage of data locality while preserving the latency-tolerance capability of I-Structure memory systems. The programming facilities of MPI programs hide synchronization problems from the programmer. Our experimental evaluation using a benchmark suite indicates that PC clusters equipped with I-Structure and its D-ISSC caching mechanism are more robust. The system can speed up both regular and irregular communication-intensive applications.

  17. Study on data acquisition system based on reconfigurable cache technology

    Science.gov (United States)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems; it represents the waveform processing capability of the system per unit time. The higher the waveform capture rate, the larger the chance of capturing elusive events and the more reliable the test result. This paper first analyzes the impact of several factors on the waveform capture rate of the system, then proposes a novel technology based on a reconfigurable cache to optimize the system architecture. The simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in system cost; the proposed technology has broad application prospects.

  18. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of a cache mechanism for complicated design optimization. While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend in this progress indicates that more complicated applications will become the next target of design optimization beyond the growth of computational resources. As the progress of the past two decades required response surface techniques, decomposition techniques, etc., a new framework must be introduced for the future of design optimization methods. This paper proposes a possibility of what we call a cache mechanism for mediating the coming challenge and briefly demonstrates some promise of the idea of Voronoi-diagram-based cumulative approximation as an example of its implementation, development of strict robust design, and extension of design optimization for product variety.

  19. Storage and Caching: Synthesis of Flow-based Microfluidic Biochips

    OpenAIRE

    Tseng, Tsun-Ming; Li, Bing; Ho, Tsung-Yi; Schlichtmann, Ulf

    2017-01-01

    Flow-based microfluidic biochips are widely used in lab- on-a-chip experiments. In these chips, devices such as mixers and detectors connected by micro-channels execute specific operations. Intermediate fluid samples are saved in storage temporarily until target devices become avail- able. However, if the storage unit does not have enough capacity, fluid samples must wait in devices, reducing their efficiency and thus increasing the overall execution time. Consequently, storage and caching of...

  20. Justice and Immigrant Latino Recreation Geography in Cache Valley, Utah

    OpenAIRE

    Madsen, Jodie; Radel, Claudia; Endter-Wada, Joanna

    2014-01-01

    Latinos are the largest U.S. non-mainstreamed ethnic group, and social and environmental justice considerations dictate recreation professionals and researchers meet their recreation needs. This study reconceptualizes this diverse group’s recreation patterns, looking at where immigrant Latino individuals in Cache Valley, Utah do recreate rather than where they do not. Through qualitative interviews and interactive mapping, thirty participants discussed what recreation means to them and explai...

  1. Icarus: a caching simulator for information centric networking (ICN)

    OpenAIRE

    Saino, L.; Psaras, I.; Pavlou, G.

    2014-01-01

    Information-Centric Networking (ICN) is a new networking paradigm proposing a shift of the main network abstraction from host identifiers to location-agnostic content identifiers. So far, several architectures have been proposed implementing this paradigm shift. A key feature, common to all proposed architectures, is the in-network caching capability, enabled by the location-agnostic, explicit naming of contents. This aspect, in particular, has recently received considerable attention by ...

  2. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
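    One standard way to impose such an orthogonality constraint inside an iterative learning loop is to re-orthonormalize the dictionary rows after each update, e.g. by Gram-Schmidt. The sketch below is generic and uses our own naming; the cited algorithm builds the constraint into its optimization criterion rather than projecting like this:

```python
def orthonormalize(rows, eps=1e-12):
    """Gram-Schmidt orthonormalization of a list of row vectors.
    Re-applying this after each dictionary update keeps the rows
    orthonormal, which rules out trivial solutions such as the null
    dictionary (illustrative; not the paper's exact update)."""
    basis = []
    for v in rows:
        w = list(v)
        for b in basis:
            # Subtract the projection of w onto each earlier basis row.
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > eps:  # drop (numerically) linearly dependent rows
            basis.append([x / norm for x in w])
    return basis

D = orthonormalize([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```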

  3. IMPROVED VIRTUAL CIRCUIT ROUTING ALGORITHM FOR WIRELESS SENSOR NETWORKS UNDER THE ASPECT OF POWER AWARENESS

    Directory of Open Access Journals (Sweden)

    Abid Ali Minhas

    2006-06-01

    Full Text Available Routing algorithms have shown their importance in power-aware wireless micro-sensor networks. In this paper, we first present the Virtual Circuit Routing Algorithm (VCRA) for wireless sensor networks. We analyze the power utilized by nodes to lengthen battery life and thus improve the lifetime of the wireless sensor network, and compare VCRA with Multihoprouter, an algorithm developed by UC Berkeley. Then we present the Improved Virtual Circuit Routing Algorithm (IVCRA), an improved form of VCRA in which a node-failure detection and path-repair scheme has been implemented. We also present an energy analysis of IVCRA and argue that IVCRA is the better choice. We first implement our routing algorithms in the TOSSIM simulator and then on real hardware of the mica2 mote-sensor network platform, and demonstrate reliable routing of data packets from different nodes to the base station. The motes used as nodes in our mote-sensor network are from Berkeley, USA. Using the POWERTOSSIM simulator, we estimate and present the energy utilized by different nodes of the network. At the end we compare our work with the network layer of ZigBee/IEEE 802.15.4, an emerging standard for wireless sensor networks, and compare its energy efficiency with the packet size chosen for our algorithm.

  4. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Full Text Available Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  5. Design of a shared coherent cache for a multiple channel architecture

    Science.gov (United States)

    Reisner, John A.

    1993-12-01

    The Multiple Channel Architecture (MCA) is a recently proposed computer architecture which uses fiber optic communications to overcome many of the problems associated with interconnection networks. A detailed MCA simulator exists which faithfully simulates an MCA system; however, the original version of the simulator did not cache shared data. In order to improve the performance of the MCA, a cache coherency protocol was developed and implemented in the simulator. The protocol has two significant features: (1) a time-division multiplexed (TDM) communication bus is used for coherency traffic, and (2) the shared data is cached in an independent cache. The modified simulator was then used to test the protocol. Two applications and six test configurations were used throughout the testing. Experimental results showed that the protocol consistently improved system performance. Also, a proof-of-concept experiment indicated that performance improvements can be attained by varying cache parameters between the independent shared and private data caches.

  6. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    Science.gov (United States)

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  7. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web application characteristics. Thus, new prefetching policies must be loaded dynamically as needs change. Most Web caches are large C programs, and thus adding one or more prefetching policies to an existing Web cache is a daunting task. The main problem is that prefetching concerns crosscut the cache structure... these issues. In particular, µ-Dyner provides a low overhead for aspect invocation that meets the performance needs of Web caches.

  8. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
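    List ranking, the core primitive mentioned above, can be illustrated with Wyllie's classic pointer-jumping algorithm, simulated sequentially here (the cited paper develops an I/O-efficient PEM variant, not this shared-memory formulation):

```python
def list_rank(succ):
    """Rank each node of a linked list by pointer jumping (Wyllie's
    classic algorithm, simulated sequentially). succ[i] is the
    successor of node i, with succ[i] == i marking the tail.
    Returns rank[i] = number of links from i to the tail."""
    n = len(succ)
    nxt = list(succ)
    rank = [0 if nxt[i] == i else 1 for i in range(n)]
    # O(log n) rounds; each round doubles the distance jumped.
    for _ in range(max(1, n).bit_length()):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

# List 0 -> 2 -> 1 -> 3 (tail): ranks are distances to the tail.
ranks = list_rank([2, 3, 1, 3])
```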

  9. Application and research of fuzzy clustering analysis algorithm under “micro-lecture” English teaching mode

    Directory of Open Access Journals (Sweden)

    Shi Ying

    2016-01-01

    Full Text Available The fuzzy clustering algorithm classifies data or indicators by degree of similarity, on the principle that individuals of the same type are more similar to one another than individuals of different types. It establishes clear category boundaries, forms relationship clusters of arbitrary shape during the solving process, and takes the research indicators as random inputs, in order to accurately analyze the significance of the indicators in the algorithm. An evaluation value for the clustering analysis can be obtained by constructing a fuzzy factor set based on membership analysis, and the evaluation result can be interpreted with reference to the evaluation indicators of the fuzzy clustering analysis. Based on the fuzzy clustering analysis algorithm, the "micro-lecture" English teaching mode can be evaluated and the analysis indicators can be rationally established, with good applicability of the algorithm.
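    The membership-based clustering described above can be illustrated with fuzzy c-means, one common fuzzy clustering algorithm (the record does not name its exact variant, so this sketch is purely illustrative):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means sketch for 1-D data.
    Returns (centers, memberships); memberships[j][i] is the degree
    to which point j belongs to cluster i, and each row sums to 1."""
    pts = sorted(points)
    # Deterministic init: spread initial centers across the data range.
    centers = [pts[(len(pts) - 1) * i // max(1, c - 1)] for i in range(c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # Membership update: closer centers receive higher degrees.
        for j, x in enumerate(points):
            d = [abs(x - ck) + 1e-12 for ck in centers]
            for i in range(c):
                u[j][i] = 1.0 / sum((d[i] / dk) ** (2 / (m - 1)) for dk in d)
        # Center update: membership-weighted means.
        for i in range(c):
            w = [u[j][i] ** m for j in range(len(points))]
            centers[i] = sum(wj * x for wj, x in zip(w, points)) / sum(w)
    return centers, u

centers, u = fuzzy_c_means([1.0, 1.1, 0.9, 8.0, 8.2, 7.9])
```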

  10. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work on distributed multi-spacecraft systems has resulted in a number of architectures and algorithms for accurate estimation of spacecraft and formation...

  11. MPPT Control Strategy of PV Based on Improved Shuffled Frog Leaping Algorithm under Complex Environments

    OpenAIRE

    Nie, Xiaohua; Nie, Haoyao

    2017-01-01

    This work presents a maximum power point tracking (MPPT) based on the particle swarm optimization (PSO) improved shuffled frog leaping algorithm (PSFLA). The swarm intelligence algorithm (SIA) has vast computing ability. The MPPT control strategies of PV array based on SIA are attracting considerable interests. Firstly, the PSFLA was proposed by adding the inertia weight factor w of PSO in standard SFLA to overcome the defect of falling into the partial optimal solutions and slow convergence ...

  12. PARTIAL TRAINING METHOD FOR HEURISTIC ALGORITHM OF POSSIBLE CLUSTERIZATION UNDER UNKNOWN NUMBER OF CLASSES

    Directory of Open Access Journals (Sweden)

    D. A. Viattchenin

    2009-01-01

    Full Text Available A method for constructing a subset of labeled objects which is used in a heuristic algorithm of possible clusterization with partial training is proposed in the paper. The method is based on data preprocessing by the heuristic algorithm of possible clusterization using a transitive closure of a fuzzy tolerance. Method efficiency is demonstrated by way of an illustrative example.

  13. An Online Algorithm for Learning Buyer Behavior under Realistic Pricing Restrictions

    OpenAIRE

    Saharoy, Debjyoti; Tulabandhula, Theja

    2018-01-01

    We propose a new efficient online algorithm to learn the parameters governing the purchasing behavior of a utility maximizing buyer, who responds to prices, in a repeated interaction setting. The key feature of our algorithm is that it can learn even non-linear buyer utility while working with arbitrary price constraints that the seller may impose. This overcomes a major shortcoming of previous approaches, which use unrealistic prices to learn these parameters making them unsuitable in practice.

  14. Comparison of stochastic search optimization algorithms for the laminated composites under mechanical and hygrothermal loadings

    OpenAIRE

    Aydın, Levent; Artem, Hatice Seçil

    2011-01-01

    The aim of the present study is to design the stacking sequence of the laminated composites that have low coefficient of thermal expansion and high elastic moduli. In design process, multi-objective genetic algorithm optimization of the carbon fiber laminated composite plates is verified by single objective optimization approach using three different stochastic optimization methods: genetic algorithm, generalized pattern search, and simulated annealing. However, both the multi- and single-obj...

  15. Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks

    OpenAIRE

    Baştuğ, Ejder; Bennis, Mehdi; Debbah, Mérouane

    2014-01-01

    This article explores one of the key enablers of beyond 4G wireless networks leveraging small cell network deployments, namely proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context-awareness and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands, via caching at base stations and users' devices. In order to show the effectiveness of proactive caching,...

  16. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of Internet clients keeps growing over time, so Internet access response becomes increasingly slow. To improve access speed, a cache on the proxy server is required. This research aims to analyze the performance of a proxy server on an Internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the proxy server was designed using a simulation model of an Internet network consisting of a Web server, Proxy ...

  17. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Viet Tra

    2017-12-01

    Full Text Available This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing’s speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds.
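    One row of a spectral energy map, i.e., the energy of a signal in each of several frequency bands, can be computed as follows (a minimal sketch using a direct DFT; the authors' exact SEM construction and AE preprocessing are not specified in the record):

```python
import cmath
import math

def band_energies(signal, n_bands=4):
    """Split a signal's DFT magnitude spectrum into equal-width bands
    and return the energy in each band (a 1-D slice of a spectral
    energy map; illustrative, not the authors' code)."""
    n = len(signal)
    half = n // 2  # keep only the non-negative-frequency half
    spectrum = []
    for k in range(half):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s) ** 2)
    width = half // n_bands
    return [sum(spectrum[b * width:(b + 1) * width]) for b in range(n_bands)]

# A pure low-frequency tone concentrates its energy in the first band.
sig = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
e = band_energies(sig)
```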

  18. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of a cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory-based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA referred to as single length cycle 2-attractor cellular automata (TACA) has been planted to detect the inconsistencies in cache line states of processors’ private caches. The TACA module captures the coherence status of the CMPs’ cache system and memorizes any inconsistent recording of the cache line states during the processors’ reference to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then to settle to an attractor state indicating a quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs’ processor pool ensures better efficiency in determining the inconsistencies by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of the proposed coherence verification module is much lower than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs’ cache system.

  19. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. The BACH takes full advantage of explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even shows a slight improvement counting in hardware overhead.

  20. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve the data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client's mobility make cache management a challenge. In this paper, we propose a utility based cache replacement policy, least utility value (LUV), to improve the data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as replacement policy. In ZC, one-hop neighbors of a client form a cooperation zone since the cost for communication with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
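    The replacement rule at the heart of such a policy, evict the cached item with the lowest utility value until the new item fits, can be sketched as follows (the utility weighting shown is illustrative; the paper's LUV combines access probability, requester-to-source distance, coherency, and data size):

```python
def evict_for(cache, needed, capacity, utility):
    """Least-utility-value eviction sketch: free space for a new item
    of size `needed` by repeatedly removing the cached item with the
    lowest utility. `cache` maps item -> size; `utility(item)` scores
    items. Returns the list of evicted items."""
    evicted = []
    used = sum(cache.values())
    while used + needed > capacity and cache:
        victim = min(cache, key=utility)  # lowest-utility item goes first
        used -= cache.pop(victim)
        evicted.append(victim)
    return evicted

# Items scored by access probability / (size * distance): low-value,
# bulky, far-away items are evicted first (weights are illustrative).
items = {"a": 2, "b": 4, "c": 1}
score = {"a": 0.9 / (2 * 1), "b": 0.5 / (4 * 3), "c": 0.4 / (1 * 2)}
out = evict_for(items, needed=3, capacity=8, utility=score.get)
```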

  1. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
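    The two baseline placement schemes compared above, uniform and Zipf caching, can be illustrated with a simplified single-cache hit-probability calculation (a stand-in for the paper's stochastic-geometry analysis; the parameter values are arbitrary):

```python
def zipf_popularity(n, alpha=0.8):
    """Zipf file-popularity distribution over n files, rank 1 most
    popular; the skew alpha = 0.8 is an illustrative choice."""
    w = [r ** -alpha for r in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]

def hit_prob(pop, cache_probs):
    """Hit probability when file i is stored with probability
    cache_probs[i] (a simplified single-cache model)."""
    return sum(p * c for p, c in zip(pop, cache_probs))

pop = zipf_popularity(100)
cache_size = 10
# Popularity-aware placement (cache the most popular files) vs. uniform.
top = [1.0] * cache_size + [0.0] * (100 - cache_size)
uniform = [cache_size / 100] * 100
```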

  2. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  3. Audience effects on food caching in grey squirrels (Sciurus carolinensis): evidence for pilferage avoidance strategies.

    Science.gov (United States)

    Leaver, Lisa A; Hopewell, Lucy; Caldwell, Christine; Mallarky, Lesley

    2007-01-01

    If food pilferage has been a reliable selection pressure on food caching animals, those animals should have evolved the ability to protect their caches from pilferers. Evidence that animals protect their caches would support the argument that pilferage has been an important adaptive challenge. We observed naturally caching Eastern grey squirrels (Sciurus carolinensis) in order to determine whether they used any evasive tactics in order to deter conspecific and heterospecific pilferage. We found that grey squirrels used evasive tactics when they had a conspecific audience, but not when they had a heterospecific (corvid) audience. When other squirrels were present, grey squirrels spaced their caches farther apart and preferentially cached when oriented with their backs to other squirrels, but no such effect was found when birds were present. Our data provide the first evidence that caching mammals are sensitive to the risk of pilferage posed by an audience of conspecifics, and that they utilise evasive tactics that should help to minimise cache loss. We discuss our results in relation to recent theory of reciprocal pilferage and compare them to behaviours shown by caching birds.

  4. The Aquarius IIU Node: The Caches, the Address Translation Unit, and the VME Bus Interface

    Science.gov (United States)

    1989-08-01

    While the path between the caches and the processor/prefetcher has a 32-bit bus, the cache uses a 128-bit bus to send blocks to the VME controller. ... (SUN3/160). On every node, there are two controllers, for the data and instruction caches, that cooperate to support Berkeley's snooping cache-block state

  5. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
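    The dual-timestamp eviction policy described above, evict a stream element when either its window slides past its arrival time or its source-assigned expiration passes, whichever comes first, can be sketched as follows (class and method names are our own, not C-SPARQL's or the framework's API):

```python
import heapq

class ExpiringWindow:
    """Sliding-window cache sketch that evicts on whichever comes
    first: the arrival time falling out of the window, or the
    source-assigned expiration timestamp."""

    def __init__(self, window_sec):
        self.window = window_sec
        self.heap = []  # (eviction_time, arrival, item), min-heap

    def add(self, item, arrival, expires):
        # Evict at min(arrival + window, expiration).
        evict_at = min(arrival + self.window, expires)
        heapq.heappush(self.heap, (evict_at, arrival, item))

    def live(self, now):
        # Lazily drop everything whose eviction time has passed.
        while self.heap and self.heap[0][0] <= now:
            heapq.heappop(self.heap)
        return [item for _, _, item in self.heap]

w = ExpiringWindow(window_sec=10)
w.add("a", arrival=0, expires=100)  # leaves when the window passes t=10
w.add("b", arrival=0, expires=3)    # expires early, at t=3
```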

  6. Robust consensus algorithm for multi-agent systems with exogenous disturbances under convergence conditions

    Science.gov (United States)

    Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong

    2014-09-01

    In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimization of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.

  7. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays ( Aphelocoma californica ) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  8. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    We define a measure for locality that is based on Denning's working set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.

  9. Parameterized analysis of paging and list update algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2009-01-01

    It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning's working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance.
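The two records above analyze paging algorithms in terms of the cache size and Denning's working set measure. As a rough illustration of the quantities involved, here is a minimal Python sketch (not from the papers) that computes the working set size over a sliding window and counts LRU page faults; it also exhibits the intuition that a larger cache yields fewer faults.

```python
from collections import OrderedDict

def working_set_sizes(requests, window):
    """Denning's working set: number of distinct pages referenced
    within the last `window` requests, at each point in time."""
    return [len(set(requests[max(0, i - window + 1):i + 1]))
            for i in range(len(requests))]

def lru_faults(requests, cache_size):
    """Number of page faults incurred by LRU on a request sequence."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
        cache[page] = True
    return faults
```

On the sequence [1, 2, 3, 1, 2, 3], LRU faults 6 times with a 2-page cache but only 3 times with a 3-page cache.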

  10. A STUDY OF DEAD-RECKONING ALGORITHM FOR MECANUM WHEEL BASED MOBILE ROBOT UNDER VARIOUS TYPES OF ROAD SURFACE

    OpenAIRE

    上町, 亮介; KAMMACHI, Ryosuke

    2015-01-01

    In this paper, we describe a study of a dead-reckoning algorithm for a mecanum-wheel-based mobile robot under various types of road surface. A mecanum-wheel-based mobile robot can move omnidirectionally by exploiting tire-road surface friction; consequently, depending on the road surface condition, it is difficult to estimate an accurate self-position with the conventional dead-reckoning method. In order to overcome the inaccuracy of the conventional dead-reckoning method for mecanum wheel based mobil...

  11. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range...

  12. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching": relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii), we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  13. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web...

  14. An unbiased adaptive sampling algorithm for the exploration of RNA mutational landscapes under evolutionary pressure.

    Science.gov (United States)

    Waldispühl, Jérôme; Ponty, Yann

    2011-11-01

    The analysis of the relationship between sequences and structures (i.e., how mutations affect structures and reciprocally how structures influence mutations) is essential to decipher the principles driving molecular evolution, to infer the origins of genetic diseases, and to develop bioengineering applications such as the design of artificial molecules. Because their structures can be predicted from the sequence data only, RNA molecules provide a good framework to study this sequence-structure relationship. We recently introduced a suite of algorithms called RNAmutants which allows a complete exploration of RNA sequence-structure maps in polynomial time and space. Formally, RNAmutants takes an input sequence (or seed) to compute the Boltzmann-weighted ensembles of mutants with exactly k mutations, and samples mutations from these ensembles. However, this approach suffers from major limitations. Indeed, since the Boltzmann probabilities of the mutations depend on the free energy of the structures, RNAmutants has difficulty sampling mutant sequences with low G+C-contents. In this article, we introduce an unbiased adaptive sampling algorithm that enables RNAmutants to sample regions of the mutational landscape poorly covered by classical algorithms. We applied these methods to sample mutations with low G+C-contents. These adaptive sampling techniques can be easily adapted to explore other regions of the sequence and structural landscapes which are difficult to sample. Importantly, these algorithms come at a minimal computational cost. We demonstrate the insights offered by these techniques on studies of complete RNA sequence-structure maps of sizes up to 40 nucleotides. Our results indicate that the G+C-content has a strong influence on the size and shape of the evolutionarily accessible sequence and structural spaces. In particular, we show that low G+C-contents favor the appearance of internal loops and thus possibly the synthesis of tertiary structure motifs.

  15. A New Fault Diagnosis Algorithm for PMSG Wind Turbine Power Converters under Variable Wind Speed Conditions

    Directory of Open Access Journals (Sweden)

    Yingning Qiu

    2016-07-01

    Full Text Available Although Permanent Magnet Synchronous Generator (PMSG) wind turbines (WTs) mitigate gearbox impacts, they require high reliability of generators and converters. Statistical analysis shows that the failure rates of direct-drive PMSG wind turbines' generators and inverters are high. Intelligent fault diagnosis algorithms to detect inverter faults are a prerequisite for condition monitoring systems aimed at improving wind turbines' reliability and availability. The influences of random wind speed and diversified control strategies pose challenges for developing intelligent fault diagnosis algorithms for converters. This paper studies open-circuit fault features of wind turbine converters in variable wind speed situations through systematic simulation and experiment. A new fault diagnosis algorithm named Wind Speed Based Normalized Current Trajectory is proposed and used to accurately detect and locate the faulted IGBT in the circuit arms. It is compared to direct current monitoring and current vector trajectory pattern approaches. The results show that the proposed method has advantages in the accuracy of fault diagnosis and has superior anti-noise capability in variable wind speed situations. The impact of the control strategy is also identified. Experimental results demonstrate its applicability to practical WT condition monitoring systems, which are used to improve wind turbine reliability and reduce maintenance cost.

  16. On an algorithm of data compressian under filmless pickup of data from streamer chambers

    International Nuclear Information System (INIS)

    Ososkov, G.A.; Perelygin, S.P.; Prikhod'ko, V.I.; Ton, T.; Chelnokova, V.V.

    1978-01-01

    A primary data compression algorithm is discussed which features a feasible loss of accuracy during reconstruction of event geometry. The most effective methods for solving the data compression problem are residual classes calculus and contour following. An approach towards the residual classes calculus operation is suggested, which consists of two stages of digitized data processing. First, the Cartesian coordinate system with 2^13 x 2^9 samples along the X and Y axes, respectively, is transformed into a system with 2^8 x 2^4 samples. The second stage can be fulfilled in either of two ways. The first is simple sorting of all transformed coordinates of tracks and interference; this algorithm can be implemented in two 2^8 x 2^5 bit matrices (one per TV camera), resulting in 13-15-fold compression. The second is track following with calculation of the X increment per line at the end of each slice; this algorithm needs a set of followers, which can be implemented in a 16x100 bit matrix, and also requires a controller comprising some 100 medium-scale ICs. This enables 20-30-fold compression of the data.

  17. A Heuristic Algorithm for Constrained Multi-Source Location Problem with Closest Distance under Gauge: The Variational Inequality Approach

    Directory of Open Access Journals (Sweden)

    Jian-Lin Jiang

    2013-01-01

    Full Text Available This paper considers the locations of multiple facilities in the space, with the aim of minimizing the sum of weighted distances between facilities and regional customers, where the proximity between a facility and a regional customer is evaluated by the closest distance. Because facilities are usually allowed to be sited only in certain restricted areas, locational constraints are imposed on the facilities of our problem. In addition, since the symmetry of distances is sometimes violated in practical situations, the gauge is employed in this paper instead of the frequently used norms for measuring both symmetric and asymmetric distances. In the spirit of the Cooper algorithm (Cooper, 1964), a new location-allocation heuristic algorithm is proposed to solve this problem. In the location phase, the single-source subproblem with regional demands is reformulated into an equivalent linear variational inequality (LVI), and a projection-contraction (PC) method is then adopted to find the optimal locations of facilities, whereas in the allocation phase the regional customers are allocated to facilities according to the nearest center reclassification (NCR). The convergence of the proposed algorithm is proved under mild assumptions. Some preliminary numerical results are reported to show the effectiveness of the new algorithm.
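The location-allocation scheme described above alternates a location phase and an allocation phase. A heavily simplified sketch, assuming point customers, Euclidean distances, and Weiszfeld iteration in place of the paper's LVI/projection-contraction machinery:

```python
import math
import random

def weiszfeld(points, iters=100):
    """Weiszfeld iteration for the single-facility Weber point
    (minimizes the sum of Euclidean distances to the points)."""
    x = (sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points))
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for p in points:
            d = math.dist(x, p) or 1e-12   # guard against division by zero
            num_x += p[0] / d
            num_y += p[1] / d
            den += 1.0 / d
        x = (num_x / den, num_y / den)
    return x

def cooper(points, k, iters=20, seed=0):
    """Cooper-style alternation: allocate each customer to its nearest
    facility, then relocate each facility to its group's Weber point."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centres[c]))
            groups[j].append(p)
        centres = [weiszfeld(g) if g else centres[j]
                   for j, g in enumerate(groups)]
    return centres
```

On two well-separated clusters of customers, the alternation places one facility near the centre of each cluster.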

  18. Reducing the disk I/O of Web proxy server caches

    CERN Document Server

    Maltzahn, C G; Grunwald, D

    1999-01-01

    The dramatic increase of HTTP traffic on the Internet has resulted in widespread use of large caching proxy servers as critical Internet infrastructure components. With continued growth, the demand for larger caches and higher-performance proxies grows as well. The common bottleneck of large caching proxy servers is disk I/O. We evaluate ways to reduce the amount of required disk I/O. First we compare the file system interactions of two existing Web proxy servers, CERN and SQUID. Then we show how design adjustments to the current SQUID cache architecture can dramatically reduce disk I/O. Our findings suggest that two strategies can significantly reduce disk I/O: preserve the locality of the HTTP reference stream while translating these references into cache references, and use virtual memory instead of the file system for objects smaller than the system page size. The evaluated techniques reduced disk I/O by 50% to 70%. (33 refs).
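One of the two strategies above, keeping objects smaller than the system page size in memory rather than in the file system, can be caricatured as follows. This is a toy sketch, not SQUID's actual architecture; `HybridCache` and its layout are invented for illustration:

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE  # system page size

class HybridCache:
    """Toy cache: objects smaller than a page stay in memory,
    larger objects are spilled to one file each."""
    def __init__(self):
        self.small = {}                # key -> bytes kept in memory
        self.dir = tempfile.mkdtemp()  # backing store for large objects

    def put(self, key, data):
        if len(data) < PAGE:
            self.small[key] = data
        else:
            with open(os.path.join(self.dir, str(key)), "wb") as f:
                f.write(data)

    def get(self, key):
        if key in self.small:
            return self.small[key]
        with open(os.path.join(self.dir, str(key)), "rb") as f:
            return f.read()
```

The point of the size threshold is that reading a sub-page object from a file costs at least one page of disk I/O anyway, so such objects are cheaper to keep in (pageable) memory.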

  19. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one main problem in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model under distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies given the good results obtained from the first moments of deployment.

  20. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available; a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (an available area of location-dependent data) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
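The "scope" and "mobility specification" ideas above can be sketched as a simple prefetch predicate. The names and the straight-line mobility model below are invented for illustration and are not the paper's actual formulation:

```python
import math

def predicted_positions(pos, velocity, horizon, step=1.0):
    """Mobility specification: straight-line prediction of where the
    vehicle will be at each time step up to the horizon."""
    positions = []
    t = 0.0
    while t <= horizon:
        positions.append((pos[0] + velocity[0] * t,
                          pos[1] + velocity[1] * t))
        t += step
    return positions

def should_prefetch(scope, pos, velocity, horizon):
    """Prefetch a broadcast item iff its scope, a (centre, radius) disc,
    covers some predicted position within the horizon."""
    centre, radius = scope
    return any(math.dist(p, centre) <= radius
               for p in predicted_positions(pos, velocity, horizon))
```

An item whose scope lies along the predicted path is cached when it appears on the broadcast channel; items off the path are skipped.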

  1. Improved Space Bounds for Cache-Oblivious Range Reporting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Zeh, Norbert

    2011-01-01

    We provide improved bounds on the size of cache-oblivious range reporting data structures that achieve the optimal query bound of O(log_B N + K/B) block transfers. Our first main result is an O(N sqrt(log N log log N))-space data structure that achieves this query bound for 3-d dominance reporting and 2-d three-sided range reporting. No cache-oblivious o(N log N / log log N)-space data structure for these problems was known before, even when allowing a query bound of O(log^{O(1)} N + K/B) block transfers. Our result also implies improved space bounds for general 2-d and 3-d orthogonal range reporting. Our second main result shows that any cache-oblivious 2-d three-sided range reporting data structure with the optimal query bound has to use Ω(N log^ε N) space, thereby improving on a recent lower bound for the same problem. Using known transformations, the lower bound extends to 3-d dominance reporting and 3...

  2. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Kenya Sato

    2007-05-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications, in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC has a very small communication area, high-performance digital broadcasting for mobile hosts has recently become available; a one-segment digital terrestrial broadcasting service was launched in Japan in 2006. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using “scope” (an available area of location-dependent data) and “mobility specification” (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.

  3. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    Energy Technology Data Exchange (ETDEWEB)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  4. Modified Covariance Matrix Adaptation – Evolution Strategy algorithm for constrained optimization under uncertainty, application to rocket design

    Directory of Open Access Journals (Sweden)

    Chocat Rudy

    2015-01-01

    Full Text Available The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to efficiently handle the constraints in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified. The constraint handling method reduces the semi-principal axes of the probable search ellipsoid in the directions violating the constraints. The proposed approach is compared to existing approaches on three analytic optimization problems to highlight the efficiency and robustness of the algorithm. The proposed method is used to design a two-stage solid propulsion launch vehicle.

  5. Generation of synthetic surface electromyography signals under fatigue conditions for varying force inputs using feedback control algorithm.

    Science.gov (United States)

    Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S

    2017-11-01

    Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. There are several techniques to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution, volume conductor relations, the feedback control algorithm for rate coding and generation of firing pattern. The result shows that synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, surface electromyography signals have higher amplitude and lower frequency under fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.

  6. Solving the Bilevel Facility Location Problem under Preferences by a Stackelberg-Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    José-Fernando Camacho-Vallejo

    2014-01-01

    Full Text Available This research highlights the use of game theory to solve the classical problem of the uncapacitated facility location optimization model with customer order preferences through a bilevel approach. The bilevel model provided herein consists of the classical facility location problem and an optimization of the customer preferences, which are the upper and lower level problems, respectively. Also, two reformulations of the bilevel model are presented, reducing it into a mixed-integer single-level problem. An evolutionary algorithm based on the equilibrium in a Stackelberg’s game is proposed to solve the bilevel model. Numerical experimentation is performed in this study and the results are compared to benchmarks from the existing literature on the subject in order to emphasize the benefits of the proposed approach in terms of solution quality and estimation time.

  7. Name-letter branding under scrutiny: real products, new algorithms, and the probability of buying.

    Science.gov (United States)

    Stieger, Stefan

    2010-06-01

    People like letters matching their own first and last name initials more than nonname letters. This name-letter effect has also been found for brands, i.e., people like brands resembling their own name letters (initial or first three); this has been termed the name-letter branding effect. In the present study of 199 participants, ages 12 to 79 years, this name-letter branding effect was found for a modified design (1) using real products, (2) concentrating on product names rather than brand names, (3) using five different products for each letter of the Roman alphabet, (4) asking for the buying probability, and (5) using recently introduced algorithms, controlling for individual response tendencies (i.e., liking all letters more or less) and the general normative popularity of particular letters (i.e., some letters are generally preferred more than other letters).
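The abstract mentions recently introduced algorithms that control for individual response tendencies and the general popularity of letters, but does not spell them out. The following generic double-correction sketch (invented function and variable names) illustrates the idea: subtract each rater's mean liking and each letter's relative popularity before averaging over the rater's initials:

```python
def name_letter_scores(ratings, initials):
    """Generic double-correction sketch. `ratings` maps each person to a
    dict of letter-liking ratings (every person rates every letter);
    `initials` maps each person to their name letters. Each rating is
    corrected for the rater's overall response tendency (their mean
    rating) and for the letter's relative popularity across raters."""
    letters = sorted(next(iter(ratings.values())))
    popularity = {L: sum(r[L] for r in ratings.values()) / len(ratings)
                  for L in letters}
    mean_popularity = sum(popularity.values()) / len(popularity)
    scores = {}
    for person, r in ratings.items():
        own_mean = sum(r.values()) / len(r)
        scores[person] = sum(
            r[L] - own_mean - (popularity[L] - mean_popularity)
            for L in initials[person]) / len(initials[person])
    return scores
```

A positive score means the person likes their initials more than their own baseline and the letters' general popularity would predict.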

  8. Hybrid Electromagnetism-Like Algorithm for Dynamic Supply Chain Network Design under Traffic Congestion and Uncertainty

    Directory of Open Access Journals (Sweden)

    Javid Jouzdani

    2016-01-01

    Full Text Available With the constantly increasing pressure of the competitive environment, supply chain (SC) decision makers are forced to consider several aspects of the business climate. More specifically, they should take into account endogenous features (e.g., the available means of transportation and the variety of products) and exogenous criteria (e.g., environmental uncertainty and transportation system conditions). In this paper, a mixed integer nonlinear programming (MINLP) model for the dynamic design of a supply chain network is proposed. In this model, multiple products and multiple transportation modes, the time value of money, traffic congestion, and both supply-side and demand-side uncertainties are considered. Due to the complexity of such models, conventional solution methods are not applicable; therefore, two hybrid Electromagnetism-Like Algorithms (EMA) are designed and discussed for tackling the problem. The numerical results show the applicability of the proposed model and the capabilities of the solution approaches to the MINLP problem.

  9. Design optimization under uncertainties of a mesoscale implant in biological tissues using a probabilistic learning algorithm

    Science.gov (United States)

    Soize, C.

    2017-11-01

    This paper deals with the optimal design of a titanium mesoscale implant in a cortical bone for which the apparent elasticity tensor is modeled by a non-Gaussian random field at mesoscale, which has been experimentally identified. The external applied forces are also random. The design parameters are geometrical dimensions related to the geometry of the implant. The stochastic elastostatic boundary value problem is discretized by the finite element method. The objective function and the constraints are related to normal, shear, and von Mises stresses inside the cortical bone. The constrained nonconvex optimization problem in presence of uncertainties is solved by using a probabilistic learning algorithm that allows for considerably reducing the numerical cost with respect to the classical approaches.

  10. Novel models and algorithms of load balancing for variable-structured collaborative simulation under HLA/RTI

    Science.gov (United States)

    Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng

    2013-07-01

    High Level Architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study of the load balancing problem in HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may suffer performance degradation or even fatal errors. In this paper, load balancing is divided into static and dynamic problems. For static load balancing, a multi-objective model is established and the randomness of model parameters is taken into consideration, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on the collaborative simulation under HLA/RTI of high speed electric multiple units (EMU) is also conducted to verify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of collaborative simulation systems, and makes full use of hardware resources.
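The static problem above assigns simulation loads to computing units. A minimal random-search sketch in the spirit of a Monte Carlo based optimization (the paper's MCOA is multi-objective and far more elaborate; this toy version minimizes only the maximum unit load):

```python
import random

def monte_carlo_balance(loads, n_units, samples=2000, seed=1):
    """Sample random assignments of loads to units and keep the one
    with the smallest maximum unit load (the makespan)."""
    rng = random.Random(seed)
    best_assign, best_makespan = None, float("inf")
    for _ in range(samples):
        assign = [rng.randrange(n_units) for _ in loads]
        unit_load = [0.0] * n_units
        for load, u in zip(loads, assign):
            unit_load[u] += load
        if max(unit_load) < best_makespan:
            best_assign, best_makespan = assign, max(unit_load)
    return best_assign, best_makespan
```

For a small instance such as loads [4, 3, 3, 2, 2, 2] on 2 units, random sampling quickly finds a perfectly balanced assignment with makespan 8.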

  11. Closed-Form Algorithm for 3-D Near-Field OFDM Signal Localization under Uniform Circular Array.

    Science.gov (United States)

    Su, Xiaolong; Liu, Zhen; Chen, Xin; Wei, Xizhang

    2018-01-14

    Due to its widespread application in communications, radar, etc., localization of the orthogonal frequency division multiplexing (OFDM) signal has become increasingly important. Under uniform circular array (UCA) and near-field conditions, this paper presents a closed-form algorithm based on phase differences for estimating the three-dimensional (3-D) location (azimuth angle, elevation angle, and range) of an OFDM signal. Considering that the frequencies of the OFDM signal's subcarriers are difficult to distinguish and that phase-based methods are affected by frequency-estimation errors, the algorithm employs sparse representation (SR) to obtain super-resolution frequencies and the corresponding phases of the subcarriers. Further, as the phase differences of adjacent sensors involving the azimuth angle, elevation angle, and range parameters can be expressed as indefinite equations, the near-field OFDM signal's 3-D location is obtained with the least squares method, where the phase differences are based on the average over the estimated subcarriers. Finally, the performance of the proposed algorithm is demonstrated by several simulations.

  12. Developing a Novel Hybrid Biogeography-Based Optimization Algorithm for Multilayer Perceptron Training under Big Data Challenge

    Directory of Open Access Journals (Sweden)

    Xun Pu

    2018-01-01

    Full Text Available A Multilayer Perceptron (MLP) is a feedforward neural network model consisting of one or more hidden layers between the input and output layers. MLPs have been successfully applied to solve a wide range of problems in the fields of neuroscience, computational linguistics, and parallel distributed processing. While MLPs are highly successful in solving problems which are not linearly separable, two of the biggest challenges in their development and application are the local-minima problem and the problem of slow convergence under the big data challenge. In order to tackle these problems, this study proposes a Hybrid Chaotic Biogeography-Based Optimization (HCBBO) algorithm for training MLPs for big data analysis and processing. Four benchmark datasets are employed to investigate the effectiveness of HCBBO in training MLPs. The accuracy of the results and the convergence of HCBBO are compared to three well-known heuristic algorithms: (a) Biogeography-Based Optimization (BBO), (b) Particle Swarm Optimization (PSO), and (c) Genetic Algorithms (GA). The experimental results show that training MLPs by using HCBBO is better than the other three heuristic learning approaches for big data processing.
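The hybrid chaotic variant (HCBBO) is not specified in this abstract. For orientation, the following is a plain, generic BBO sketch: habitats are ranked by fitness, worse habitats immigrate features from better ones, and random mutation maintains diversity; the best habitat is kept unchanged each generation. It is shown minimizing a test function rather than training an MLP:

```python
import random

def bbo_minimize(fitness, dim, pop_size=20, iters=200, seed=0,
                 bounds=(-5.0, 5.0)):
    """Plain Biogeography-Based Optimization sketch (no chaotic or
    hybrid components): migration from better habitats plus mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=fitness)                # rank habitats; pop[0] is best
        for i in range(1, pop_size):         # immigration rate grows with rank
            lam = i / pop_size
            for d in range(dim):
                if rng.random() < lam:
                    src = rng.randrange(i)   # emigrate from a better habitat
                    pop[i][d] = pop[src][d]
            if rng.random() < 0.05:          # occasional random mutation
                pop[i][rng.randrange(dim)] = rng.uniform(lo, hi)
    return min(pop, key=fitness)
```

Because the best habitat is never modified, the best fitness found is monotonically non-increasing over generations.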

  13. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical locations, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of the GPS in smartphones you can go directly into nature, search for information on your smartphone, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about physical-geography highlights, are a very good opportunity for this. Interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, we focused on Bavaria and on a certain type of earth cache. At the end the authors show the limits and potentials of the use of earth caches and give some remarks for the future.

  14. A morphometric assessment of the intended function of cached Clovis points.

    Directory of Open Access Journals (Sweden)

    Briggs Buchanan

    Full Text Available A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: (1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and (2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.

  15. A morphometric assessment of the intended function of cached Clovis points.

    Science.gov (United States)

    Buchanan, Briggs; Kilby, J David; Huckell, Bruce B; O'Brien, Michael J; Collard, Mark

    2012-01-01

    A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.
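
The second prediction (a shared allometric trajectory) is usually tested by comparing the slopes of log-log regressions of one dimension on another for the two groups. A minimal sketch of that comparison on hypothetical measurements (the numbers below are made up; the study's actual Clovis data and shape variables differ):

```python
import numpy as np

def allometric_slope(length_mm, width_mm):
    """OLS slope of log(width) on log(length): the allometric exponent."""
    x = np.log(np.asarray(length_mm, dtype=float))
    y = np.log(np.asarray(width_mm, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)
    return float(slope)

# Hypothetical point measurements in mm: cached points larger overall,
# kill/camp-site points smaller.
cache_len = [120, 140, 160, 180, 200]
cache_wid = [30, 34, 38, 41, 45]
site_len = [60, 70, 80, 90, 100]
site_wid = [16, 18, 20, 22, 24]

b_cache = allometric_slope(cache_len, cache_wid)
b_site = allometric_slope(site_len, site_wid)
print(round(b_cache, 2), round(b_site, 2))
```

Similar slopes (with overlapping confidence intervals in a real analysis) are consistent with the two samples lying on one allometric trajectory, as the study reports.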

  16. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Directory of Open Access Journals (Sweden)

    Muhammad Hashim

    2017-05-01

    Full Text Available Purpose: The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem, and a corresponding model is developed to tackle it. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost and maximizing sustainable service and greener product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a contribution by introducing the Tetra ‘S’ concept in both the theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in the present research should be helpful for

  17. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-07-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem, and a corresponding model is developed to tackle it. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost and maximizing sustainable service and greener product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a contribution by introducing the Tetra ‘S’ concept in both the theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in the present research should be helpful for solving real world

  18. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    International Nuclear Information System (INIS)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-01-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem, and a corresponding model is developed to tackle it. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost and maximizing sustainable service and greener product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a contribution by introducing the Tetra ‘S’ concept in both the theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in the present research should be helpful for solving real world
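
The genetic algorithm with a weighted-sum approach that these records describe can be sketched in miniature: two objectives (minimize cost, maximize quality) are scalarized into one fitness value and optimized by a simple GA. The supplier coefficients and objective weights below are hypothetical stand-ins, not the paper's textile-sector data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical order-allocation problem: x[i] is the order share given to
# supplier i.  cost[] and quality[] are made-up coefficients.
cost = np.array([4.0, 5.5, 3.5])      # lean cost per unit share (minimize)
quality = np.array([0.7, 0.9, 0.6])   # greener-quality score (maximize)

def fitness(x, w_cost=0.5, w_qual=0.5):
    x = np.abs(x) / np.abs(x).sum()   # normalize to a valid allocation
    # weighted-sum scalarization: minimize cost, maximize quality
    return w_cost * (cost @ x) - w_qual * (quality @ x)

pop = rng.random((40, 3))
for gen in range(200):
    f = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(f)[:20]]             # truncation selection
    a = parents[rng.integers(0, 20, 20)]
    b = parents[rng.integers(0, 20, 20)]
    children = (a + b) / 2 + rng.normal(0, 0.05, (20, 3))  # crossover + mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(x) for x in pop])]
share = np.abs(best) / np.abs(best).sum()
print(np.round(share, 2))
```

In the paper, the fuzzy coefficients are first converted to crisp values via the expected value measurement before a scalarized search like this is run; changing the weights traces out different trade-offs between lean cost and sustainability.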

  19. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic requests, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult-to-retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
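
The replacement rule the record describes (score each cached reading from the four parameters, evict the lowest-valued entry) can be sketched as follows. The weights and the exact combination function are illustrative assumptions; the paper's actual VoI formula is not given in this abstract:

```python
def voi(entry, w=(0.4, 0.3, 0.2, 0.1)):
    """Combine the four record parameters into one value score.
    The weights and the linear form are hypothetical."""
    freshness = 1.0 / (1.0 + entry["age"])        # newer data is worth more
    return (w[0] * freshness
            + w[1] * entry["popularity"]          # on-demand request rate
            + w[2] * entry["interference_cost"]   # costly to re-fetch -> keep
            + w[3] * entry["active_duration"])    # costly to re-sense -> keep

def insert(cache, key, entry, capacity=3):
    """VoI replacement: evict the least valuable entry when the cache is full."""
    if len(cache) >= capacity and key not in cache:
        victim = min(cache, key=lambda k: voi(cache[k]))
        del cache[victim]
    cache[key] = entry

cache = {}
insert(cache, "hr",   {"age": 1, "popularity": 0.9, "interference_cost": 0.2, "active_duration": 0.3})
insert(cache, "spo2", {"age": 5, "popularity": 0.1, "interference_cost": 0.1, "active_duration": 0.1})
insert(cache, "ecg",  {"age": 2, "popularity": 0.8, "interference_cost": 0.6, "active_duration": 0.7})
insert(cache, "temp", {"age": 0, "popularity": 0.5, "interference_cost": 0.4, "active_duration": 0.2})
print(sorted(cache))  # → ['ecg', 'hr', 'temp'] — the low-value "spo2" was evicted
```

Unlike LRU or LFU, the victim here is chosen by the combined value score, so a stale but hard-to-reacquire reading can outlive a fresh but cheap one.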

  20. Jacobo Machover, La face cachée du Che

    OpenAIRE

    Boisard, Stéphane

    2013-01-01

    Attacking myths is a Promethean task, and Jacobo Machover, in his book on La face cachée du Che, learns this the hard way. While the author deserves credit for casting an uncompromising eye on the emblematic figure of Ernesto Guevara de la Serna, one must also ask why his undertaking fails. In his defence, and as the comments prompted by the book (either laudatory or abusive) confirm, it is not easy to offer a critical reading ...

  1. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as the "Memory Wall" problem, stems from the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to devote more than 50% of the chip real estate to caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  2. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that SCC will contribute an efficient solution to the related studies. PMID:29104219

  3. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that SCC will contribute an efficient solution to the related studies.

  4. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network can hardly meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that SCC will contribute an efficient solution to the related studies.

  5. Resource-Constrained Project Scheduling Under Uncertainty: Models, Algorithms and Applications

    Science.gov (United States)

    2014-11-10

    Make-to-Order (MTO) Production Planning using Bayesian Updating, International Journal of Production Economics (04 2014), Norman Keith Womer, Haitao...2013) Made-to-Order Production Scheduling using Bayesian Updating, Working Paper, under second-round review in International Journal of Production Economics.

  6. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    Full Text Available In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using the Web APIs of the Google Drive, Dropbox and Facebook Internet services.
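
The record does not detail the construction, but one classic cache-header channel works by modulating whether a shared resource revalidates against the receiver's cached copy: a 304 Not Modified (cache hit) carries a 1 bit, a full 200 response (the ETag changed) carries a 0 bit. The sketch below models only the header logic in memory; the named services' actual Web APIs and network behavior are not modeled:

```python
def send_bit(bit, resource):
    """Sender side: mutate the shared resource to force revalidation for a 0-bit."""
    if bit == 0:
        resource["etag"] = resource["etag"] + "x"   # content "changed"
    return resource

def observe_bit(resource, cached_etag):
    """Receiver side: a conditional GET with If-None-Match.
    Matching ETags would yield 304 Not Modified -> bit 1; otherwise 200 -> bit 0."""
    return 1 if resource["etag"] == cached_etag else 0

def transmit(bits):
    resource = {"etag": "v1"}
    received = []
    for b in bits:
        cached = resource["etag"]      # receiver holds the last seen ETag
        send_bit(b, resource)
        received.append(observe_bit(resource, cached))
    return received

print(transmit([1, 0, 1, 1, 0]))  # → [1, 0, 1, 1, 0]
```

Timing variants of the same idea use the latency difference between a cached and an uncached response rather than the status code, which is what makes the channel a timing channel.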

  7. Prospective thinking in a mustelid? Eira barbara (Carnivora) cache unripe fruits to consume them once ripened

    Science.gov (United States)

    Soley, Fernando G.; Alvarado-Díaz, Isaías

    2011-08-01

    The ability of nonhuman animals to project individual actions into the future is a hotly debated topic. We describe the caching behaviour of tayras (Eira barbara) based on direct observations in the field, pictures from camera traps and radio telemetry, providing evidence that these mustelids pick and cache unripe fruit for future consumption. This is the first reported case of harvesting of unripe fruits by a nonhuman animal. Ripe fruits are readily taken by a variety of animals, and tayras might benefit by securing a food source before strong competition takes place. Unripe climacteric fruits need to be harvested when mature to ensure that they continue their ripening process, and tayras accurately choose mature stages of these fruits for caching. Tayras cache both native (sapote) and non-native (plantain) fruits that differ in morphology and developmental timeframes, showing sophisticated cognitive ability that might involve highly developed learning abilities and/or prospective thinking.

  8. Optimization of wear behavior of electroless Ni-P-W coating under dry and lubricated conditions using genetic algorithm (GA)

    Directory of Open Access Journals (Sweden)

    Arkadeb Mukhopadhyay

    2016-12-01

    Full Text Available The present study aims to investigate the tribological behavior of Ni-P-W coating under dry and lubricated conditions. The coating is deposited onto mild steel (AISI 1040) specimens by the electroless method using a sodium hypophosphite-based alkaline bath. Coating characterization is done to investigate the effect of microstructure on its performance. The change in microhardness is observed to be quite significant after annealing the deposits at 400°C for 1 h. A pin-on-disc type tribo-tester is used to investigate the tribological behavior of the coating under dry and lubricated conditions. The experimental design formulation is based on Taguchi’s orthogonal array. The design parameters considered are the applied normal load, sliding speed and sliding duration, while the response parameter is wear depth. Multiple regression analysis is employed to obtain a quadratic model of the response variable in terms of the main design parameters under consideration. High coefficients of determination of 95.3% and 87.5% for wear depth are obtained under dry and lubricated conditions, respectively, which indicates good correlation between the experimental results and the multiple regression models. Analysis of variance at a confidence level of 95% shows that the models are statistically significant. Finally, the quadratic equations are used as objective functions to obtain the optimal combination of tribo-testing parameters for minimum wear depth using a genetic algorithm (GA).
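
The first half of the pipeline the record describes, fitting a quadratic response-surface model of wear depth to the design parameters, can be sketched via least squares. The data below are synthetic (generated from a made-up quadratic law plus noise, so the fit quality can be checked); the study's actual Taguchi-array measurements differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical wear-test data: (load N, speed m/s, duration min) -> wear depth (um)
L = rng.uniform(10, 50, 40)
S = rng.uniform(0.5, 2.0, 40)
T = rng.uniform(5, 30, 40)
wear = (2 + 0.08 * L + 1.5 * S + 0.12 * T
        + 0.002 * L ** 2 - 0.3 * S ** 2
        + rng.normal(0, 0.2, 40))              # measurement noise

# Quadratic response-surface model: wear ~ 1, L, S, T, L^2, S^2, T^2
X = np.column_stack([np.ones_like(L), L, S, T, L ** 2, S ** 2, T ** 2])
coef, *_ = np.linalg.lstsq(X, wear, rcond=None)

pred = X @ coef
ss_res = np.sum((wear - pred) ** 2)
ss_tot = np.sum((wear - wear.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                       # coefficient of determination
print(round(r2, 3))
```

The fitted polynomial then serves as the objective function that the study minimizes with a GA to find the load/speed/duration combination giving minimum wear depth.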

  9. A novel vibration-based fault diagnostic algorithm for gearboxes under speed fluctuations without rotational speed measurement

    Science.gov (United States)

    Hong, Liu; Qu, Yongzhi; Dhupia, Jaspreet Singh; Sheng, Shuangwen; Tan, Yuegang; Zhou, Zude

    2017-09-01

    The localized failures of gears introduce cyclic-transient impulses in the measured gearbox vibration signals. These impulses are usually identified from the sidebands around gear-mesh harmonics through the spectral analysis of cyclo-stationary signals. However, in practice, several high-powered applications of gearboxes like wind turbines are intrinsically characterized by nonstationary processes that blur the measured vibration spectra of a gearbox and deteriorate the efficacy of spectral diagnostic methods. Although order-tracking techniques have been proposed to improve the performance of spectral diagnosis for nonstationary signals measured in such applications, the required hardware for the measurement of rotational speed of these machines is often unavailable in industrial settings. Moreover, existing tacho-less order-tracking approaches are usually limited by the high time-frequency resolution requirement, which is a prerequisite for the precise estimation of the instantaneous frequency. To address such issues, a novel fault-signature enhancement algorithm is proposed that can alleviate the spectral smearing without the need of rotational speed measurement. This proposed tacho-less diagnostic technique resamples the measured acceleration signal of the gearbox based on the optimal warping path evaluated from the fast dynamic time-warping algorithm, which aligns a filtered shaft rotational harmonic signal with respect to a reference signal assuming a constant shaft rotational speed estimated from the approximation of operational speed. The effectiveness of this method is validated using both simulated signals from a fixed-axis gear pair under nonstationary conditions and experimental measurements from a 750-kW planetary wind turbine gearbox on a dynamometer test rig. The results demonstrate that the proposed algorithm can identify fault information from typical gearbox vibration measurements carried out in a resource-constrained industrial environment.
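
The core tool the record names, dynamic time warping to find the optimal alignment between the measured signal and a constant-speed reference, can be sketched with the classic O(nm) algorithm below (the paper uses a fast DTW variant and real gearbox signals; the drifting sine here is just a stand-in for a speed fluctuation):

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping: returns total cost and the optimal warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path from the corner
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(D[n, m]), path[::-1]

# A tone whose frequency drifts (speed fluctuation) vs. a constant-speed reference
t = np.linspace(0, 1, 200)
reference = np.sin(2 * np.pi * 10 * t)                   # constant shaft speed
measured = np.sin(2 * np.pi * 10 * (t + 0.05 * t ** 2))  # slowly accelerating

cost, path = dtw_path(measured, reference)
print(round(cost, 2), len(path))
```

In the proposed technique, the warping path aligns a filtered shaft-harmonic component with the constant-speed reference, and the measured acceleration signal is then resampled along that path so that spectral analysis sees an effectively constant rotational speed.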

  10. dCache: Big Data storage for HEP communities and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [DESY; Behrmann, G. [Unlisted, DK; Bernardt, C. [DESY; Fuhrmann, P. [DESY; Litvintsev, D. [Fermilab; Mkrtchyan, T. [DESY; Petersen, A. [DESY; Rossi, A. [Fermilab; Schwank, K. [DESY

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of continually evolving storage technologies, with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as alternatives to SRM for managing data, and integration with alternative authentication mechanisms.

  11. A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network

    OpenAIRE

    Li, Yang; Xu, Yuemei; Lin, Tao; Wang, Xiaohui; Ci, Song

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches so that they store contents in a coordinated manner, in order to increase the overall mobile network capacity and support a greater number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic by request filtration and asynchronous multicast in a R...
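
The request-filtration idea the record mentions (suppress duplicate requests for content that is already being fetched, then answer all waiters with one multicast) can be sketched as follows. The class and its interface are hypothetical illustrations, not the paper's design:

```python
from collections import defaultdict

class FilteringCache:
    """Edge cache that filters repeated requests: while a content fetch is in
    flight, further requests for it are queued instead of hitting the backhaul,
    and one multicast delivery answers all waiters."""

    def __init__(self):
        self.store = {}                    # cached contents
        self.pending = defaultdict(list)   # content -> users waiting on a fetch
        self.backhaul_fetches = 0

    def request(self, user, content):
        if content in self.store:
            return "hit"
        self.pending[content].append(user)
        if len(self.pending[content]) == 1:
            self.backhaul_fetches += 1     # only the first request fetches
            return "fetch"
        return "filtered"                  # duplicate suppressed

    def fetch_done(self, content, data):
        self.store[content] = data
        return self.pending.pop(content, [])   # served by one multicast

cache = FilteringCache()
print(cache.request("u1", "video42"))        # → fetch
print(cache.request("u2", "video42"))        # → filtered
print(cache.fetch_done("video42", b"..."))   # → ['u1', 'u2']
print(cache.request("u3", "video42"))        # → hit
print(cache.backhaul_fetches)                # → 1
```

Three requests for the same content cost only one backhaul transfer: one fetch plus one multicast, instead of three unicast downloads.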

  12. Milestone Report - Level-2 Milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache

    Energy Technology Data Exchange (ETDEWEB)

    Shoopman, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Description: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015 subsystems with 6PB of disk cache were deployed for production use in LLNL’s unclassified HPSS environment. Following that, on September 23, 2015 subsystems with 9 PB of disk cache were deployed for production use in LLNL’s classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL’s unclassified and classified HPSS environments, and only the newly deployed systems were in use.

  13. Numerical simulations of bubble motion in a vibrated cell under microgravity using level set and VOF algorithms.

    Science.gov (United States)

    Friesen, Timothy J; Takahira, Hiroyuki; Allegro, Lisa; Yasuda, Yoshitaka; Kawaji, Masahiro

    2002-10-01

    Understanding the stability of fluid interfaces subjected to small vibrations under microgravity conditions is important for designing future materials science experiments to be conducted aboard orbiting spacecraft. During the STS-85 mission, experiments investigating the motion of a large bubble resulting from small, controlled vibrations were performed aboard the Space Shuttle Discovery. To better understand the experimental results, two- and three-dimensional simulations of the experiment were performed using level set and volume-of-fluid interface tracking algorithms. The simulations proved capable of predicting accurately the experimentally determined bubble translation behavior. Linear dependence of the bubble translation amplitude on the container translation amplitude was confirmed. In addition, the simulation model was used to confirm predictions of a theoretical inviscid model of bubble motion developed in a previous study.

  14. The Performance Evaluation of an IEEE 802.11 Network Containing Misbehavior Nodes under Different Backoff Algorithms

    Directory of Open Access Journals (Sweden)

    Trong-Minh Hoang

    2017-01-01

    Full Text Available Security of any wireless network is always an important issue due to its serious impact on network performance. In practice, the IEEE 802.11 medium access control can be violated by several naive or smart attacks that result in degraded network performance. In recent years, several studies have used analytical models to analyze the medium access control (MAC) layer misbehavior issue, but they have focused only on binary exponential backoff. Moreover, a practical condition such as the freezing backoff issue is not included in the previous models. Hence, this paper presents a novel analytical model of the IEEE 802.11 MAC to thoroughly understand the impact of misbehaving nodes on network throughput and delay parameters. In particular, the model can express detailed backoff algorithms, so that the network performance under some typical attacks can easily be evaluated through numerical simulation.
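
The binary exponential backoff that the earlier models focus on, and the kind of misbehavior that breaks it, can be sketched in a few lines. The contention-window bounds are illustrative values; the cheater strategy shown is one simple example of MAC misbehavior, not the paper's specific attack model:

```python
import random

random.seed(0)

CW_MIN, CW_MAX = 16, 1024   # contention window bounds (illustrative)

def backoff_slots(retries):
    """Binary exponential backoff: the window doubles after each failed
    transmission attempt, capped at CW_MAX; the node then waits a uniform
    number of slots drawn from [0, cw)."""
    cw = min(CW_MIN * 2 ** retries, CW_MAX)
    return random.randrange(cw)

# A well-behaved node doubles its window on every collision.  A misbehaving
# node could instead keep cw = CW_MIN on every retry, so it tends to pick a
# smaller backoff and win the channel more often than its fair share.
honest = [backoff_slots(r) for r in range(5)]
cheater = [random.randrange(CW_MIN) for _ in range(5)]
print(honest, cheater)
```

The freezing issue the record mentions refers to a node pausing (not resetting) its backoff counter while the channel is busy, which is exactly the kind of detail the proposed analytical model adds.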

  15. A standardized algorithm for determining the underlying cause of death in HIV infection as AIDS or non-AIDS related

    DEFF Research Database (Denmark)

    Kowalska, Justyna D; Mocroft, Amanda; Ledergerber, Bruno

    2011-01-01

    … are a natural consequence of an increased awareness and knowledge in the field. To monitor and analyze changes in mortality over time, we have explored this issue within the EuroSIDA study and propose a standardized protocol unifying data collected and allowing for classification of all deaths as AIDS or non-AIDS related, including events with missing cause of death. Methods: Several classifications of the underlying cause of death as AIDS or non-AIDS related within the EuroSIDA study were compared: central classification (CC, the reference group) based on an externally standardised method (the CoDe procedures), local cohort classification (LCC) as reported by the site investigator, and 4 algorithms (ALG) created based on survival times after specific AIDS events. Results: A total of 2,783 deaths occurred, 540 CoDe forms were collected, and 488 were used to evaluate agreements. The agreement between CC and LCC …
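
The algorithms the record mentions classify a death from its survival time after a specific AIDS event. The exact thresholds used in EuroSIDA are not given in this abstract; the 12-month window below is a hypothetical illustration of the rule's shape:

```python
from datetime import date

def classify_death(death, last_aids_event, window_days=365):
    """One survival-time rule (hypothetical threshold): classify as AIDS
    related if death follows an AIDS-defining event within the window,
    otherwise as non-AIDS related."""
    if last_aids_event is None:
        return "non-AIDS related"
    survival = (death - last_aids_event).days
    if 0 <= survival <= window_days:
        return "AIDS related"
    return "non-AIDS related"

print(classify_death(date(2010, 6, 1), date(2010, 1, 15)))  # → AIDS related
print(classify_death(date(2010, 6, 1), None))               # → non-AIDS related
print(classify_death(date(2010, 6, 1), date(2005, 1, 15)))  # → non-AIDS related
```

A rule like this can classify every death, including those with a missing cause-of-death form, which is what makes it comparable against the CC and LCC classifications.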

  16. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    Energy Technology Data Exchange (ETDEWEB)

    Nicolae, Bogdan [IBM Research, Dublin, Ireland; Riteau, Pierre [University of Chicago, Chicago, IL, USA; Keahey, Kate [Argonne National Laboratory, Lemont, IL, USA

    2015-10-01

    Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks of better performance characteristics (and thus more expensive) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used both independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
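
The cache-size selection step the record describes (meet a desired performance level at minimal cost) reduces to searching candidate sizes under a prediction model. Both curves below are made-up assumptions standing in for the paper's prediction methodology:

```python
def predicted_hit_ratio(cache_gb, working_set_gb=100.0):
    """Hypothetical prediction: hit ratio grows with cache size with
    diminishing returns (a Zipf-like workload assumption)."""
    frac = min(cache_gb / working_set_gb, 1.0)
    return frac ** 0.5

def hourly_cost(cache_gb, price_per_gb=0.02):
    """Hypothetical linear pricing for the premium (fast) disk tier."""
    return cache_gb * price_per_gb

def cheapest_cache(target_hit_ratio, sizes_gb):
    """Smallest (cheapest) cache size whose predicted hit ratio meets the target."""
    for size in sorted(sizes_gb):
        if predicted_hit_ratio(size) >= target_hit_ratio:
            return size, hourly_cost(size)
    return None

size, cost = cheapest_cache(0.8, [10, 20, 40, 80, 100])
print(size, cost)  # → 80 1.6
```

Swapping in a measured throughput model for `predicted_hit_ratio` turns this into the trade-off estimation the authors describe: for any desired performance level, it reports the minimal extra spend on the short-lived fast disks.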

  17. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows to freely select the degree at which data locality is provided. It may be used to work in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during run 2. The system has proven to be easily usable, while providing substantial performance improvements. Since confirming the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.

  18. Balance control of grid currents for UPQC under unbalanced loads based on matching-ratio compensation algorithm

    DEFF Research Database (Denmark)

    Zhao, Xiaojun; Zhang, Chunjiang; Chai, Xiuhui

    2018-01-01

    In three-phase four-wire systems, unbalanced loads can cause grid currents to be unbalanced, and this may cause the neutral point potential on the grid side to shift. The neutral point potential shift will worsen the control precision as well as the performance of the three-phase four-wire unified...... power quality conditioner (UPQC), and it also leads to unbalanced three-phase output voltage, even causing damage to electric equipment. To deal with unbalanced loads, this paper proposes a matching-ratio compensation algorithm (MCA) for the fundamental active component of load currents......, and by employing this MCA, balanced three-phase grid currents can be realized under 100% unbalanced loads. The steady-state fluctuation and the transient drop of the DC bus voltage can also be restrained. This paper establishes the mathematical model of the UPQC, analyzes the mechanism of the DC bus voltage...... fluctuations, and elaborates the interaction between unbalanced grid currents and DC bus voltage fluctuations; two control strategies of UPQC under three-phase stationary coordinate based on the MCA are given, and finally, the feasibility and effectiveness of the proposed control strategy are verified...

  19. Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm

    Science.gov (United States)

    Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung

    2018-04-01

    Neural networks (NN) have been widely used for fatigue life prediction. For polymeric-base composites, an NN model must be developed that copes with the limited fatigue data available and can predict fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer-Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the respective weights of the NN for fatigue life prediction of polymeric-base composite materials under variable amplitude loading. Simulation results for two composite systems, E-glass fabrics/epoxy (layup [(±45)/(0)2]S) and E-glass/polyester (layup [90/0/±45/0]S), show that an NN model trained with fatigue data from only two stress ratios, representing limited fatigue data, can predict fatigue life at another four and seven stress ratios, respectively, with high accuracy. The accuracy of the NN prediction was quantified by the small value of the mean square error (MSE). When using 33% of the total fatigue data for training, the NN model was able to produce high accuracy for all stress ratios. Even with less fatigue data during training (22% of the total), the NN model still produced a high coefficient of determination between the predicted and experimental results.
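The idea of training MLP weights with a genetic algorithm instead of backpropagation can be sketched as follows (a toy 1-input network fitted to a synthetic quadratic target; the architecture, GA operators and data are illustrative assumptions, not the paper's setup, which fits measured S-N fatigue data):

```python
import math
import random

random.seed(0)

# Tiny MLP: 1 input -> 3 tanh hidden units -> 1 linear output.
# The 9 weights are flattened as (w_j, b_j, v_j) per hidden unit j.
def mlp(weights, x):
    out = 0.0
    for j in range(3):
        w, b, v = weights[3 * j], weights[3 * j + 1], weights[3 * j + 2]
        out += v * math.tanh(w * x + b)
    return out

# Stand-in training set (the paper uses experimental fatigue data instead).
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def mse(weights):
    return sum((mlp(weights, x) - y) ** 2 for x, y in data) / len(data)

def evolve(pop_size=40, gens=60, n_w=9):
    pop = [[random.uniform(-1, 1) for _ in range(n_w)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        parents = pop[: pop_size // 2]          # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_w)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n_w)] += random.gauss(0, 0.2)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=mse)

best = evolve()
```

Because the GA only needs the scalar fitness (here the MSE), it works even when gradients are unavailable or the data set is too small for stable gradient descent, which is the motivation the abstract gives for the limited-data regime.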

  20. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and notably science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments to successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)

  1. Joshua tree (Yucca brevifolia) seeds are dispersed by seed-caching rodents

    Science.gov (United States)

    Vander Wall, S.B.; Esque, T.; Haines, D.; Garnett, M.; Waitman, B.A.

    2006-01-01

    Joshua tree (Yucca brevifolia) is a distinctive and charismatic plant of the Mojave Desert. Although floral biology and seed production of Joshua tree and other yuccas are well understood, the fate of Joshua tree seeds has never been studied. We tested the hypothesis that Joshua tree seeds are dispersed by seed-caching rodents. We radioactively labelled Joshua tree seeds and followed their fates at five source plants in Potosi Wash, Clark County, Nevada, USA. Rodents made a mean of 30.6 caches, usually within 30 m of the base of source plants. Caches contained a mean of 5.2 seeds buried 3-30 mm deep. A variety of rodent species appears to have prepared the caches. Three of the 836 Joshua tree seeds (0.4%) cached germinated the following spring. Seed germination using rodent exclosures was nearly 15%. More than 82% of seeds in open plots were removed by granivores, and neither microsite nor supplemental water significantly affected germination. Joshua tree produces seeds in indehiscent pods or capsules, which rodents dismantle to harvest seeds. Because there is no other known means of seed dispersal, it is possible that the Joshua tree-rodent seed dispersal interaction is an obligate mutualism for the plant.

  2. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    Science.gov (United States)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, the CPU and GPU are integrated on the same chip, which poses a new challenge to last-level cache (LLC) management. In this architecture, CPU applications and GPU applications execute concurrently, accessing the last-level cache. CPU and GPU have different memory access characteristics, so they differ in their sensitivity to LLC capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. On the contrary, GPU applications can tolerate increased memory access latency when there is sufficient thread-level parallelism. Taking into account the memory latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving more LLC space for CPU applications, thereby improving the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache sensitive and the GPU application is insensitive to the cache, the overall performance of the system is improved significantly.
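The benefit of letting streaming GPU traffic bypass the shared LLC can be illustrated with a toy LRU simulation (the traces, the cache size and the fully-associative model are entirely illustrative, not the paper's experimental setup): without bypass, a no-reuse GPU stream continually evicts the CPU's small working set.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny fully-associative LRU cache (a stand-in for the shared LLC)."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
        self.hits = 0
    def access(self, addr):
        if addr in self.store:
            self.store.move_to_end(addr)
            self.hits += 1
        else:
            self.store[addr] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

def run(bypass_gpu):
    llc = LRUCache(8)
    cpu_trace = [a % 8 for a in range(64)]       # reusable CPU working set
    gpu_trace = [100 + a for a in range(64)]     # streaming GPU trace, no reuse
    for c, g in zip(cpu_trace, gpu_trace):
        llc.access(c)
        if not bypass_gpu:
            llc.access(g)                        # GPU traffic thrashes the LLC
    return llc.hits

print(run(bypass_gpu=False), run(bypass_gpu=True))
```

In this toy setup the interleaved GPU stream evicts every CPU line before its reuse, so the LLC records no hits at all; with bypass the CPU working set fits and all accesses after the first pass hit.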

  3. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory (SPM) allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP), which avoids a time-consuming linearization process, is used to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages that cause severe cache conflicts within a time slot to the SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach obtains a 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.

  4. Will video caching remain energy efficient in future core optical networks?

    Directory of Open Access Journals (Sweden)

    Niemah Izzeldin Osman

    2017-02-01

    Full Text Available Optical networks are expected to cater for the future Internet due to the high speed and capacity that they offer. Caching in the core network has proven to reduce power usage for various video services in current optical networks. This paper investigates whether video caching will remain power efficient in future optical networks. The study compares the power consumption of caching in a current IP over WDM core network to that in a future network. The study considers a number of features to exemplify future networks. Future optical networks are considered where: (1) network devices consume less power, (2) network devices have sleep-mode capabilities, (3) IP over WDM implements lightpath bypass, and (4) the demand for video content significantly increases and high definition video dominates. Results show that video caching in future optical networks saves up to 42% of power consumption even when the power consumption of transport decreases. These results suggest that video caching is expected to remain a green option for video services in the future Internet.

  5. Distribution network design under demand uncertainty using genetic algorithm and Monte Carlo simulation approach: a case study in pharmaceutical industry

    Science.gov (United States)

    Izadi, Arman; Kimiagari, Ali Mohammad

    2014-05-01

    Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, with a 14% reduction in total supply chain costs as the outcome. Moreover, it imposes the least cost variation created by fluctuation in customer demands (such as epidemic disease outbreaks in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.
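The scenario-based evaluation at the heart of this approach can be sketched as follows (all locations, demand distributions and costs are illustrative assumptions, not the case-study data): Monte Carlo demand scenarios are sampled, each candidate network structure is costed per scenario with nearest-DC allocation, and the coefficient of variation of the costs serves as the risk measure.

```python
import random
import statistics

random.seed(1)

CUSTOMERS = [(0, 0), (10, 0), (0, 10), (10, 10)]            # customer locations
CANDIDATES = {"central": [(5, 5)], "corners": [(0, 0), (10, 10)]}
FIXED_COST = {"central": 100.0, "corners": 180.0}           # cost of opening DCs

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def scenario_cost(dcs, fixed, demands):
    # Each customer is served by its nearest open DC; transport = demand * distance.
    transport = sum(d * min(dist(c, dc) for dc in dcs)
                    for c, d in zip(CUSTOMERS, demands))
    return fixed + transport

def evaluate(name, n_scenarios=500):
    """Mean cost and coefficient of variation over Monte Carlo demand scenarios."""
    costs = [scenario_cost(CANDIDATES[name], FIXED_COST[name],
                           [max(0.0, random.gauss(10, 3)) for _ in CUSTOMERS])
             for _ in range(n_scenarios)]
    mean = statistics.mean(costs)
    cv = statistics.stdev(costs) / mean     # risk measure from the abstract
    return mean, cv
```

A genetic algorithm would then search over candidate DC sets, using a fitness built from the mean cost and this coefficient of variation; the sketch only shows the inner scenario evaluation.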

  6. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms.

    Directory of Open Access Journals (Sweden)

    Jonas Kalderstam

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of risk grouping is recursive partitioning (Rpart), which constructs a decision tree where each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set, where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths from both prognostic indices and Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, the ability to utilize non-linear effects among the covariates, which Rpart is also able to do.

  7. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write though, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation blind first level cache.
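The evict-on-write management described in this patent abstract can be sketched with a two-level toy model (plain Python dictionaries standing in for cache hardware; the single-version L2 below is a simplification of the multi-version speculative L2 the abstract describes):

```python
class L2Cache:
    """Backing level holding the authoritative data (the real design keeps
    multiple speculative versions here; one version suffices for the sketch)."""
    def __init__(self):
        self.mem = {}
        self.reads = 0
    def read(self, addr):
        self.reads += 1
        return self.mem.get(addr, 0)
    def write(self, addr, val):
        self.mem[addr] = val

class L1Cache:
    """Speculation-blind first level: writes go through to L2 and the local
    copy is evicted, so later reads must fetch the line from L2 again."""
    def __init__(self, l2):
        self.lines = {}
        self.l2 = l2
    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.l2.read(addr)   # miss: fetch from L2
        return self.lines[addr]
    def write(self, addr, val):
        self.l2.write(addr, val)      # write through to the second level
        self.lines.pop(addr, None)    # evict on write: drop the L1 copy

l2 = L2Cache()
l1 = L1Cache(l2)
l1.read(0)        # miss, L2 read, line cached in L1
l1.read(0)        # hit, no L2 traffic
l1.write(0, 7)    # written through to L2, line evicted from L1
l1.read(0)        # forced back to L2, which returns the current version
```

The point of the eviction is visible in the access pattern: after the write, the next read cannot be satisfied from the (version-blind) L1 and must go to the L2, which is the level that tracks speculative versions.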

  8. A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2017-01-01

    We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the optimization-based approach known from the literature, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm...... converges locally and exhibits a linear rate of convergence....

  9. A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric

    OpenAIRE

    Zimmermann, Ralf

    2016-01-01

    We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the optimization-based approach known from the literature, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm converges locally and exhibits a linear rate of convergence.
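For context (this is the classical geodesic formula of Edelman, Arias and Smith that the logarithm must invert, not the paper's new algorithm): for $X \in \mathrm{St}(n,p)$, a tangent vector splits as $\Delta = XA + QB$ with $A = X^{\top}\Delta$ skew-symmetric and $QB$ a thin QR decomposition of the normal component, and the canonical-metric exponential is

```latex
\mathrm{Exp}_X(\Delta)
  = \begin{pmatrix} X & Q \end{pmatrix}
    \exp\!\begin{pmatrix} A & -B^{\top} \\ B & 0 \end{pmatrix}
    \begin{pmatrix} I_p \\ 0 \end{pmatrix},
\qquad
A = X^{\top}\Delta,\quad
QB \overset{\text{QR}}{=} (I - XX^{\top})\,\Delta .
```

The Riemannian logarithm takes $X$ and a nearby endpoint $Y$ and recovers the matrices $(A, B)$; working on this $2p \times 2p$ block structure is what makes a purely matrix-algebraic, iteration-based algorithm possible, in contrast to the optimization-based approach mentioned in the abstract.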

  10. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real‐time systems need a time‐predictable computing platform to enable static worst‐case execution time (WCET) analysis. All performance‐enhancing features need to be WCET analyzable. However, standard data caches containing heap‐allocated data are very hard to analyze statically...... result in a WCET analysis‐friendly design. Aiming for a time‐predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  11. Pixels grouping and shadow cache for faster integral 3D ray tracing

    Science.gov (United States)

    Youssef, Osama; Aggoun, Amar; Wolf, Wayne H.; McCormick, Malcolm

    2002-05-01

    This paper presents, for the first time, a theory for obtaining the optimum pixel grouping for improving the coherence and the shadow cache in integral 3D ray tracing, in order to reduce execution time. A theoretical study of the number of shadow cache hits with respect to the properties of the lenses and the shadow size and its location is discussed, with analysis of three different styles of pixel grouping in order to obtain the optimum grouping. The first style traces rows of pixels in the horizontal direction, the second traces similar pixels in adjacent lenses in the horizontal direction, and the third traces columns of pixels in the vertical direction. The optimum grouping is a combination of all three, dependent upon the number of cache hits in each. Experimental results validate the theory, and tests on benchmark scenes show that up to a 37% improvement in execution time can be achieved by proper pixel grouping.

  12. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    CERN Document Server

    Yang, W; The ATLAS collaboration; Mount, R

    2014-01-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses a long period of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.

  13. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    CERN Document Server

    Yang, W; The ATLAS collaboration; Mount, R

    2013-01-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses a long period of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.

  14. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array......, and range queries in worst case O(logB n + k/B) memory transfers, where k is the size of the output.The basic idea of our data structure is to maintain a dynamic binary tree of height log n+O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache...... oblivious fashion, using the van Emde Boas layout of Prokop.We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory....
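The van Emde Boas layout mentioned in the abstract can be sketched as a recursive numbering of a complete binary tree in heap indexing (a minimal illustration, not the paper's implementation): the top half-height subtree is laid out first, then each bottom subtree in turn, so the nodes of any root-to-leaf path cluster into few contiguous blocks.

```python
def veb_order(root=1, height=4):
    """List the nodes of a complete binary tree (heap-indexed, root = 1) in
    van Emde Boas order: top subtree of half the height first, then each
    bottom subtree, recursively."""
    if height == 1:
        return [root]
    top_h = height // 2
    bot_h = height - top_h
    order = veb_order(root, top_h)
    # Leaves of the top subtree sit at relative depth top_h - 1 below root.
    top_leaves = [root * (1 << (top_h - 1)) + i for i in range(1 << (top_h - 1))]
    for leaf in top_leaves:
        for child in (2 * leaf, 2 * leaf + 1):   # roots of the bottom subtrees
            order += veb_order(child, bot_h)
    return order

layout = veb_order(1, 4)                 # a 15-node tree of height 4
position = {node: i for i, node in enumerate(layout)}
```

For the 15-node tree this yields [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]: each recursive block occupies a contiguous array range, which is what bounds a root-to-leaf search to O(log_B n) memory transfers regardless of the block size B.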

  15. A minimum scale architecture for rover-based sample acquisition and caching

    Science.gov (United States)

    Backes, Paul; Younse, Paulo; Ganino, Anthony

    The Minimum Scale Sample Acquisition and Caching (MinSAC) architecture has been developed to enable rover-based sample acquisition and caching while minimizing the system mass. The MinSAC architecture is a version of the previously developed Integrated Mars Sample Acquisition and Handling (IMSAH) architecture. The MinSAC implementation utilizes the sampling manipulator both for sampling and sample tube transfer. This significantly reduces the number of actuators in the sample acquisition and caching subsystem. A core sample is acquired directly into its sample tube in the coring bit. The bit is transferred and released on the rover. A tube gripper on the robotic arm turret pulls the filled sample tube out of the back of the coring bit and the tube is sealed. The sample tube is then placed in the return sample canister. A new tube is placed in the bit for acquisition of another sample.

  16. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3 and 4 cache sites after retention intervals of 1, 10 and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items across retention intervals of up to one minute without training.

  17. A Network-Aware Distributed Storage Cache for Data Intensive Environments

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, B.L.; Lee, J.R.; Johnston, W.E.; Crowley, B.; Holding, M.

    1999-12-23

    Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data at multiple sites around the world. The technologies, the middleware services, and the architectures that are used to build useful high-speed, wide area distributed systems, constitute the field of data intensive computing. In this paper the authors describe an architecture for data intensive applications where they use a high-speed distributed data cache as a common element for all of the sources and sinks of data. This cache-based approach provides standard interfaces to a large, application-oriented, distributed, on-line, transient storage system. They describe their implementation of this cache, how they have made it network aware, and how they do dynamic load balancing based on the current network conditions. They also show large increases in application throughput by access to knowledge of the network conditions.

  18. Fixed priority scheduling with pre-emption thresholds and cache-related pre-emption delays: integrated analysis and evaluation

    NARCIS (Netherlands)

    Bril, R.J.; Altmeyer, S.; van den Heuvel, M.M.H.P.; Davis, R.I.; Behnam, M.

    Commercial off-the-shelf programmable platforms for real-time systems typically contain a cache to bridge the gap between the processor speed and main memory speed. Because cache-related pre-emption delays (CRPD) can have a significant influence on the computation times of tasks, CRPD have been

  19. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  20. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  1. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  2. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this possibility, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also generated. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance was found to be worse than that of both the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server disk speed.

  3. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  4. Hydrologic data for the Cache Creek-Bear Thrust environmental impact statement near Jackson, Wyoming

    Science.gov (United States)

    Craig, G.S.; Ringen, B.H.; Cox, E.R.

    1981-01-01

    Information on the quantity and quality of surface and ground water in an area of concern for the Cache Creek-Bear Thrust Environmental Impact Statement in northwestern Wyoming is presented without interpretation. The environmental impact statement is being prepared jointly by the U.S. Geological Survey and the U.S. Forest Service and concerns proposed exploration and development of oil and gas on leased Federal land near Jackson, Wyoming. Information includes data from a gaging station on Cache Creek and from wells, springs, and miscellaneous sites on streams. Data include streamflow, chemical and suspended-sediment quality of streams, and the occurrence and chemical quality of ground water. (USGS)

  5. Education for sustainability and environmental education in National Geoparks. EarthCaching - a new method?

    Science.gov (United States)

    Zecha, Stefanie; Regelous, Anette

    2017-04-01

    National Geoparks are restricted areas incorporating educational resources of great importance in promoting education for sustainable development, mobilizing knowledge inherent to the Earth sciences. Different methods can be used to implement education for sustainability. Here we present possibilities for National Geoparks to support sustainability, focusing on new media and EarthCaches, based on the data set of the "EarthCachers International EarthCaching" conference in Goslar in October 2015. Using an empirical study of our own design, we collected current information about the environmental consciousness of EarthCachers. The data set was analyzed using SPSS and statistical methods. Here we present the results and their consequences for National Geoparks.

  6. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration under complex terrain

    Science.gov (United States)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.

    2010-07-01

Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET at limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover, in concert with some time-varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was used to produce robust estimates of 24-hour solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated against measured data at the ground level; the consistency index reached 0.92 and the correlation coefficient was 0.87.
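The core bookkeeping behind any surface-energy-balance ET model of this family can be sketched as follows. This is a generic residual calculation with invented flux values, not SEBTA's actual code: the latent heat flux is what remains of net radiation after soil and sensible heat are subtracted.

```python
# Generic surface-energy-balance residual for ET estimation (illustrative
# sketch, not SEBTA itself; the flux values below are invented).

LAMBDA_V = 2.45e6  # latent heat of vaporization of water, J/kg (approx.)

def latent_heat_flux(rn, g, h):
    """Latent heat flux LE = Rn - G - H, all fluxes in W/m^2."""
    return rn - g - h

def et_rate_mm_per_hour(le):
    """Convert latent heat flux (W/m^2) to an ET rate in mm/h:
    dividing by lambda gives kg/(s*m^2), i.e. mm of water per second."""
    return le / LAMBDA_V * 3600.0

le = latent_heat_flux(rn=500.0, g=50.0, h=150.0)
print(le, round(et_rate_mm_per_hour(le), 3))  # 300.0 0.441
```

Remote-sensing models in this family differ mainly in how Rn, G and H are estimated per pixel; the residual step itself is common to all of them.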

  7. Topology Control Algorithms for Spacecraft Formation Flying Networks Under Connectivity and Time-Delay Constraints, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI is proposing to develop, test and deliver a set of topology control algorithms and software for a formation flying spacecraft that can be used to design and...

  8. Topology Control Algorithms for Spacecraft Formation Flying Networks Under Connectivity and Time-Delay Constraints, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI is proposing to develop a set of topology control algorithms for a formation flying spacecraft that can be used to design and evaluate candidate formation...

  9. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Directory of Open Access Journals (Sweden)

    Chenlong He

Full Text Available In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by a hybrid metric-topological distance, so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over existing ones. First, the range-limited Delaunay graph is sparser than the disk graph, so the information exchange among agents is reduced significantly. Second, some links irrelevant to connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, in which links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing a constraint on the ratio of the agent's sensing range to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided that the initial interaction topology of the multi-agent system is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, in which flocking algorithms based on the disk and Delaunay graphs are compared.
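The neighbor rule described above can be illustrated with a small sketch of our own (using a brute-force empty-circumcircle test rather than the paper's construction): build the Delaunay triangulation, then keep only edges no longer than the sensing range r.

```python
# Range-limited Delaunay graph sketch (ours, not the paper's code):
# Delaunay edges via a brute-force empty-circumcircle test, then drop
# edges longer than the sensing range r.
import itertools, math

def circumcenter(p, q, r):
    """Circumcenter of triangle pqr, or None if the points are collinear."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def range_limited_delaunay(points, r):
    """Edges (i, j) of the Delaunay triangulation whose length is at
    most the sensing range r (points assumed in general position)."""
    edges = set()
    n = len(points)
    for i, j, k in itertools.combinations(range(n), 3):
        c = circumcenter(points[i], points[j], points[k])
        if c is None:
            continue
        rad = math.dist(c, points[i])
        # triangle is Delaunay iff its circumcircle contains no other point
        if all(math.dist(c, points[m]) >= rad - 1e-9
               for m in range(n) if m not in (i, j, k)):
            for a, b in ((i, j), (i, k), (j, k)):
                if math.dist(points[a], points[b]) <= r:
                    edges.add((a, b))
    return edges

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(sorted(range_limited_delaunay(pts, r=2.0)))  # the distant agent is isolated
```

With a large r this reduces to the plain Delaunay graph; with a dense swarm it is much sparser than the disk graph of radius r, which is the point the abstract makes.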

  10. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Science.gov (United States)

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-01-01

In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by a hybrid metric-topological distance, so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over existing ones. First, the range-limited Delaunay graph is sparser than the disk graph, so the information exchange among agents is reduced significantly. Second, some links irrelevant to connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, in which links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing a constraint on the ratio of the agent's sensing range to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided that the initial interaction topology of the multi-agent system is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, in which flocking algorithms based on the disk and Delaunay graphs are compared.

  11. The Social Relationship Based Adaptive Multi-Spray-and-Wait Routing Algorithm for Disruption Tolerant Network

    Directory of Open Access Journals (Sweden)

    Jianfeng Guan

    2017-01-01

Full Text Available The existing spray-based routing algorithms in DTNs cannot dynamically adjust the number of message copies based on actual conditions, which results in a waste of resources and a reduction of the message delivery rate. Besides, the existing spray-based routing protocols may result in blind-spot or dead-end problems due to the limitations of various given metrics. Therefore, this paper proposes a social relationship based adaptive multiple spray-and-wait routing algorithm (called SRAMSW) which retransmits message copies based on their residence times in the node via buffer management and selects forwarders based on the social relationship. By these means, the proposed algorithm can relieve message congestion in the buffer and improve the probability that replicas reach their destinations. The simulation results under different scenarios show that the SRAMSW algorithm can improve the message delivery rate, reduce messages' dwell time in the cache, and use the buffer more effectively.
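As background for the adaptive scheme above, the copy bookkeeping of classical binary spray-and-wait, the base scheme SRAMSW adapts, can be sketched as follows (the function name and framing are ours):

```python
# Binary spray-and-wait copy bookkeeping (the classical base scheme,
# not SRAMSW itself): each relay contact halves the remaining copies.

def spray(copies_held):
    """On meeting a relay, hand over half of the remaining copies.
    Returns (kept, given); with a single copy the node is in the
    'wait' phase and delivers only on direct contact with the destination."""
    if copies_held <= 1:
        return copies_held, 0
    given = copies_held // 2
    return copies_held - given, given

print(spray(8))  # (4, 4)
print(spray(1))  # (1, 0) -> wait phase
```

SRAMSW's contribution, per the abstract, is to make the copy count adaptive (driven by residence time in the buffer) and to pick relays by social relationship rather than at random.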

  12. Automatic Frequency Identification under Sample Loss in Sinusoidal Pulse Width Modulation Signals Using an Iterative Autocorrelation Algorithm

    Directory of Open Access Journals (Sweden)

    Alejandro Said

    2016-08-01

Full Text Available In this work, we present a simple algorithm to automatically calculate the Fourier spectrum of a Sinusoidal Pulse Width Modulation (SPWM) signal. Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm is capable of estimating half of the period of the fundamental frequency, thus allowing one to estimate the number of samples needed to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. The simulation shows that the algorithm is affected by neither the magnitude of the switching pulses nor the acquisition noise.
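The autocorrelation idea can be illustrated with a minimal sketch (our own simplified version, not the paper's algorithm): for a periodic signal, the strongest non-zero-lag peak of the autocorrelation falls at one fundamental period, from which the samples-per-period count follows.

```python
# Autocorrelation-based period estimation (a minimal sketch of the idea,
# not the paper's implementation; fs and the 50 Hz test signal are ours).
import numpy as np

def estimate_period(x):
    """Estimate the period (in samples) of a periodic signal as the
    strongest local maximum of its autocorrelation at non-zero lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    candidates = [k for k in range(1, len(ac) - 1)
                  if ac[k] >= ac[k - 1] and ac[k] >= ac[k + 1]]
    return max(candidates, key=lambda k: ac[k]) if candidates else None

fs = 1000.0                                  # sampling rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
x = np.sign(np.sin(2 * np.pi * 50.0 * t))    # 50 Hz square wave, PWM-like
period = estimate_period(x)
print(period, fs / period)                   # ~20 samples -> ~50 Hz
```

Knowing the period in samples is exactly what lets one trim the acquired sequence to a whole number of fundamental cycles before running the FFT, which avoids spectral leakage.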

  13. Copyright aspects of caching: DIPPER (Digital Intellectual Property Practice Economic Report): legal report

    NARCIS (Netherlands)

    Hugenholtz, P.B.

    2000-01-01

A study of the copyright aspects of (proxy and client) caching, commissioned by the European Commission under the Esprit programme. The emphasis is on current and future European and American law. Contains a section on the liability of Internet (access)

  14. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

An emerging trend in next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
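For orientation, the NDT metric mentioned above commonly takes the following form in the cache-aided network literature; the notation here is ours, and the paper's exact definition may differ in detail.

```latex
% T(SNR) is the worst-case time to deliver a requested file of L bits;
% L / log(SNR) is the delivery time of a reference interference-free
% point-to-point link whose capacity scales as log(SNR).
\delta(\mu) \;=\; \lim_{\mathrm{SNR}\to\infty}\;\limsup_{L\to\infty}\;
  \frac{\max_{\text{demands}} T(\mathrm{SNR})}{L/\log \mathrm{SNR}}
```

An NDT of 1 thus means delivery is as fast as the interference-free baseline; larger values quantify the per-bit latency penalty as a function of the cache size $\mu$.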

  15. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and herewith a decrease of the performance. To enhance the performance often caching mechanisms are adopted. However, the implementations of these

  16. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

Full Text Available Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and from industrial practitioners. In fact, the very notion of introducing randomness and probabilities into time-critical systems has caused strenuous debates, owing to the apparent clash between this idea and the strictly deterministic view traditionally held for those systems. A paper recently published in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should also be provided. This short paper addresses that need by revisiting the array of issues addressed in the cited work in light of the latest advances in the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, making them, when used in conjunction with PTA, a serious competitor to conventional designs.

  17. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  18. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

Full Text Available Embedded systems have stringent design constraints, which has focused much prior research on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system's temperature not only affects the system's reliability, but can also affect its performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto-optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.
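The Pareto-optimality criterion that this kind of tuning relies on can be sketched generically; the configuration names and objective values below are invented for illustration and are not from the paper.

```python
# Pareto-optimal filtering over (execution time, energy, temperature)
# tuples, the kind of tradeoff set a tuner like TaPT searches.
# All configurations and numbers below are hypothetical.

def pareto_front(configs):
    """Keep configurations not dominated in all three objectives
    (lower is better for each)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return {name: obj for name, obj in configs.items()
            if not any(dominates(other, obj) for other in configs.values())}

configs = {                       # (time ms, energy mJ, peak temp C)
    "2KB_direct_1GHz": (12.0, 30.0, 62.0),
    "4KB_2way_800MHz": (10.0, 25.0, 58.0),   # dominates the first entry
    "8KB_4way_500MHz": (15.0, 20.0, 52.0),   # cooler but slower
}
print(sorted(pareto_front(configs)))
```

A designer-specified temperature constraint then simply filters the front before a priority order picks the operating point.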

  19. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Performance Evaluation of Moving Small-Cell Network with Proactive Cache

    Directory of Open Access Journals (Sweden)

    Young Min Kwon

    2016-01-01

Full Text Available Due to rapid growth in mobile traffic, mobile network operators (MNOs) are considering the deployment of moving small-cells (mSCs). An mSC is a user-centric network which provides voice and data services during mobility. mSCs can receive and forward data traffic via wireless backhaul and sidehaul links. In addition, due to the predictive nature of user demand, mSCs can proactively cache the predicted contents in off-peak-traffic periods. Due to these characteristics, MNOs consider mSCs a cost-efficient solution to not only enhance the system capacity but also provide guaranteed quality of service (QoS) to moving user equipment (UE) in peak-traffic periods. In this paper, we conduct extensive system-level simulations to analyze the performance of mSCs with varying cache size and content popularity and their effect on wireless backhaul load. The performance evaluation confirms that the QoS of moving small-cell UE (mSUE) improves notably when mSCs are used together with proactive caching. We also show that the effective use of a proactive cache significantly reduces the wireless backhaul load and increases the overall network capacity.
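The cache-size/popularity interplay studied above can be sketched with a back-of-the-envelope model of our own: under Zipf-distributed requests, proactively caching the most popular contents yields a hit ratio equal to the sum of their request probabilities. The catalogue size and Zipf exponent below are illustrative only.

```python
# Hit ratio of a proactive cache holding the C most popular of N contents
# under a Zipf request distribution (a toy model, not the paper's simulator).

def zipf_hit_ratio(n_contents, cache_size, alpha):
    """Fraction of requests served from a cache of the `cache_size`
    most popular contents, with popularity rank r weighted 1/r^alpha."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_contents + 1)]
    return sum(weights[:cache_size]) / sum(weights)

for c in (10, 50, 100):          # sweep cache size, N=1000, alpha=0.8
    print(c, round(zipf_hit_ratio(1000, c, alpha=0.8), 3))
```

Every cache hit is traffic the wireless backhaul never carries, which is why the hit ratio translates directly into the backhaul-load reduction the abstract reports.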

  1. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    textabstractJackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  2. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  3. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-01-01

An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations were performed by means of the PENELOPE code. Four different field sizes (10×10, 5×5, 2×2, and 1×1 cm²) and two lung equivalent materials (CIRS, relative electron density ρ_e^w = 0.195, and St. Bartholomew's Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that agreed with the measured values to within 2% on average in all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2×2 cm², 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom, every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences of up to 24% were found for 2×2 cm², 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal differences (0

  4. Quantifying animal movement for caching foragers: the path identification index (PII) and cougars, Puma concolor

    Science.gov (United States)

    Ironside, Kirsten E.; Mattson, David J.; Theimer, Tad; Jansen, Brian; Holton, Brandon; Arundel, Terry; Peters, Michael; Sexton, Joseph O.; Edwards, Thomas C.

    2017-01-01

Relocation studies of animal movement have focused on directed versus area-restricted movement, which rely on correlations between step length and turn angle, along with a degree of stationarity through time, to define behavioral states. Although these approaches may work well for grazing foraging strategies in a patchy landscape, species that do not spend a significant amount of time searching out and gathering small dispersed food items, but instead feed for short periods on large, concentrated sources or cache food, produce movements that may be difficult to analyze using turning and velocity alone. We use GPS telemetry collected from a prey-caching predator, the cougar (Puma concolor), to test whether adding movement metrics that capture site recursion to the more traditional velocity and turning improves the ability to identify behaviors. We evaluated our movement index's ability to identify behaviors using field investigations. We further tested for statistical stationarity across behaviors in the use of topographic view-sheds. We found little correlation between turn angle, velocity, tortuosity, and site fidelity and combined them into a movement index used to identify movement paths (temporally autocorrelated movements) related to fast directed movements (taxis), area-restricted movements (search), and prey caching (foraging). Changes in the frequency and duration of these movements were helpful for identifying seasonal activities such as migration and denning in females. We compared field investigations of cougar activities to behavioral classes defined using the movement index and found an overall classification accuracy of 81%. Changes in behaviors resulted in changes in how cougars used topographic view-sheds, showing statistical non-stationarity over time. The movement index shows promise for identifying behaviors in species that frequently return to specific locations such as food caches, watering holes, or dens, and highlights the role

  5. Servidor proxy caché: comprensión y asimilación tecnológica

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

Full Text Available Internet service providers usually invoke the concept of Internet accelerators to reduce the average time a browser takes to obtain the requested files. It is difficult for system administrators to choose the configuration of a caching proxy server, since it is necessary to decide the values to be used for several variables. This article presents how the process of understanding and technological assimilation of the caching proxy service, a service with high organizational impact, was carried out. The article is also a product of the research project "Análisis de configuraciones de servidores proxy caché" (Analysis of caching proxy server configurations), in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  6. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  7. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  8. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  9. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high-leakage and low-density, researchers have explored use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from SPEC CPU2006 suite and HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
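The periodic intra-set relocation idea can be sketched as a toy model. This is our own simplification, not the paper's implementation: a set holds one logical block per way, and every few writes the most write-intensive block is moved to the least-worn way.

```python
# Toy model of intra-set wear-leveling in the spirit of EqualChance
# (our simplification, not the paper's mechanism): periodically relocate
# the hottest block to the least-worn physical way of the set.

class WearLeveledSet:
    def __init__(self, num_ways, period=8):
        self.way_of = list(range(num_ways))  # logical block -> physical way
        self.way_writes = [0] * num_ways     # wear accumulated per way
        self.block_writes = [0] * num_ways   # write intensity per block
        self.period = period                 # relocation interval, in writes
        self.count = 0

    def write(self, block):
        self.way_writes[self.way_of[block]] += 1
        self.block_writes[block] += 1
        self.count += 1
        if self.count % self.period == 0:
            self._relevel()

    def _relevel(self):
        hot = max(range(len(self.block_writes)),
                  key=self.block_writes.__getitem__)
        cold_way = min(range(len(self.way_writes)),
                       key=self.way_writes.__getitem__)
        other = self.way_of.index(cold_way)  # block sitting in the cold way
        self.way_of[hot], self.way_of[other] = cold_way, self.way_of[hot]

s = WearLeveledSet(num_ways=4, period=8)
for _ in range(1000):       # heavily skewed stream: every write hits block 0
    s.write(0)
print(max(s.way_writes) - min(s.way_writes))  # small spread; 1000 without leveling
```

Even under a worst-case single-hot-block stream, the per-way wear spread stays bounded by roughly one relocation period, which is the wear-leveling effect that extends NVM cache lifetime.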

  10. Ecosystem services from keystone species: diversionary seeding and seed-caching desert rodents can enhance Indian ricegrass seedling establishment

    Science.gov (United States)

    Longland, William; Ostoja, Steven M.

    2013-01-01

    Seeds of Indian ricegrass (Achnatherum hymenoides), a native bunchgrass common to sandy soils on arid western rangelands, are naturally dispersed by seed-caching rodent species, particularly Dipodomys spp. (kangaroo rats). These animals cache large quantities of seeds when mature seeds are available on or beneath plants and recover most of their caches for consumption during the remainder of the year. Unrecovered seeds in caches account for the vast majority of Indian ricegrass seedling recruitment. We applied three different densities of white millet (Panicum miliaceum) seeds as “diversionary foods” to plots at three Great Basin study sites in an attempt to reduce rodents' over-winter cache recovery so that more Indian ricegrass seeds would remain in soil seedbanks and potentially establish new seedlings. One year after diversionary seed application, a moderate level of Indian ricegrass seedling recruitment occurred at two of our study sites in western Nevada, although there was no recruitment at the third site in eastern California. At both Nevada sites, the number of Indian ricegrass seedlings sampled along transects was significantly greater on all plots treated with diversionary seeds than on non-seeded control plots. However, the density of diversionary seeds applied to plots had a marginally non-significant effect on seedling recruitment, and it was not correlated with recruitment patterns among plots. Results suggest that application of a diversionary seed type that is preferred by seed-caching rodents provides a promising passive restoration strategy for target plant species that are dispersed by these rodents.

  11. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them onto multiple servers, and to cache them as close as possible to their readers while preserving the security requirements of the files, providing load balancing, and reducing the delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.
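The fragment-and-replicate step can be sketched generically; this is a toy version with hypothetical server names, not the paper's mechanism, but it shows why no single low-security server ends up holding the whole file.

```python
# Toy fragment-and-replicate allocation (ours, not the paper's scheme):
# split a file into m fragments and place k replicas of each fragment on
# distinct servers. Server names below are hypothetical.
import itertools

def fragment(data, m):
    """Split `data` into m nearly equal-sized fragments."""
    q, r = divmod(len(data), m)
    out, pos = [], 0
    for i in range(m):
        size = q + (1 if i < r else 0)
        out.append(data[pos:pos + size])
        pos += size
    return out

def allocate(fragments, servers, k):
    """Round-robin k replicas of each fragment onto distinct servers
    (requires k <= len(servers))."""
    rotation = itertools.cycle(servers)
    placement = {}
    for i in range(len(fragments)):
        chosen = set()
        while len(chosen) < k:
            chosen.add(next(rotation))
        placement[i] = sorted(chosen)
    return placement

frags = fragment(b"confidential-payload", m=4)
print([len(f) for f in frags])  # [5, 5, 5, 5]
print(allocate(frags, servers=["s1", "s2", "s3", "s4", "s5"], k=2))
```

A real scheme would also weight placement by server security level and reader proximity, which is where the caching and threat-level tuning of the paper come in.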

  12. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents, such as databases, for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility users require, and that a central cache of the data is needed to improve performance.

  13. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or when it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
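The two proxy behaviors described above reduce to a simple model (ours, not the AAA code): whole-file prefetch on open versus on-demand fetching of fixed-size blocks. The class, block size, and file contents below are invented for illustration.

```python
# Toy caching proxy contrasting whole-file prefetch with on-demand
# block fetching (a sketch of the two strategies, not XRootd code).

BLOCK = 4  # block size in bytes, tiny for illustration

class CachingProxy:
    def __init__(self, origin, whole_file=False):
        self.origin = origin          # filename -> bytes (the remote storage)
        self.whole_file = whole_file
        self.cache = {}               # (filename, block_no) -> bytes
        self.origin_reads = 0         # how many blocks we pulled remotely

    def _fetch(self, name, blk):
        if (name, blk) not in self.cache:
            self.origin_reads += 1
            data = self.origin[name]
            self.cache[(name, blk)] = data[blk * BLOCK:(blk + 1) * BLOCK]

    def open(self, name):
        if self.whole_file:           # strategy 1: prefetch everything on open
            for blk in range((len(self.origin[name]) + BLOCK - 1) // BLOCK):
                self._fetch(name, blk)

    def read(self, name, offset, length):
        end = offset + length
        out = b""
        for blk in range(offset // BLOCK, (end + BLOCK - 1) // BLOCK):
            self._fetch(name, blk)    # strategy 2: fill the cache on demand
            out += self.cache[(name, blk)]
        return out[offset % BLOCK:][:length]

origin = {"f": b"abcdefghijklmnop"}   # 16 bytes = 4 blocks
p = CachingProxy(origin)              # on-demand mode
p.open("f")
print(p.read("f", 5, 6), p.origin_reads)  # b'fghijk' 2
```

The tradeoff the abstract describes falls out directly: whole-file mode pays all origin reads up front (good for sequential or fully random access to the same file), while on-demand mode only pulls the blocks actually touched.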

  14. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  15. Orbitofrontal cortex supports behavior and learning using inferred but not cached values.

    Science.gov (United States)

    Jones, Joshua L; Esber, Guillem R; McDannald, Michael A; Gruber, Aaron J; Hernandez, Alex; Mirenzi, Aaron; Schoenbaum, Geoffrey

    2012-11-16

    Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling "economic" value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.

  16. Using High-Speed WANs and Network Data Caches to Enable Remote and Distributed Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Wes; Lau, Stephen; Tierney, Brian; Lee, Jason; Gunter, Dan

    2000-04-18

    Visapult is a prototype application and framework for remote visualization of large scientific datasets. We approach the technical challenges of tera-scale visualization with a unique architecture that employs high speed WANs and network data caches for data staging and transmission. This architecture allows for the use of available cache and compute resources at arbitrary locations on the network. High data throughput rates and network utilization are achieved by parallelizing I/O at each stage in the application, and by pipe-lining the visualization process. On the desktop, the graphics interactivity is effectively decoupled from the latency inherent in network applications. We present a detailed performance analysis of the application, and improvements resulting from field-test analysis conducted as part of the DOE Combustion Corridor project.

  17. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

Full Text Available Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of the causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2, and NOX and CO2 emission intensity are selected as the target parameters of the environment subsystem. Fixed assets investment of three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm including fuzzy simulation and a genetic algorithm is proposed for solving it. The results of empirical analysis on Shandong province of China show that the FCCGP model can reflect the inherent relationship and evolution law of the 3Es system and provide effective decision-making support for 3Es system optimization.

  18. Development of a signal-analysis algorithm for the ZEUS transition-radiation detector under application of a neural network

    International Nuclear Information System (INIS)

    Wollschlaeger, U.

    1992-07-01

The aim of this thesis was to develop a procedure for analyzing the data of the transition-radiation detector at ZEUS. A neural network was applied, and it was first studied which electron-pion separation power can be reached by this procedure. It was shown that, within the error limits, neural nets yield results as good as standard algorithms (total charge, cluster analysis). At an electron efficiency of 90%, pion contaminations in the range of 1%-2% were reached. Furthermore, it was confirmed that neural networks can be considered robust and relatively insensitive to external perturbations for the present field of application. For use in the experiment, the timing behaviour is important in addition to the separation power. The requirement to keep dead times small did not allow the application of standard methods. The time available for the signal analysis was estimated by a simulation. To test the processing time of a neural network, the corresponding algorithm was then implemented in assembler code for the digital signal processor DSP56001. (orig./HSI) [de

  19. Application of the distributed genetic algorithm for in-core fuel optimization problems under parallel computational environment

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Hashimoto, Hiroshi

    2002-01-01

The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors. The basic concept of DGA follows that of the conventional genetic algorithm (GA). However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent ''islands'' and evolves them in each island. Communications between islands, i.e. migrations of some candidates between islands, are performed periodically. Since candidate solutions evolve independently in each island while accepting different genes from migrants, the premature convergence seen in the conventional GA can be prevented. Because many candidate loading patterns must be evaluated in GA or DGA, parallelization is an efficient way to reduce turnaround time. The parallel efficiency of DGA was measured using our optimization code, and good efficiency was attained even in a heterogeneous cluster environment thanks to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which a client (optimization) module and calculation server modules exchange loading-pattern objects. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the conventional GA were compared on the test problem, and DGA provided better optimization results than the conventional GA. (author)
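The island scheme described in the abstract can be sketched in a few lines. This is a single-process illustration under invented parameters (OneMax bit-string fitness, ring migration, truncation selection), not the authors' loading-pattern code:

```python
import random

def island_ga(fitness, n_bits=16, n_islands=4, pop_size=20,
              generations=60, migrate_every=10, seed=1):
    """Island-model GA sketch: each island evolves its own population
    independently; every `migrate_every` generations the best individual of
    each island replaces the worst individual of the next island on a ring."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]

    def evolve(pop):
        pop = sorted(pop, key=fitness, reverse=True)
        elite = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n_bits)] ^= 1  # single-bit mutation
            children.append(child)
        return elite + children

    for gen in range(1, generations + 1):
        islands = [evolve(pop) for pop in islands]
        if gen % migrate_every == 0:           # periodic ring migration
            best = [max(pop, key=fitness) for pop in islands]
            for k, pop in enumerate(islands):
                pop.sort(key=fitness)
                pop[0] = best[(k - 1) % n_islands][:]
    return max((ind for pop in islands for ind in pop), key=fitness)
```

Because each island keeps its own gene pool between migrations, diversity survives longer than in a single panmictic population, which is the premature-convergence argument made above.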

  20. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    Science.gov (United States)

    1979-02-01

classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which...additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water...project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  1. Fundamentals of Cluster-Centric Content Placement in Cache-Enabled Device-to-Device Networks

    OpenAIRE

    Afshang, Mehrnaz; Dhillon, Harpreet S.; Chong, Peter Han Joo

    2015-01-01

    This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we defin...

  2. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

3. The role of seed mass on the caching decision by agoutis, Dasyprocta leporina (Rodentia: Agoutidae)

    Directory of Open Access Journals (Sweden)

    Mauro Galetti

    2010-06-01

    Full Text Available It has been shown that the local extinction of large-bodied frugivores may cause cascading consequences for plant recruitment and overall plant diversity. However, to what extent the resilient mammals can compensate the role of seed dispersal in defaunated sites is poorly understood. Caviomorph rodents, especially Dasyprocta spp., are usually resilient frugivores in hunted forests and their seed caching behavior may be important for many plant species which lack primary dispersers. We compared the effect of the variation in seed mass of six vertebrate-dispersed plant species on the caching decision by the red-rumped agoutis Dasyprocta leporina Linnaeus, 1758 in a land-bridge island of the Atlantic forest, Brazil. We found a strong positive effect of seed mass on seed fate and dispersal distance, but there was a great variation between species. Agoutis never cached seeds smaller than 0.9 g and larger seeds were dispersed for longer distances. Therefore, agoutis can be important seed dispersers of large-seeded species in defaunated forests.

  4. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    International Nuclear Information System (INIS)

    Lonchampt, J.; Fessart, K.

    2013-01-01

The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first one is introduced by the spare part model: although components are indeed independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
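The NPV indicator described in the abstract, the sum of discounted differences between cash flows with and without an investment, can be sketched directly; the yearly figures in the example are invented for illustration and are not from the paper.

```python
def npv_of_investment(cash_flows_with, cash_flows_without, rate):
    """NPV of an investment as described above: the sum over years of the
    discounted difference between the cash flows of the situations with and
    without the investment (year 0 first, costs as negative amounts)."""
    return sum((w - wo) / (1 + rate) ** t
               for t, (w, wo) in enumerate(zip(cash_flows_with, cash_flows_without)))

# Hypothetical example: a replacement costing 100 at year 0 that avoids
# forced-outage costs of 60 in each of the next two years, at a 10% rate.
npv = npv_of_investment([-100, 0, 0], [0, -60, -60], rate=0.10)
```

A positive NPV means the avoided discounted costs outweigh the investment, which is the selection criterion the genetic algorithm then optimizes over whole portfolios.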

  5. Maximum Power Point tracking algorithm based on I-V characteristic of PV array under uniform and non-uniform conditions

    DEFF Research Database (Denmark)

    Kouchaki, Alireza; Iman-Eini, H.; Asaei, B.

    2012-01-01

    This paper presents a new algorithm based on characteristic equation of solar cells to determine the Maximum Power Point (MPP) of PV modules under partially shaded conditions (PSC). To achieve this goal, an analytic condition is introduced to determine uniform or non-uniform atmospheric conditions...... quickly. This paper also proposes an effective and quick response technique to find the MPP of PV array among Global Peak (GP) and local peaks when PSC occurs based on the analytic condition. It also can perform in a manner like conventional MPPT method when the insolation conditions are uniform. In order...
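A common way to handle the multiple peaks that partial shading creates is a coarse sweep followed by local hill-climbing. The generic sketch below is not the paper's analytic-condition method; `power_at` is a stand-in for measuring array power at a commanded operating voltage.

```python
def find_global_mpp(power_at, v_min, v_max, coarse_steps=50, fine_step=0.01):
    """Global-MPP search sketch for a partially shaded P-V curve: a coarse
    voltage sweep locates the region of the global peak among the local
    peaks, then a perturb-and-observe style hill-climb refines it."""
    step = (v_max - v_min) / coarse_steps
    # Coarse sweep: pick the sampled voltage with the highest measured power.
    v = max((v_min + i * step for i in range(coarse_steps + 1)), key=power_at)
    # Local refinement: climb while power keeps increasing.
    while v + fine_step <= v_max and power_at(v + fine_step) > power_at(v):
        v += fine_step
    while v - fine_step >= v_min and power_at(v - fine_step) > power_at(v):
        v -= fine_step
    return v
```

On a curve with a local peak and a higher global peak, the coarse sweep keeps the climb from getting trapped at the local one, which is the failure mode of plain perturb-and-observe under partial shading.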

  6. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, the two being brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'structure of arrays' is kept for the global data storage, providing flexibility and efficiency for the usual operations on kinematic fields (displacement, velocity and acceleration). By contrast, in the particular case of elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but with specific handling of the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache misses, weighing the gains obtained within the elementary operations against the potential overhead generated by the data structure switch. Obtained results are very
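The data-structure switch described above can be shown schematically. Python cannot demonstrate the cache effects themselves, so this sketch only illustrates the gather/compute/scatter pattern between a global structure-of-arrays and a temporary array-of-structures; the field names and the kernel are invented, not EUROPLEXUS code.

```python
# Global storage as a 'structure of arrays' (SoA): one array per kinematic field.
soa = {
    "disp": [0.1, 0.2, 0.3, 0.4],
    "vel":  [1.0, 2.0, 3.0, 4.0],
    "acc":  [0.5, 0.5, 0.5, 0.5],
}

def gather_block(soa, cell_ids):
    """Copy one cell group into a temporary 'array of structures' (AoS),
    so the elementary kernel works on contiguous per-cell records."""
    return [{f: soa[f][i] for f in soa} for i in cell_ids]

def scatter_block(soa, cell_ids, block):
    """Write the updated temporary AoS block back into the global SoA."""
    for rec, i in zip(block, cell_ids):
        for f in soa:
            soa[f][i] = rec[f]

def kernel(block, dt):
    """Invented elementary operation acting on the AoS block: a simple
    semi-implicit time-integration step per cell record."""
    for rec in block:
        rec["vel"] += dt * rec["acc"]
        rec["disp"] += dt * rec["vel"]

cell_group = [0, 2]                     # one cell group (a cache block)
block = gather_block(soa, cell_group)
kernel(block, dt=0.1)
scatter_block(soa, cell_group, block)
```

In the real C/Fortran-level setting, the point of the temporary AoS is that each cell's fields sit in the same cache line during the hot kernel, while the SoA layout keeps whole-field operations on kinematic arrays efficient.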

  7. EarthCache as a Tool to Promote Earth-Science in Public School Classrooms

    Science.gov (United States)

    Gochis, E. E.; Rose, W. I.; Klawiter, M.; Vye, E. C.; Engelmann, C. A.

    2011-12-01

Geoscientists often find it difficult to bridge the gap in communication between university research and what is learned in the public schools. Today's schools operate in a high-stakes environment that only allows instruction based on State and National Earth Science curriculum standards. These standards are often unknown to academics or are written in a style that obfuscates the transfer of emerging scientific research to students in the classroom. Earth Science teachers are in an ideal position to make this link because they have a background in science as well as a solid understanding of the required curriculum standards for their grade and the pedagogical expertise to pass on new information to their students. As part of the Michigan Teacher Excellence Program (MiTEP), teachers from Grand Rapids, Kalamazoo, and Jackson school districts participate in 2 week field courses with Michigan Tech University to learn from earth science experts about how the earth works. This course connects Earth Science Literacy Principles' Big Ideas and common student misconceptions with standards-based education. During the 2011 field course, we developed and began to implement a three-phase EarthCache model that will provide a geospatial interactive medium for teachers to translate the material they learn in the field to the students in their standards based classrooms. MiTEP participants use GPS and Google Earth to navigate to Michigan sites of geo-significance. At each location academic experts aid participants in making scientific observations about the locations' geologic features and in using "reading the rocks" methodology to interpret the area's geologic history. The participants are then expected to develop their own EarthCache site to be used as pedagogical tool bridging the gap between standards-based classroom learning, contemporary research and unique outdoor field experiences. The final phase supports teachers in integrating inquiry based, higher-level learning student

  8. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  9. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    Science.gov (United States)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete their daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which there exist several network providers offering different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service, which is needed for accomplishing the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to the special case of a well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that have the capability of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested in several test instances to demonstrate the applicability of the methodology.
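The structure of the capacity-acquisition problem can be felt through the classic first-fit-decreasing heuristic, shown here in a simplified one-dimensional, crisp-constraint form; the paper's actual variant is two-dimensional with fuzzy QoS constraints, so this is only a stand-in for the family of heuristics involved.

```python
def first_fit_decreasing(task_sizes, capacity):
    """First-fit-decreasing sketch: sort tasks by decreasing size and place
    each into the first leased capacity block with enough room, leasing a
    new block when none fits. Returns the block count and the assignment."""
    remaining = []                       # free capacity left in each leased block
    assignment = []                      # (task_size, block_index) pairs
    for size in sorted(task_sizes, reverse=True):
        for i, free in enumerate(remaining):
            if size <= free:             # first block that still fits
                remaining[i] -= size
                assignment.append((size, i))
                break
        else:                            # no existing block fits: lease a new one
            remaining.append(capacity - size)
            assignment.append((size, len(remaining) - 1))
    return len(remaining), assignment
```

Placing the large tasks first tends to leave small gaps that the small tasks can fill, which is why FFD is a standard baseline despite the problem's NP-hardness.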

  10. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  11. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

dCache is a high performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache in direct competition with commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an rfc1831 and rfc2203 compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache’s NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache NFS v4.1 implementations, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  12. Cliff swallows Petrochelidon pyrrhonota as bioindicators of environmental mercury, Cache Creek Watershed, California

    Science.gov (United States)

    Hothem, Roger L.; Trejo, Bonnie S.; Bauer, Marissa L.; Crayon, John J.

    2008-01-01

To evaluate mercury (Hg) and other element exposure in cliff swallows (Petrochelidon pyrrhonota), eggs were collected from 16 sites within the mining-impacted Cache Creek watershed, Colusa, Lake, and Yolo counties, California, USA, in 1997-1998. Nestlings were collected from seven sites in 1998. Geometric mean total Hg (THg) concentrations ranged from 0.013 to 0.208 µg/g wet weight (ww) in cliff swallow eggs and from 0.047 to 0.347 µg/g ww in nestlings. Mercury detected in eggs generally followed the spatial distribution of Hg in the watershed based on proximity to both anthropogenic and natural sources. Mean Hg concentrations in samples of eggs and nestlings collected from sites near Hg sources were up to five and seven times higher, respectively, than in samples from reference sites within the watershed. Concentrations of other detected elements, including aluminum, beryllium, boron, calcium, manganese, strontium, and vanadium, were more frequently elevated at sites near Hg sources. Overall, Hg concentrations in eggs from Cache Creek were lower than those reported in eggs of tree swallows (Tachycineta bicolor) from highly contaminated locations in North America. Total Hg concentrations were lower in all Cache Creek egg samples than adverse effects levels established for other species. Total Hg concentrations in bullfrogs (Rana catesbeiana) and foothill yellow-legged frogs (Rana boylii) collected from 10 of the study sites were both positively correlated with THg concentrations in cliff swallow eggs. Our data suggest that cliff swallows are reliable bioindicators of environmental Hg. © Springer Science+Business Media, LLC 2007.

  13. Incorporating cache management behavior into seed dispersal: the effect of pericarp removal on acorn germination.

    Directory of Open Access Journals (Sweden)

    Xianfeng Yi

Full Text Available Selecting seeds for long-term storage is a key factor for food hoarding animals. Siberian chipmunks (Tamias sibiricus) remove the pericarp and scatter hoard sound acorns of Quercus mongolica over those that are insect-infested to maximize returns from caches. We have no knowledge of whether these chipmunks remove the pericarp from acorns of other species of oaks and if this behavior benefits seedling establishment. In this study, we tested whether Siberian chipmunks engage in this behavior with acorns of three other Chinese oak species, Q. variabilis, Q. aliena and Q. serrata var. brevipetiolata, and how the dispersal and germination of these acorns are affected. Our results show that when chipmunks were provided with sound and infested acorns of Quercus variabilis, Q. aliena and Q. serrata var. brevipetiolata, the two types were equally harvested and dispersed. This lack of discrimination suggests that Siberian chipmunks are incapable of distinguishing between sound and insect-infested acorns. However, Siberian chipmunks removed the pericarp from acorns of these three oak species prior to dispersing and caching them. Consequently, significantly more sound acorns were scatter hoarded and more infested acorns were immediately consumed. Additionally, indoor germination experiments showed that pericarp removal by chipmunks promoted acorn germination while artificial removal showed no significant effect. Our results show that pericarp removal allows Siberian chipmunks to effectively discriminate against insect-infested acorns and may represent an adaptive behavior for cache management. Because of the germination patterns of pericarp-removed acorns, we argue that the foraging behavior of Siberian chipmunks could have potential impacts on the dispersal and germination of acorns from various oak species.

  14. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    Science.gov (United States)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.

  15. Globalized Newton-Krylov-Schwarz algorithms and software for parallel implicit CFD.

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.; Mathematics and Computer Science; Old Dominion Univ.; Iowa State Univ.

    2000-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz ({psi}NKS) algorithmic framework is presented as a widely applicable answer. This article shows that for the classical problem of three-dimensional transonic Euler flow about an M6 wing, {psi}NKS can simultaneously deliver globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of {psi}NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. The authors therefore distill several recommendations from their experience and reading of the literature on various algorithmic components of {psi}NKS, and they describe a freely available MPI-based portable parallel software implementation of the solver employed here.

  16. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy...... varies with the application’s data sharing pattern, and is around 65% in the average case and 1% in the best case when considering the traffic pattern as a whole. For individual connections, our method produces tight worst-case bandwidths....

  17. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  18. Mercury and Methylmercury concentrations and loads in Cache Creek Basin, California, January 2000 through May 2001

    Science.gov (United States)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darrell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and mass loads of total mercury and methylmercury in streams draining abandoned mercury mines and near geothermal discharge in Cache Creek Basin, California, were measured during a 17-month period from January 2000 through May 2001. Rainfall and runoff averages during the study period were lower than long-term averages. Mass loads of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, were generally the highest during or after winter rainfall events. During the study period, mass loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas because of a lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a source of mercury and methylmercury to downstream receiving bodies of water such as the Delta of the San Joaquin and Sacramento Rivers. Much of the mercury in these sediments was deposited over the last 150 years by erosion and stream discharge from abandoned mines or by continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas. These constituents included aqueous concentrations of boron, chloride, lithium, and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges were enriched with more oxygen-18 relative to oxygen-16 than meteoric waters, whereas the enrichment by stable isotopes of water from much of the runoff from abandoned mines was similar to that of meteoric water. Geochemical signatures from stable isotopes and trace-element concentrations may be useful as tracers of total mercury or methylmercury from specific locations; however, mercury and methylmercury are not conservatively transported. A distinct mixing trend of
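
    The mass loads reported in studies like this one are the product of concentration and streamflow; a minimal unit-conversion sketch (the concentration and discharge values are illustrative, not data from the study):

```python
def mass_load_kg_per_day(conc_ng_per_L, discharge_m3_per_s):
    # 1 ng/L = 1e-9 kg/m^3; multiply by discharge (m^3/s) and 86,400 s/day.
    return conc_ng_per_L * 1e-9 * discharge_m3_per_s * 86_400

# Illustrative values: 10 ng/L total mercury carried by a 50 m^3/s flow.
print(mass_load_kg_per_day(10, 50))  # ~0.0432 kg/day
```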

  19. Photogrammetric UAV Mapping of Terrain under Dense Coastal Vegetation: An Object-Oriented Classification Ensemble Algorithm for Classification and Terrain Correction

    Directory of Open Access Journals (Sweden)

    Xuelian Meng

    2017-11-01

    Photogrammetric UAVs are seeing a surge in use for high-resolution mapping, but their use to map terrain under dense vegetation cover remains challenging due to a lack of exposed ground surfaces. This paper presents a novel object-oriented classification ensemble algorithm that leverages height, texture and contextual information of UAV data to improve landscape classification and terrain estimation. Its implementation incorporates multiple heuristics, such as multi-input machine learning-based classification, object-oriented ensemble, and integration of UAV and GPS surveys for terrain correction. Experiments on a densely vegetated wetland restoration site showed classification improvement from 83.98% to 96.12% in overall accuracy and from 0.7806 to 0.947 in kappa value. Standard, existing UAV terrain mapping algorithms and software produced reliable digital terrain models only over exposed bare ground (mean error = −0.019 m and RMSE = 0.035 m) but severely overestimated the terrain by ~80% of mean vegetation height in vegetated areas. The terrain correction method successfully reduced the mean error from 0.302 m to −0.002 m (RMSE from 0.342 m to 0.177 m) in low vegetation and from 1.305 m to 0.057 m (RMSE from 1.399 m to 0.550 m) in tall vegetation. Overall, this research validated a feasible solution for integrating UAV and RTK GPS for terrain mapping in densely vegetated environments.
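
    The correction step can be sketched as subtracting a per-class height bias from the UAV surface model. The elevations below are invented for illustration; the bias values reuse the mean errors quoted in the abstract (0 m for bare ground, 0.302 m for low vegetation, 1.305 m for tall vegetation). This is a generic sketch, not the paper's algorithm.

```python
# Per-class terrain correction: subtract the mean (UAV surface minus GPS
# ground) bias estimated for each vegetation class from survey check points.
uav_elev = {"bare": 1.200, "low_veg": 1.550, "tall_veg": 2.600}  # m, UAV DSM
bias     = {"bare": 0.000, "low_veg": 0.302, "tall_veg": 1.305}  # m, per class

corrected = {cls: round(z - bias[cls], 3) for cls, z in uav_elev.items()}
print(corrected)  # {'bare': 1.2, 'low_veg': 1.248, 'tall_veg': 1.295}
```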

  20. Fast and Near-Optimal Timing-Driven Cell Sizing under Cell Area and Leakage Power Constraints Using a Simplified Discrete Network Flow Algorithm

    Directory of Open Access Journals (Sweden)

    Huan Ren

    2013-01-01

    We propose a timing-driven discrete cell-sizing algorithm that can address total cell size and/or leakage power constraints. We model cell sizing as a “discretized” min-cost network flow problem, wherein the available sizes of each cell are modeled as nodes. Flow passing through a node indicates the choice of the corresponding cell size, and the total flow cost reflects the timing objective function value corresponding to these choices. Compared to other discrete optimization methods for cell sizing, our method can obtain near-optimal solutions in a time-efficient manner. We tested our algorithm on ISCAS’85 benchmarks and compared our results to those produced by an optimal dynamic programming (DP)-based method. The results show that, compared to the optimal method, the improvement to an initial sizing solution obtained by our method is only 1% (3%) worse when using a 180 nm (90 nm) library, while being 40–60 times faster. We also obtained results for ISPD’12 cell-sizing benchmarks under a leakage power constraint and compared them to those of a state-of-the-art approximate DP method (optimal DP runs out of memory for even the smallest of these circuits). Our results show that we are only 0.9% worse than the approximate DP method, while being more than twice as fast.

  1. Prediction of crack growth direction by Strain Energy Sih's Theory on specimens SEN under tension-compression biaxial loading employing Genetic Algorithms

    International Nuclear Information System (INIS)

    Rodriguez-MartInez R; Lugo-Gonzalez E; Urriolagoitia-Calderon G; Urriolagoitia-Sosa G; Hernandez-Gomez L H; Romero-Angeles B; Torres-San Miguel Ch

    2011-01-01

    Crack growth direction has been studied in many ways. In particular, Sih's strain energy density theory predicts that a fracture under a three-dimensional state of stress propagates in the direction of minimum strain energy density. In this work, the angle of fracture growth was studied, considering a biaxial stress state at the crack tip of SEN specimens. The stress state applied to a tension-compression SEN specimen is biaxial at the crack tip, as can be observed in figure 1. A solution method is proposed to obtain a mathematical model using genetic algorithms, which have demonstrated great capacity for solving many engineering problems. From the model given by Sih, the strain energy density stored per unit volume at the crack tip can be deduced as dW = [(σx² + σy²)/(2E) − (ν/E)σxσy] dV (1). From equation (1), a mathematical deduction to solve for θ in this case was developed employing genetic algorithms, where θ is the crack propagation direction in the x-y plane. The mechanical properties of steel and aluminium were used for the modelled specimens, since these are two of the materials most used in engineering design. The results obtained show stable zones of fracture propagation, but only within a range of applied loading.
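
    Equation (1) can be evaluated directly. A numeric check with typical steel properties (E = 200 GPa, ν = 0.3) and assumed crack-tip stresses — the stress values are illustrative, not taken from the paper:

```python
def strain_energy_density(sx, sy, E, nu):
    # Eq. (1): dW/dV = (sx^2 + sy^2) / (2E) - (nu / E) * sx * sy
    return (sx ** 2 + sy ** 2) / (2.0 * E) - (nu / E) * sx * sy

# Illustrative tension-compression biaxial state at the crack tip:
dWdV = strain_energy_density(sx=100e6, sy=-50e6, E=200e9, nu=0.3)
print(dWdV)  # ~3.875e4 J/m^3
```

    In the paper's setting σx and σy are themselves functions of the candidate angle θ, and the genetic algorithm searches for the θ that minimizes this density.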

  2. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
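
    A hybrid genetic-simulated annealing loop, in the generic sense rather than the paper's HGSAA: the GA supplies crossover and mutation, while an SA-style Metropolis test with a cooling temperature decides whether a worse offspring may replace its parent. The objective is a toy convex function standing in for the location-inventory-routing cost.

```python
import math
import random

random.seed(1)

def cost(x):
    # Toy objective standing in for the location-inventory-routing cost.
    return sum(v * v for v in x)

def hybrid_ga_sa(dim=5, pop_size=20, generations=200, t0=1.0, cooling=0.97):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            mate = random.choice(pop)
            cut = random.randrange(dim)
            # GA part: one-point crossover followed by Gaussian mutation.
            child = [v + random.gauss(0.0, 0.1) for v in parent[:cut] + mate[cut:]]
            # SA part: Metropolis test lets a worse child occasionally survive.
            delta = cost(child) - cost(parent)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        temp *= cooling  # cooling schedule
    return min(pop, key=cost)

best = hybrid_ga_sa()
print(cost(best))  # far below the ~40 expected cost of a random start
```

    The Metropolis acceptance is what distinguishes the hybrid from a plain GA: early in the run (high temperature) diversity is preserved, while late in the run the search becomes effectively greedy.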

  3. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  4. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  5. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions.

    Science.gov (United States)

    Watson, Richard A; Mills, Rob; Buckley, C L; Kouvaris, Kostas; Jackson, Adam; Powers, Simon T; Cox, Chris; Tudge, Simon; Davies, Adam; Kounios, Loizos; Power, Daniel

    2016-01-01

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term "evolutionary connectionism" to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary

  6. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    With the rapid development of video surveillance technology, and especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in a cloud-based video surveillance system, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.

  7. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  8. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

    Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis allowing for more robust system characterization, including indices such as the 100 year flood return period. These results are significant for water quality management as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport affecting the downstream water bodies of the Sacramento River and Sacramento-San Joaquin Delta which provide drinking water to over 25 million people.
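
    A return period such as the 100-year flood can be estimated from an annual-maximum series. One simple, standard approach is the Weibull plotting position T = (n + 1)/m, where m is the rank of the flow (largest = 1). The flows below are invented for illustration, not data from this study:

```python
def weibull_return_periods(annual_max_flows):
    # Rank annual maxima from largest (m = 1) to smallest (m = n);
    # the Weibull plotting position assigns T = (n + 1) / m years.
    n = len(annual_max_flows)
    ranked = sorted(annual_max_flows, reverse=True)
    return [(q, (n + 1) / m) for m, q in enumerate(ranked, start=1)]

# Illustrative 9-year annual-maximum record (m^3/s):
flows = [120, 85, 240, 95, 310, 60, 150, 200, 110]
for q, T in weibull_return_periods(flows):
    print(f"{q:5d} m^3/s  T = {T:5.2f} yr")
# The largest flow (310 m^3/s) is assigned T = (9 + 1) / 1 = 10 years.
```

    Estimating a 100-year event from a short record requires fitting a distribution (e.g. Gumbel or log-Pearson III) rather than plotting positions alone; the sketch shows only the empirical step.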

  9. A Partial Backlogging Inventory Model for Deteriorating Item under Fuzzy Inflation and Discounting over Random Planning Horizon: A Fuzzy Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Dipak Kumar Jana

    2013-01-01

    An inventory model for a deteriorating item is considered over a random planning horizon under inflation and time value of money. The model is described in two different environments: random and fuzzy random. The proposed model allows a stock-dependent consumption rate and shortages with partial backlogging. In the fuzzy stochastic model, possibility chance constraints are used for defuzzification of the imprecise expected total profit. Finally, a genetic algorithm (GA) and a fuzzy simulation-based genetic algorithm (FSGA) are used to make decisions for the above inventory models. The models are illustrated with some numerical data. Sensitivity analysis on the expected profit function is also presented. Scope and purpose: The traditional inventory model considers the ideal case in which depletion of inventory is caused by a constant demand rate. However, to keep sales higher, the inventory level would need to remain high, which would also result in higher holding or procurement cost. Also, in many real situations, during a longer shortage period some of the customers may refuse the management. For instance, for fashionable commodities and high-tech products with short product life cycles, the willingness of a customer to wait for backlogging diminishes with the length of the waiting time. Most of the classical inventory models did not take into account the effects of inflation and the time value of money. But the economic situation of most countries has changed to such an extent, due to large-scale inflation and the consequent sharp decline in the purchasing power of money, that it is no longer possible to ignore these effects. The purpose of this paper is to maximize the expected profit over the random planning horizon.

  10. Integrating Cache-Related Pre-emption Delays into Analysis of Fixed Priority Scheduling with Pre-emption Thresholds

    NARCIS (Netherlands)

    Bril, R.J.; Altmeyer, S.; van den Heuvel, M.H.P.; Davis, R.I.; Behnam, M.

    2014-01-01

    Cache-related pre-emption delays (CRPD) have been integrated into the schedulability analysis of sporadic tasks with constrained deadlines for fixed-priority pre-emptive scheduling (FPPS). This paper generalizes that work by integrating CRPD into the schedulability analysis of tasks with arbitrary
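
    The standard fixed-priority response-time recurrence, extended with a per-pre-emption CRPD term, can be sketched as a fixed-point iteration. This is the generic textbook form, not the paper's threshold-aware analysis; the task parameters are illustrative.

```python
import math

def response_time(i, tasks, crpd):
    """Worst-case response time of task i under FPPS with CRPD.

    tasks: list of (C, T) pairs sorted by descending priority.
    crpd[j]: cache-related pre-emption delay charged per pre-emption
             by higher-priority task j (a simplifying assumption).
    """
    C, _ = tasks[i]
    R = C
    while True:
        # Each higher-priority job contributes its WCET plus one CRPD term.
        interference = sum(
            math.ceil(R / Tj) * (Cj + crpd[j])
            for j, (Cj, Tj) in enumerate(tasks[:i])
        )
        R_next = C + interference
        if R_next == R:  # fixed point (a real analysis also checks R <= deadline)
            return R
        R = R_next

# Two tasks: high-priority (C=1, T=4); low-priority (C=2, T=10),
# paying 0.5 time units of CRPD per pre-emption by the high-priority task.
tasks = [(1, 4), (2, 10)]
print(response_time(1, tasks, crpd=[0.5, 0.0]))  # 3.5
```

    Integrating CRPD this way inflates each higher-priority interference term; the cited work refines which pre-emptions can actually occur under pre-emption thresholds, which this sketch does not model.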

  11. Assessment of watershed vulnerability to climate change for the Uinta-Wasatch-Cache and Ashley National Forests, Utah

    Science.gov (United States)

    Janine Rice; Tim Bardsley; Pete Gomben; Dustin Bambrough; Stacey Weems; Sarah Leahy; Christopher Plunkett; Charles Condrat; Linda A. Joyce

    2017-01-01

    Watersheds on the Uinta-Wasatch-Cache and Ashley National Forests provide many ecosystem services, and climate change poses a risk to these services. We developed a watershed vulnerability assessment to provide scientific information for land managers facing the challenge of managing these watersheds. Literature-based information and expert elicitation are used to...

  12. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

    Tannins are very common among plant seeds but their effects on the fate of seeds, for example, via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that only differed in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species (Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, the tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable tannin vs condensed tannin) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentrations had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far they will transport seeds.

  13. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
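
    The tonnage calculation behind such isopach-based estimates is area × thickness × a rank-dependent density factor. A commonly used USGS figure for subbituminous coal is about 1,770 short tons per acre-foot; treat that factor and the area/thickness values below as illustrative assumptions, not the report's inputs.

```python
# Volumetric coal resource estimate: short tons = acres x feet x rank factor.
TONS_PER_ACRE_FOOT_SUBBITUMINOUS = 1770  # approximate, rank-dependent

def coal_short_tons(area_acres, avg_thickness_ft):
    return area_acres * avg_thickness_ft * TONS_PER_ACRE_FOOT_SUBBITUMINOUS

# Illustrative block: 10,000 acres averaging 15 ft of coal.
print(coal_short_tons(10_000, 15))  # 265500000 short tons (265.5 million)
```

    A real resource calculation sums such products over thickness contour intervals of the isopach map, which is where the manual and computer-generated totals in the report differ slightly.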

  14. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    Science.gov (United States)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer in multi-user environments. Typical deployments use standard hard drives, as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster; while caching is not a native feature of CephFS (it exists only in the Ceph Object store), we show how one can implement a caching mechanism profiting from an implementation at a lower level. As our illustration, we present our CephFS setup, IO performance tests, and overall experience with such a configuration. We hope this work will serve the community’s interest in using disk-caching mechanisms for applicable uses such as distributed storage systems, seeking an overall IO performance gain.

  15. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  16. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  17. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For a more in-depth scientific presentation and discussion of the research, a series of detailed technical reports on the integrated mercury studies is available online.

  18. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  19. The Identification and Treatment of a Unique Cache of Organic Artefacts from Menorca's Bronze Age

    Directory of Open Access Journals (Sweden)

    Howard Wellman

    1996-05-01

    A unique cache of organic artefacts was excavated in March 1995 from Cova d'es Carritx, Menorca, a sealed cave system that was used as a mortuary in the late second or early first millennium BC. This deposit included a set of unique conical tubes made of bovine horn sheath, stuffed with hair or other fibres, and capped with wooden disks. Other materials were found in association with the tubes, including a copper-tin alloy rod. The decision to display some of the tubes required a degree of consolidative strengthening that conflicted with the conservation aim of preserving the artefacts essentially unchanged for future study. The two most complete artefacts were treated by localised consolidation (with Paraloid B-72), while the other two were left untreated. The two consolidated tubes were provided with display-ready mounts, while the others were packaged to minimise the effects of handling and long-term storage.

  20. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

    We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis) appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus), which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were either near (within 50 m) or far (200 m) from the railway. Relative to far transects, near transects had 2.4 times more squirrel sightings but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail, and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and of bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  2. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  3. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  4. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food supply, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results are related to a practical experiment, showing interesting and valuable results.
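As background for the model class this abstract invokes, the stretched exponential (Kohlrausch) decay is the standard closed form associated with long-range-dependent relaxation. The sketch below is purely illustrative: the function name and parameter values are assumptions, not the paper's fitted model.

```python
import math

def stretched_exponential(t, tau=1.0, beta=0.5):
    """Kohlrausch (stretched exponential) decay: exp(-(t/tau)**beta).

    For beta == 1 this reduces to ordinary exponential decay; beta < 1
    produces the heavy tail associated with long-range dependence.
    tau and beta here are illustrative, not values from the paper.
    """
    return math.exp(-((t / tau) ** beta))

# For beta < 1 the tail decays more slowly than a plain exponential:
for t in (0.5, 1.0, 4.0):
    print(t, stretched_exponential(t, beta=0.5), math.exp(-t))
```

For large `t` the stretched form stays well above `exp(-t)`, which is the qualitative signature such models are fitted to.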

  5. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

    In this paper we present an implicit dictionary with the working set property, i.e., a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and…

  6. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    International Nuclear Information System (INIS)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of ¹⁸O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  7. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California.

    Science.gov (United States)

    Domagalski, Joseph L; Alpers, Charles N; Slotton, Darell G; Suchanek, Thomas H; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of (18)O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  8. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  9. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  10. Policy-aware algorithms for proxy placement in the Internet

    Science.gov (United States)

    Kamath, Krishnanand M.; Bassali, Harpal S.; Hosamani, Rajendraprasad B.; Gao, Lixin

    2001-07-01

    The Internet has grown explosively over the past few years and has matured into an important commercial infrastructure. The explosive growth of traffic has contributed to degradation of user-perceived response times in today's Internet. Caching at proxy servers has emerged as an effective way of reducing overall latency. The effectiveness of a proxy server is primarily determined by its locality. This locality is affected by factors such as the Internet topology and routing policies. In this paper, we present heuristic algorithms for placing proxies in the Internet that consider both Internet topology and routing policies. In particular, we make use of the logical topology inferred from Autonomous System (AS) relationships to derive the path between a proxy and a client. We present heuristic algorithms for placing proxies and evaluate these algorithms against the Internet logical topology over three years. To the best of our knowledge, this is the first work on placing proxy servers in the Internet that considers logical topology.
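The paper's heuristics operate on AS-level, policy-derived paths; as a generic illustration of the underlying optimization, here is a hedged greedy placement sketch. The cost matrix, function name, and greedy rule are assumptions for illustration, not the authors' algorithm.

```python
def greedy_proxy_placement(latency, k):
    """Pick k proxy sites greedily: each step adds the candidate site
    that most reduces total client-to-nearest-proxy latency.

    latency[c][p] is the path cost from client c to candidate site p,
    a stand-in for the policy-aware AS-level costs the paper derives.
    Assumes k <= number of candidate sites.
    """
    n_clients = len(latency)
    n_sites = len(latency[0])
    chosen = []
    best = [float("inf")] * n_clients  # cost to nearest chosen proxy

    for _ in range(k):
        def gain(p):
            # Total latency reduction if site p were added now.
            return sum(max(best[c] - latency[c][p], 0) for c in range(n_clients))
        p = max((s for s in range(n_sites) if s not in chosen), key=gain)
        chosen.append(p)
        best = [min(best[c], latency[c][p]) for c in range(n_clients)]

    return chosen, sum(best)
```

With two clients and two candidate sites, picking both sites drives each client to its cheapest proxy; with one pick, the total reflects the single best compromise.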

  11. Tracking Seed Fates of Tropical Tree Species: Evidence for Seed Caching in a Tropical Forest in North-East India

    Science.gov (United States)

    Sidhu, Swati; Datta, Aparajita

    2015-01-01

    Rodents affect the post-dispersal fate of seeds by acting either as on-site seed predators or as secondary dispersers when they scatter-hoard seeds. The tropical forests of north-east India harbour a high diversity of little-studied terrestrial murid and hystricid rodents. We examined the role played by these rodents in determining the seed fates of tropical evergreen tree species in a forest site in north-east India. We selected ten tree species (3 mammal-dispersed and 7 bird-dispersed) that varied in seed size and followed the fates of 10,777 tagged seeds. We used camera traps to determine the identity of rodent visitors, visitation rates and their seed-handling behavior. Seeds of all tree species were handled by at least one rodent taxon. Overall rates of seed removal (44.5%) were much higher than direct on-site seed predation (9.9%), but seed-handling behavior differed between the terrestrial rodent groups: two species of murid rodents removed and cached seeds, and two species of porcupines were on-site seed predators. In addition, a true cricket, Brachytrupes sp., cached seeds of three species underground. We found 309 caches formed by the rodents and the cricket; most were single-seeded (79%) and seeds were moved up to 19 m. Over 40% of seeds were re-cached from primary cache locations, while about 12% germinated in the primary caches. Seed removal rates varied widely amongst tree species, from 3% in Beilschmiedia assamica to 97% in Actinodaphne obovata. Seed predation was observed in nine species. Chisocheton cumingianus (57%) and Prunus ceylanica (25%) had moderate levels of seed predation while the remaining species had less than 10% seed predation. We hypothesized that seed traits that provide information on resource quantity would influence rodent choice of a seed, while traits that determine resource accessibility would influence whether seeds are removed or eaten. Removal rates significantly decreased (p seed size. Removal rates were significantly

  12. A Refined Self-Tuning Filter-Based Instantaneous Power Theory Algorithm for Indirect Current Controlled Three-Level Inverter-Based Shunt Active Power Filters under Non-sinusoidal Source Voltage Conditions

    Directory of Open Access Journals (Sweden)

    Yap Hoon

    2017-02-01

    Full Text Available In this paper, a refined reference current generation algorithm based on instantaneous power (pq) theory is proposed, for operation of an indirect current controlled (ICC) three-level neutral-point diode clamped (NPC) inverter-based shunt active power filter (SAPF) under non-sinusoidal source voltage conditions. SAPF is recognized as one of the most effective solutions to current harmonics due to its flexibility in dealing with various power system conditions. As for its controller, pq theory has widely been applied to generate the desired reference current due to its simple implementation features. However, the conventional dependency on a self-tuning filter (STF) in generating the reference current has significantly limited the mitigation performance of SAPF. Besides, the conventional STF-based pq theory algorithm is still considered to possess needless features which increase computational complexity. Furthermore, the conventional algorithm is mostly designed to suit operation of direct current controlled (DCC) SAPF, which is incapable of handling switching ripple problems, thereby leading to inefficient mitigation performance. Therefore, three main improvements are performed which include replacement of the STF with a mathematical-based fundamental real power identifier, removal of redundant features, and generation of a sinusoidal reference current. To validate effectiveness and feasibility of the proposed algorithm, simulation work in MATLAB-Simulink and laboratory tests utilizing a TMS320F28335 digital signal processor (DSP) are performed. Both simulation and experimental findings demonstrate superiority of the proposed algorithm over the conventional algorithm.

  13. Genetic algorithm-based optimization of testing and maintenance under uncertain unavailability and cost estimation: A survey of strategies for harmonizing evolution and accuracy

    International Nuclear Information System (INIS)

    Villanueva, J.F.; Sanchez, A.I.; Carlos, S.; Martorell, S.

    2008-01-01

    This paper presents the results of a survey to show the applicability of an approach based on a combination of distribution-free tolerance interval and genetic algorithms for testing and maintenance optimization of safety-related systems based on unavailability and cost estimation acting as uncertain decision criteria. Several strategies have been checked using a combination of Monte Carlo (simulation)--genetic algorithm (search-evolution). Tolerance intervals for the unavailability and cost estimation are obtained to be used by the genetic algorithms. Both single- and multiple-objective genetic algorithms are used. In general, it is shown that the approach is a robust, fast and powerful tool that performs very favorably in the face of noise in the output (i.e. uncertainty) and it is able to find the optimum over a complicated, high-dimensional nonlinear space in a tiny fraction of the time required for enumeration of the decision space. This approach reduces the computational effort by means of providing appropriate balance between accuracy of simulation and evolution; however, negative effects are also shown when a not well-balanced accuracy-evolution couple is used, which can be avoided or mitigated with the use of a single-objective genetic algorithm or the use of a multiple-objective genetic algorithm with additional statistical information

  14. Seed drops and caches by the harvester ant Messor barbarus: do they contribute to seed dispersal in Mediterranean grasslands?

    Science.gov (United States)

    Detrain, C.; Tasse, Olivier

    To determine whether the harvester ant Messor barbarus acts as a seed disperser in Mediterranean grasslands, the accuracy level of seed processing was assessed in the field by quantifying seed drops by loaded foragers. In the vicinity of exploited seed patches, 3 times as many diaspores were found as in controls due to seed losses by foragers. Over trails, up to 30% of harvested seeds were dropped, singly, by workers, but all were recovered by nestmates within 24 h. Seeds were also dropped within temporary caches, with very few viable diaspores being left per cache when ants no longer used the trail. Globally, ant-dispersed diaspores accounted for only 0.1% of seeds harvested by M. barbarus. We discuss the possible significance for grassland vegetation of harvester-ant-mediated seed dispersal.

  15. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents an end-to-end interference minimizing uniquely designed high performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers to deliver a seamless high performance I/O stack. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  16. The People of Bear Hunter Speak: Oral Histories of the Cache Valley Shoshones Regarding the Bear River Massacre

    OpenAIRE

    Crawford, Aaron L.

    2007-01-01

    The Cache Valley Shoshone are the survivors of the Bear River Massacre, where a battle between a group of U.S. volunteer troops from California and a Shoshone village degenerated into the worst Indian massacre in U.S. history, resulting in the deaths of over 200 Shoshones. The massacre occurred due to increasing tensions over land use between the Shoshones and the Mormon settlers. Following the massacre, the Shoshones attempted settling in several different locations in Box Elder County, eventu...

  17. Potential Mechanisms Driving Population Variation in Spatial Memory and the Hippocampus in Food-caching Chickadees.

    Science.gov (United States)

    Croston, Rebecca; Branch, Carrie L; Kozlovsky, Dovid Y; Roth, Timothy C; LaDage, Lara D; Freas, Cody A; Pravosudov, Vladimir V

    2015-09-01

    Harsh environments and severe winters have been hypothesized to favor improvement of the cognitive abilities necessary for successful foraging. Geographic variation in winter climate, then, is likely associated with differences in selection pressures on cognitive ability, which could lead to evolutionary changes in cognition and its neural mechanisms, assuming that variation in these traits is heritable. Here, we focus on two species of food-caching chickadees (genus Poecile), which rely on stored food for survival over winter and require the use of spatial memory to recover their stores. These species also exhibit extensive climate-related population level variation in spatial memory and the hippocampus, including volume, the total number and size of neurons, and adults' rates of neurogenesis. Such variation could be driven by several mechanisms within the context of natural selection, including independent, population-specific selection (local adaptation), environment experience-based plasticity, developmental differences, and/or epigenetic differences. Extensive data on cognition, brain morphology, and behavior in multiple populations of these two species of chickadees along longitudinal, latitudinal, and elevational gradients in winter climate are most consistent with the hypothesis that natural selection drives the evolution of local adaptations associated with spatial memory differences among populations. Conversely, there is little support for the hypotheses that environment-induced plasticity or developmental differences are the main causes of population differences across climatic gradients. Available data on epigenetic modifications of memory ability are also inconsistent with the observed patterns of population variation, with birds living in more stressful and harsher environments having better spatial memory associated with a larger hippocampus and a larger number of hippocampal neurons. Overall, the existing data are most consistent with the

  18. Fast autodidactic adaptive equalization algorithms

    Science.gov (United States)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic gradient Bussgang-type algorithm, is used to derive two low-computation-cost algorithms: one equivalent to the initial algorithm, and one with improved convergence properties thanks to the minimization of a block criterion. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio communication context under severe propagation-channel conditions, showed a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.

  19. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users’ queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is supplied (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). On this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  20. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  1. Leveraging KVM Events to Detect Cache-Based Side Channel Attacks in a Virtualization Environment

    Directory of Open Access Journals (Sweden)

    Ady Wahyudi Paundu

    2018-01-01

    Full Text Available Cache-based side channel attack (CSCa) techniques in virtualization systems are becoming more advanced, while defense methods against them are still perceived as impractical. The most recent CSCa variant, called Flush + Flush, has shown that current detection methods can be easily bypassed. Within this work, we introduce a novel monitoring approach to detect CSCa operations inside a virtualization environment. We utilize the Kernel Virtual Machine (KVM) event data in the kernel and process this data using a machine learning technique to identify any CSCa operation in the guest Virtual Machine (VM). We evaluate our approach using Receiver Operating Characteristic (ROC) diagrams of multiple attack and benign operation scenarios. Our method successfully separates the CSCa datasets from the non-CSCa datasets, in both trained and non-trained data scenarios. The successful classification also includes the Flush + Flush attack scenario. We are also able to explain the classification results by extracting the set of most important features that separate both classes using their Fisher scores, and show that our monitoring approach can work to detect CSCa in general. Finally, we evaluate the overhead of our CSCa monitoring method and show that it has a negligible computation overhead on the host and the guest VM.
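The abstract mentions ranking features by their Fisher scores. A minimal sketch of that computation follows; the formula is the standard two-class Fisher score, while the function name and any sample values are illustrative assumptions, not the paper's feature set.

```python
def fisher_score(pos, neg):
    """Two-class Fisher score of one feature: (mu1 - mu2)^2 / (var1 + var2).

    A higher score means the feature (e.g., the count of some KVM event)
    separates attack (pos) samples from benign (neg) samples better.
    Event data shown in tests is made up for illustration.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    denom = var(pos) + var(neg)
    if denom == 0:
        # Identical-variance-zero case: perfectly separated or degenerate.
        return float("inf") if mean(pos) != mean(neg) else 0.0
    return (mean(pos) - mean(neg)) ** 2 / denom
```

Ranking all candidate features by this score, as the paper does, surfaces the events that best distinguish CSCa activity.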

  2. Dementia severity and the longitudinal costs of informal care in the Cache County population.

    Science.gov (United States)

    Rattinger, Gail B; Schwartz, Sarah; Mullins, C Daniel; Corcoran, Chris; Zuckerman, Ilene H; Sanders, Chelsea; Norton, Maria C; Fauth, Elizabeth B; Leoutsakos, Jeannie-Marie S; Lyketsos, Constantine G; Tschanz, JoAnn T

    2015-08-01

    Dementia costs are critical for informing healthcare policy, but limited longitudinal information exists. We examined longitudinal informal care costs of dementia in a population-based sample. Data from the Cache County Study included dementia onset, duration, and severity assessed by the Mini-Mental State Examination (MMSE), Clinical Dementia Rating Scale (CDR), and Neuropsychiatric Inventory (NPI). Informal costs of daily care (COC) were estimated based on median Utah wages. Mixed models estimated the relationship between severity and longitudinal COC in separate models for MMSE and CDR. Two hundred and eighty-seven subjects participated (53% female; mean (standard deviation) age 82.3 (5.9) years). Overall COC increased by 18% per year. COC was 6% lower per MMSE-point increase and, compared with very mild dementia, COC increased over twofold for mild, fivefold for moderate, and sixfold for severe dementia on the CDR. Greater dementia severity predicted higher costs. Disease management strategies addressing dementia progression may curb costs. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  3. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    Full Text Available The web is the most widely used communication mechanism today, owing to its flexibility and the nearly endless supply of tools for browsing it. As a result, roughly a million pages are added to it every day. It is thus the largest library, with textual and multimedia resources, ever seen, albeit a library distributed across all the servers that hold that information. As a reference source, it is important that data retrieval be efficient. Web caching serves this purpose: a technique whereby some web data is stored temporarily on local servers, so that it need not be requested from the remote server every time a user asks for it. However, the amount of memory available on local servers for storing this information is limited: one must decide which web objects to store and which not. This gives rise to several replacement policies, which this article explores. Using an experiment with real web requests, we compare the performance of these techniques.
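As a concrete illustration of one classic replacement policy a web-cache comparison like this would include, here is a minimal LRU sketch. It is a generic textbook policy, not the article's specific implementation, and capacity counts objects rather than bytes for brevity.

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used replacement: on overflow, evict the object
    that has gone unreferenced the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> object, oldest access first

    def get(self, key):
        if key not in self.store:
            return None  # miss: caller fetches from the origin server
        self.store.move_to_end(key)  # hit: mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

Policies such as LFU or GDSF differ only in which victim `put` selects on overflow, which is exactly the axis along which such studies compare hit rates.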

  4. A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries

    Science.gov (United States)

    Santos, Ricardo Jorge; Bernardino, Jorge

    On-line analytical processing against data warehouse databases is a common way of obtaining decision-making information in almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most-transacted items, etc. This means that many similar OLAP instructions are periodically repeated, and simultaneously, by several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data that was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because only the most recent data needs to be processed. Our tool also minimizes execution time and resource consumption for similar queries simultaneously executed by different users, putting the later ones on hold until the first finishes and then returning the results for all of them. Stored query results are held until they are considered outdated, then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
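The core idea described above, caching a query's last result plus a high-water mark and re-running only over newly inserted rows, can be sketched as follows. The class, an append-only fact list with increasing row ids, and the single sum aggregate are simplifying assumptions for illustration, not the tool's actual design.

```python
class IncrementalQueryCache:
    """Cache an aggregate result and recompute only over new rows.

    Assumes rows are append-only (row ids strictly increase) and the
    aggregate is associative (here: a sum), so old partial results can
    be merged with the contribution of rows inserted since.
    """

    def __init__(self, rows):
        self.rows = list(rows)   # (row_id, value) fact "table"
        self.cache = {}          # query key -> (last_seen_id, partial_sum)

    def append(self, row_id, value):
        self.rows.append((row_id, value))

    def total(self, key="sum"):
        last_id, partial = self.cache.get(key, (-1, 0))
        # Only rows newer than the cached high-water mark are scanned.
        partial += sum(v for rid, v in self.rows if rid > last_id)
        if self.rows:
            last_id = max(rid for rid, _ in self.rows)
        self.cache[key] = (last_id, partial)
        return partial
```

The second call to `total` after an append touches only the one new row, which is the source of the speedup the abstract reports.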

  5. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Full Text Available Field-programmable gate arrays (FPGAs and other reconfigurable computing (RC devices have been widely shown to have numerous advantages including order of magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures. In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
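
    The core traversal-cache idea — record a pointer-chasing visit order once into a flat buffer so that repeated passes become sequential streams — can be sketched in a few lines. The tree and function names below are invented for illustration:

```python
# Traversal-cache sketch: the first pass chases pointers through an irregular
# structure and serializes the visit order into a flat buffer; repeated
# passes then read sequentially (the access pattern an FPGA streams well).
class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def record_traversal(node, out):
    out.append(node.value)            # irregular, pointer-chasing access
    for child in node.children:
        record_traversal(child, out)
    return out

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
traversal_cache = record_traversal(tree, [])  # built once, streamed many times

def repeated_pass(cache):             # later passes: pure sequential access
    return sum(cache)

print(traversal_cache, repeated_pass(traversal_cache))  # [1, 2, 4, 3] 10
```

    The paper's framework adds the hard parts this sketch omits: detecting when a cached traversal is reusable and exploiting similarity between non-identical traversals.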

  6. External Memory Algorithms for Diameter and All-Pair Shortest-Paths on Sparse Graphs

    DEFF Research Database (Denmark)

    Arge, Lars; Meyer, Ulrich; Toma, Laura

    2004-01-01

    We present several new external-memory algorithms for finding all-pairs shortest paths in a V-node, E-edge undirected graph. For all-pairs shortest paths and diameter in unweighted undirected graphs we present cache-oblivious algorithms with O(V · (E/B) log_{M/B}(E/B)) I/Os, where B is the block size and M is the size of internal memory. For weighted undirected graphs we present a cache-aware APSP algorithm that performs O(V · (sqrt(V·E/B) + (E/B) log(E/B))) I/Os. We also present efficient cache-aware algorithms that find paths between all pairs of vertices in an unweighted graph with lengths within a small additive constant of the shortest path length. All of our results improve earlier results known for these problems. For approximate APSP we provide the first nontrivial results. Our diameter result uses O(V + E) extra space, and all of our other algorithms use O(V^2) space.

  7. A comparative study of three model-based algorithms for estimating state-of-charge of lithium-ion batteries under a new combined dynamic loading profile

    International Nuclear Information System (INIS)

    Yang, Fangfang; Xing, Yinjiao; Wang, Dong; Tsui, Kwok-Leung

    2016-01-01

    Highlights: • Three different model-based filtering algorithms for SOC estimation are compared. • A combined dynamic loading profile is proposed to evaluate the three algorithms. • Robustness against uncertainty in the initial state of the SOC estimators is investigated. • Battery capacity degradation is considered in SOC estimation. - Abstract: Accurate state-of-charge (SOC) estimation is critical for the safety and reliability of battery management systems in electric vehicles. Because SOC cannot be measured directly and its estimation is affected by many factors, such as ambient temperature, battery aging, and current rate, a robust SOC estimation approach needs to be developed to deal with time-varying and nonlinear battery systems. In this paper, three popular model-based filtering algorithms — the extended Kalman filter, the unscented Kalman filter, and the particle filter — are used to estimate SOC, and their performance regarding tracking accuracy, computation time, robustness against uncertainty in the initial SOC value, and battery degradation is compared. To evaluate these algorithms, a new combined dynamic loading profile composed of the dynamic stress test, the federal urban driving schedule and the US06 is proposed. The comparison results show that the unscented Kalman filter is the most robust to different initial values of SOC, while the particle filter converges fastest when the initial guess of SOC is far from the true initial SOC.
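
    All three estimators share the same predict/update cycle. A toy scalar Kalman filter illustrates it; the battery model below (coulomb counting for SOC, a linear voltage-vs-SOC curve) and every constant are simplified assumptions, not the models used in the paper:

```python
import random

# Toy scalar Kalman filter for SOC: predict by coulomb counting, then
# correct with a voltage measurement. All models/constants are invented.
random.seed(0)
Q_CAP = 3600.0                  # hypothetical 1 Ah cell, in ampere-seconds
A, B = 1.0, 3.0                 # toy voltage model: v = A*soc + B
PROC_VAR, MEAS_VAR = 1e-6, 1e-4

def predict(soc, p, current, dt):           # coulomb-counting prediction
    return soc - current * dt / Q_CAP, p + PROC_VAR

def update(soc, p, v_meas):                 # correct with the voltage reading
    k = p * A / (A * A * p + MEAS_VAR)      # Kalman gain
    return soc + k * (v_meas - (A * soc + B)), (1 - A * k) * p

true_soc, est, p = 0.9, 0.5, 1.0            # estimator starts far from truth
for _ in range(200):                        # 1 A discharge, 1 s steps
    true_soc -= 1.0 / Q_CAP
    est, p = predict(est, p, 1.0, 1.0)
    v = A * true_soc + B + random.gauss(0, 0.01)
    est, p = update(est, p, v)

print(abs(est - true_soc) < 0.05)           # recovers from the bad initial guess
```

    The EKF linearizes a nonlinear voltage model at each step, the UKF propagates sigma points, and the PF propagates weighted samples, but each one alternates exactly this prediction and measurement correction.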

  8. HPL for The Computer Farm & Retina Algorithm for the LHCb VELO Summer Student Report Diego Berdeja Suárez under Daniel Cámpora

    CERN Document Server

    Berdeja Suarez, Diego

    2014-01-01

    The Retina Algorithm attempts to emulate the human eye's ability to respond quickly to linear patterns. The general scheme consists of building a database of pre-constructed hit tracks and assigning a weight to each track depending on its proximity to actual input data hits. This weight is then considered in the parameter space of the database, which is searched for high-weight clusters.

  9. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Science.gov (United States)

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications, where multiple-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that the sensory data will be requested by forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data unlikely to be requested by forthcoming queries are cached in the head nodes of divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing the two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
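
    The placement rule reduces to a popularity ranking. A toy version, with invented attribute names, query slots, and sink capacity:

```python
from collections import Counter

# Toy popularity-based placement: count how often each attribute appears in
# recent query slots, cache the top-k at the sink, the rest at grid-cell
# head nodes. All names and numbers are invented for illustration.
SINK_CAPACITY = 2
recent_queries = [["temp", "humidity"],     # queries issued in recent slots
                  ["temp", "light"],
                  ["temp", "humidity"]]

popularity = Counter(a for slot in recent_queries for a in slot)
ranked = [a for a, _ in popularity.most_common()]
sink_cache = set(ranked[:SINK_CAPACITY])    # most likely to be re-queried
head_cache = set(ranked[SINK_CAPACITY:])
print(sorted(sink_cache), sorted(head_cache))  # ['humidity', 'temp'] ['light']
```

    A forthcoming query is then answered by composing the two tiers: sink-cached attributes directly, head-cached attributes with one extra hop.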

  10. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaicked into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees, and shrubs (excluding Tamarix)) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture was used as an additional feature, along with color, for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km² (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global positioning system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.

  11. Evaluation of low-temperature geothermal potential in Cache Valley, Utah. Report of investigation No. 174

    Energy Technology Data Exchange (ETDEWEB)

    de Vries, J.L.

    1982-11-01

    Field work consisted of locating 90 wells and springs throughout the study area, collecting water samples for later laboratory analyses, and field measurement of pH, temperature, bicarbonate alkalinity, and electrical conductivity. Na+, K+, Ca^2+, Mg^2+, SiO2, Fe, SO4^2-, Cl-, F-, and total dissolved solids were determined in the laboratory. Temperature profiles were measured in 12 additional, unused wells. Thermal gradients calculated from the profiles were approximately the same as the average for the Basin and Range province, about 35°C/km. One well produced a gradient of 297°C/km, most probably as a result of a near-surface occurrence of warm water. Possible warm-water reservoir temperatures were calculated using both the silica and the Na-K-Ca geothermometers, with the results averaging about 50 to 100°C. If mixing calculations were applied, taking into account the temperatures and silica contents of both warm springs or wells and the cold groundwater, reservoir temperatures up to about 200°C were indicated. Considering measured surface water temperatures, calculated reservoir temperatures, thermal gradients, and the local geology, most of the Cache Valley, Utah area is unsuited for geothermal development. However, the areas of North Logan, Benson, and Trenton were found to have anomalously warm groundwater in comparison to the background temperature of 13.0°C for the study area. The warm water has potential for isolated energy development but is not warm enough for major commercial development.

  12. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    NARCIS (Netherlands)

    Lunniss, W.; Altmeyer, S.; Davis, R.I.

    2014-01-01

    In multitasking real-time systems, the choice of scheduling algorithm is an important factor to ensure that response time requirements are met while maximising limited system resources. Two popular scheduling algorithms include fixed priority (FP) and earliest deadline first (EDF). While they have

  13. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    Science.gov (United States)

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

    Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize them, account for their impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ±12.84; 70% female) recruited from a HF clinic completed the CACHS in 2014, and results were evaluated using classical test theory and item response theory. Items would be deleted for low (<.05) or high (>.95) endorsement, low (<.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels (<.5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance eight items; monitoring seven items; and management five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05) and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.

  14. Novel Zooming Scale Hough Transform Pattern Recognition Algorithm for the PHENIX Detector

    Science.gov (United States)

    Koblesky, Theodore

    2012-03-01

    Single ultra-relativistic heavy ion collisions at RHIC and the LHC and multiple overlapping proton-proton collisions at the LHC present challenges to pattern recognition algorithms for tracking in these high multiplicity environments. One must satisfy many constraints including high track finding efficiency, ghost track rejection, and CPU time and memory constraints. A novel algorithm based on a zooming scale Hough Transform is now available in Ref [1] that is optimized for efficient high speed caching and flexible in terms of its implementation. In this presentation, we detail the application of this algorithm to the PHENIX Experiment silicon vertex tracker (VTX) and show initial results from Au+Au at √sNN = 200 GeV collision data taken in 2011. We demonstrate the current algorithmic performance and also show first results for the proposed sPHENIX detector. [4pt] Ref [1] Dr. Dion, Alan. ``Helix Hough'' http://code.google.com/p/helixhough/

  15. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  16. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  17. Multisensor satellite data for water quality analysis and water pollution risk assessment: decision making under deep uncertainty with fuzzy algorithm in framework of multimodel approach

    Science.gov (United States)

    Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim

    2017-10-01

    A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-modeling approach for socio-ecological risk assessment is formally defined, and a method for utilizing observation, measurement, and modeling data in this framework is described. Models and methodology for risk assessment within a decision support approach are defined and described. A method of water quality assessment using satellite observation data, based on analysis of the spectral reflectance of aquifers, is described. Spectral signatures of freshwater bodies and offshore waters are analyzed, and correlations between spectral reflectance, pollution, and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS, and Landsat sensors received in 2002-2014 have been utilized and verified by in-field spectrometry and laboratory measurements. A fuzzy-logic-based approach for decision support on water quality degradation risk is discussed: the decision on a water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements, and modeling are utilized within the proposed approach. It is shown that this algorithm allows estimation of the water quality degradation rate and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.

  18. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...

  19. Genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grefenstette, J.J.

    1994-12-31

    Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.

  20. Using genetic algorithm to determine the optimal order quantities for multi-item multi-period under warehouse capacity constraints in kitchenware manufacturing

    Science.gov (United States)

    Saraswati, D.; Sari, D. K.; Johan, V.

    2017-11-01

    The study was conducted at a manufacturer that produces various kinds of kitchenware, with the kitchen sink as the main product. Four types of steel sheets were selected as the raw materials for the kitchen sink. The problem was that the manufacturer wanted to determine how many steel sheets to order from a single supplier to meet the production requirements while minimizing the total inventory cost. In this case, the economic order quantity (EOQ) model was developed using an all-unit discount as the price of the steel sheets, with limited warehouse capacity. A genetic algorithm (GA) was used to find the minimum of the total inventory cost as a sum of purchasing cost, ordering cost, holding cost, and penalty cost.
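
    The GA setup described above can be sketched as follows: chromosomes are one order quantity per sheet type, and fitness is the total cost (purchasing with an all-unit discount, ordering, holding), with a penalty for exceeding warehouse capacity. All demands, prices, and cost coefficients below are invented for illustration:

```python
import random

# Minimal GA for multi-item order quantities under an all-unit discount
# and a shared warehouse capacity. All numbers are illustrative.
random.seed(1)
DEMAND = [120, 80, 60, 100]          # per-period demand for four sheet types
CAPACITY = 500                       # shared warehouse capacity (units)

def unit_price(q):                   # all-unit discount schedule
    return 10.0 if q < 100 else 9.0 if q < 200 else 8.5

def total_cost(qs):
    if sum(qs) > CAPACITY:           # infeasible: warehouse penalty
        return 1e9
    return sum(d * unit_price(q)     # purchasing
               + 50.0 * d / q        # ordering, $50 per order
               + 1.0 * q / 2         # holding on average inventory
               for q, d in zip(qs, DEMAND))

def mutate(qs):
    out = list(qs)
    i = random.randrange(len(out))
    out[i] = max(1, out[i] + random.randint(-20, 20))
    return out

pop = [[random.randint(1, 150) for _ in DEMAND] for _ in range(30)]
for _ in range(200):                 # keep the fitter half, refill by mutation
    pop.sort(key=total_cost)
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(15)]

best = min(pop, key=total_cost)
print(total_cost(best) < total_cost([10, 10, 10, 10]))  # beats a naive policy
```

    A production version would add crossover and tune population size and mutation rates, but the encoding and penalty-based constraint handling are the essential ingredients.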

  1. Land Use And Land Cover Dynamics Under Climate Change In Urbanizing Intermountain West: A Case Study From Cache County, Utah

    OpenAIRE

    Li, Enjie

    2013-01-01

    Climate change is tightly linked with urbanization. Urban development with increasing greenhouse gas emissions worsens climate change, while climate change in turn influences hydroclimate and ecosystem functions and indirectly affects urban systems. The Intermountain West is experiencing rapid urban growth, and climate change interacting with urbanization poses new challenges to the region. Urban planning needs to adapt to these new changes and constraints, and to develop new tools and p...

  2. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  3. Thousands of RNA-cached copies of whole chromosomes are present in the ciliate Oxytricha during development.

    Science.gov (United States)

    Lindblad, Kelsi A; Bracht, John R; Williams, April E; Landweber, Laura F

    2017-08-01

    The ciliate Oxytricha trifallax maintains two genomes: a germline genome that is active only during sexual conjugation and a transcriptionally active, somatic genome that derives from the germline via extensive sequence reduction and rearrangement. Previously, we found that long noncoding (lnc) RNA "templates"-telomere-containing, RNA-cached copies of mature chromosomes-provide the information to program the rearrangement process. Here we used a modified RNA-seq approach to conduct the first genome-wide search for endogenous, telomere-to-telomere RNA transcripts. We find that during development, Oxytricha produces long noncoding RNA copies for over 10,000 of its 16,000 somatic chromosomes, consistent with a model in which Oxytricha transmits an RNA-cached copy of its somatic genome to the sexual progeny. Both the primary sequence and expression profile of a somatic chromosome influence the temporal distribution and abundance of individual template RNAs. This suggests that Oxytricha may undergo multiple rounds of DNA rearrangement during development. These observations implicate a complex set of thousands of long RNA molecules in the wiring and maintenance of a highly elaborate somatic genome architecture. © 2017 Lindblad et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  4. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  5. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd disk-based caching proxy. The first simply starts fetching a whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD, and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...

  6. Contrasting patterns of survival and dispersal in multiple habitats reveal an ecological trap in a food-caching bird.

    Science.gov (United States)

    Norris, D Ryan; Flockhart, D T Tyler; Strickland, Dan

    2013-11-01

    A comprehensive understanding of how natural and anthropogenic variation in habitat influences populations requires long-term information on how such variation affects survival and dispersal throughout the annual cycle. Gray jays Perisoreus canadensis are widespread boreal resident passerines that use cached food to survive over the winter and to begin breeding during the late winter. Using multistate capture-recapture analysis, we examined apparent survival and dispersal in relation to habitat quality in a gray jay population over 34 years (1977-2010). Prior evidence suggests that natural variation in habitat quality is driven by the proportion of conifers on territories because of their superior ability to preserve cached food. Although neither adults (>1 year) nor juveniles (preference ecological trap for birds. Reproductive success, as shown in a previous study, but not survival, is sensitive to natural variation in habitat quality, suggesting that gray jays, despite living in harsh winter conditions, likely favor the allocation of limited resources towards self-maintenance over reproduction.

  7. Optimal Bidding and Operation of a Power Plant with Solvent-Based Carbon Capture under a CO2 Allowance Market: A Solution with a Reinforcement Learning-Based Sarsa Temporal-Difference Algorithm

    Directory of Open Access Journals (Sweden)

    Ziang Li

    2017-04-01

    Full Text Available In this paper, a reinforcement learning (RL-based Sarsa temporal-difference (TD algorithm is applied to search for a unified bidding and operation strategy for a coal-fired power plant with monoethanolamine (MEA-based post-combustion carbon capture under different carbon dioxide (CO2 allowance market conditions. The objective of the decision maker for the power plant is to maximize the discounted cumulative profit during the power plant lifetime. Two constraints are considered for the objective formulation. Firstly, the tradeoff between the energy-intensive carbon capture and the electricity generation should be made under presumed fixed fuel consumption. Secondly, the CO2 allowances purchased from the CO2 allowance market should be approximately equal to the quantity of CO2 emission from power generation. Three case studies are demonstrated thereafter. In the first case, we show the convergence of the Sarsa TD algorithm and find a deterministic optimal bidding and operation strategy. In the second case, compared with the independently designed operation and bidding strategies discussed in most of the relevant literature, the Sarsa TD-based unified bidding and operation strategy with time-varying flexible market-oriented CO2 capture levels is demonstrated to help the power plant decision maker gain a higher discounted cumulative profit. In the third case, a competitor operating another power plant identical to the preceding plant is considered under the same CO2 allowance market. The competitor also has carbon capture facilities but applies a different strategy to earn profits. The discounted cumulative profits of the two power plants are then compared, thus exhibiting the competitiveness of the power plant that is using the unified bidding and operation strategy explored by the Sarsa TD algorithm.
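
    The Sarsa TD rule at the heart of this approach can be illustrated on a toy problem. The chain MDP, rewards, and constants below are invented; in the paper the states and actions encode the plant's bidding and operation decisions:

```python
import random

# Generic on-policy Sarsa TD learning on a toy 5-step chain MDP, showing
# the update Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a)).
random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)          # action 1 moves right, 0 stays put
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES + 1) for a in ACTIONS}

def policy(s):                          # epsilon-greedy action selection
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(2000):                   # episodes; reward 1 at state 5
    s, a = 0, policy(0)
    while s < N_STATES:
        s2 = s + 1 if a == 1 else s
        r = 1.0 if s2 == N_STATES else 0.0
        a2 = policy(s2) if s2 < N_STATES else 0
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] - Q[(s, a)])
        s, a = s2, a2

print(Q[(0, 1)] > Q[(0, 0)])  # moving toward the goal has the higher value
```

    Because the next action a' is drawn from the same epsilon-greedy policy being learned, Sarsa evaluates the policy it actually follows, which is what makes it suitable for the cumulative-profit objective described above.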

  8. Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth

    Science.gov (United States)

    Pratt, Lisa; Beaty, David; Westall, Frances; Parnell, John; Poulet, François

    2010-05-01

    Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth Lisa Pratt, David Beatty, Frances Westall, John Parnell, François Poulet, and the MRR-SAG team The search for preserved evidence of life is the keystone concept for a new generation of Mars rover capable of exploring, sampling, and caching diverse suites of rocks from outcrops. The proposed mission is conceived to address two general objectives: conduct high-priority in situ science and make concrete steps towards the possible future return of samples to Earth. We propose the name Mars Astrobiology Explorer-Cacher (MAX-C) to best reflect the dual purpose of the proposed mission. The scientific objective of the proposed MAX-C would require rover access to a site with high preservation potential for physical and chemical biosignatures in order to evaluate paleo-environmental conditions, characterize the potential for preservation of biosignatures, and access multiple sequences of geological units in a search for evidence of past life and/or prebiotic chemistry. Samples addressing a variety of high-priority scientific objectives should be collected, documented, and packaged in a manner suitable for possible return to Earth by a future mission. Relevant experience from study of ancient terrestrial strata, martian meteorites, and from the Mars exploration Rovers indicates that the proposed MAX-C's interpretive capability should include: meter to submillimeter texture (optical imaging), mineral identification, major element content, and organic molecular composition. Analytical data should be obtained by direct investigation of outcrops and should not entail acquisition of rock chips or powders. We propose, therefore, a set of arm-mounted instruments that would be capable of interrogating a relatively smooth, abraded surface by creating co-registered 2-D maps of visual texture, mineralogy and geochemical properties. This approach is judged to have particularly high

  9. Trends in causes of death among children under 5 in Bangladesh, 1993-2004: an exercise applying a standardized computer algorithm to assign causes of death using verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Walker Neff

    2011-08-01

    Full Text Available Abstract Background Trends in the causes of child mortality serve as important global health information to guide efforts to improve child survival. With child mortality declining in Bangladesh, the distribution of causes of death also changes. The three verbal autopsy (VA studies conducted with the Bangladesh Demographic and Health Surveys provide a unique opportunity to study these changes in child causes of death. Methods To ensure comparability of these trends, we developed a standardized algorithm to assign causes of death using symptoms collected through the VA studies. The original algorithms applied were systematically reviewed and key differences in cause categorization, hierarchy, case definition, and the amount of data collected were compared to inform the development of the standardized algorithm. Based primarily on the 2004 cause categorization and hierarchy, the standardized algorithm guarantees comparability of the trends by only including symptom data commonly available across all three studies. Results Between 1993 and 2004, pneumonia remained the leading cause of death in Bangladesh, contributing to 24% to 33% of deaths among children under 5. The proportion of neonatal mortality increased significantly from 36% (uncertainty range [UR]: 31%-41% to 56% (49%-62% during the same period. The cause-specific mortality fractions due to birth asphyxia/birth injury and prematurity/low birth weight (LBW increased steadily, with both rising from 3% (2%-5% to 13% (10%-17% and 10% (7%-15%, respectively. The cause-specific mortality rates decreased significantly due to neonatal tetanus and several postneonatal causes (tetanus: from 7 [4-11] to 2 [0.4-4] per 1,000 live births (LB; pneumonia: from 26 [20-33] to 15 [11-20] per 1,000 LB; diarrhea: from 12 [8-17] to 4 [2-7] per 1,000 LB; measles: from 5 [2-8] to 0.2 [0-0.7] per 1,000 LB; injury: from 11 [7-17] to 3 [1-5] per 1,000 LB; and malnutrition: from 9 [6-13] to 5 [2-7]. Conclusions

  10. Seepage safety monitoring model for an earth rock dam under influence of high-impact typhoons based on particle swarm optimization algorithm

    Directory of Open Access Journals (Sweden)

    Yan Xiang

    2017-01-01

    Full Text Available Extreme hydrological events induced by typhoons in reservoir areas have presented severe challenges to the safe operation of hydraulic structures. Based on analysis of the seepage characteristics of an earth rock dam, a novel seepage safety monitoring model was constructed in this study. The nonlinear influence processes of the antecedent reservoir water level and rainfall were assumed to follow normal distributions. The particle swarm optimization (PSO) algorithm was used to optimize the model parameters so as to improve the fitting accuracy. In addition, a mutation factor was introduced to simulate the sudden increase in the piezometric level induced by short-duration heavy rainfall and the possible historical extreme reservoir water level during a typhoon. In order to verify the efficacy of this model, the earth rock dam of the Siminghu Reservoir was used as an example. The piezometric level at the SW1-2 measuring point during Typhoon Fitow in 2013 was fitted with the present model, and a corresponding theoretical expression was established. Comparison of fitting results of the piezometric level obtained from the present statistical model and the traditional statistical model with monitored values during the typhoon shows that the present model has a higher fitting accuracy and can accurately simulate the uprush in seepage pressure during the typhoon.
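The PSO parameter search described in this record can be illustrated with a minimal sketch. Note that the quadratic objective and all parameter values below are stand-ins, not the paper's seepage model:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Fit a 2-parameter toy objective whose true optimum is at (1.0, -2.0).
best, best_val = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                              dim=2, bounds=(-5.0, 5.0))
```

The mutation factor described in the record would be an extra perturbation step applied to particles; it is omitted here for brevity.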

  11. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
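As a point of reference for the uninformed baselines mentioned in this record, a minimal depth-first maze search might look as follows. The grid encoding (0 = open, 1 = wall) and the tiny maze are illustrative assumptions, not the experimental setup of the paper:

```python
def dfs_path(maze, start, goal):
    """Uninformed depth-first search on a grid maze.
    Returns one start-to-goal path (not necessarily the shortest), or None."""
    rows, cols = len(maze), len(maze[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = [
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 0],
]
path = dfs_path(maze, (0, 0), (2, 2))
```

An informed search such as A* would add a heuristic (e.g., Manhattan distance to the goal) to prioritize expansion, which is the comparison the record draws.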

  12. Population genetic structure and its implications for adaptive variation in memory and the hippocampus on a continental scale in food-caching black-capped chickadees.

    Science.gov (United States)

    Pravosudov, V V; Roth, T C; Forister, M L; Ladage, L D; Burg, T M; Braun, M J; Davidson, B S

    2012-09-01

    Food-caching birds rely on stored food to survive the winter, and spatial memory has been shown to be critical in successful cache recovery. Both spatial memory and the hippocampus, an area of the brain involved in spatial memory, exhibit significant geographic variation linked to climate-based environmental harshness and the potential reliance on food caches for survival. Such geographic variation has been suggested to have a heritable basis associated with differential selection. Here, we ask whether population genetic differentiation and potential isolation among multiple populations of food-caching black-capped chickadees is associated with differences in memory and hippocampal morphology by exploring population genetic structure within and among groups of populations that are divergent to different degrees in hippocampal morphology. Using mitochondrial DNA and 583 AFLP loci, we found that population divergence in hippocampal morphology is not significantly associated with neutral genetic divergence or geographic distance, but instead is significantly associated with differences in winter climate. These results are consistent with variation in a history of natural selection on memory and hippocampal morphology that creates and maintains differences in these traits regardless of population genetic structure and likely associated gene flow. Published 2012. This article is a US Government work and is in the public domain in the USA.

  13. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  14. A New Perspective on Randomized Gossip Algorithms

    OpenAIRE

    Loizou, Nicolas; Richtárik, Peter

    2016-01-01

    In this short note we propose a new approach for the design and analysis of randomized gossip algorithms which can be used to solve the average consensus problem. We show how the Randomized Block Kaczmarz (RBK) method - a method for solving linear systems - works as a gossip algorithm when applied to a special system encoding the underlying network. The famous pairwise gossip algorithm arises as a special case. Subsequently, we reveal a hidden duality of randomized gossip algorithms, with the ...
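The pairwise gossip algorithm that arises as a special case can be sketched in a few lines. The ring topology, initial values, and round count below are illustrative choices, not taken from the note:

```python
import random

def pairwise_gossip(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip: at each step a random edge (i, j) is
    activated and both endpoints replace their values by the pair average.
    On a connected graph all values converge to the network-wide average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg  # averaging preserves the sum, so the mean is fixed
    return x

# A 4-node ring; the initial values average to 2.5.
vals = pairwise_gossip([1.0, 2.0, 3.0, 4.0],
                       edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
```

In the RBK view described by the record, each such averaging step is a projection onto a constraint of a linear system encoding the network.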

  15. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    Science.gov (United States)

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

    Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraints on estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was

  16. Un calcul de Viterbi pour un Modèle de Markov Caché Contraint

    DEFF Research Database (Denmark)

    Petit, Matthieu; Christiansen, Henning

    2009-01-01

    of these hidden states with regard to an observed data sequence. Constrained HMMs extend this framework by adding constraints on an HMM process run. In this paper, we propose to introduce constrained HMMs into Constraint Programming, and we propose a new version of the Viterbi algorithm for this new framework. ... Several constraint techniques are used to reduce the search for the most probable values of the hidden states of a constrained HMM. An implementation based on PRISM, a logic programming language for statistical modeling, is presented.
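For reference, the standard (unconstrained) Viterbi algorithm that this record extends can be sketched as follows. The weather-style HMM in the usage example is the textbook toy model, not the constrained PRISM implementation the record discusses:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for an observation
    sequence under an HMM, via dynamic programming."""
    # V[t][s] = (best probability of any path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("Healthy", "Fever")
obs = ("normal", "cold", "dizzy")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
path = viterbi(obs, states, start_p, trans_p, emit_p)  # → Healthy, Healthy, Fever
```

A constrained variant, as in the record, would additionally prune state choices that violate the declared constraints during the forward pass.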

  17. MSDR-D Network Localization Algorithm

    Science.gov (United States)

    Coogan, Kevin; Khare, Varun; Kobourov, Stephen G.; Katz, Bastian

    We present a distributed multi-scale dead-reckoning (MSDR-D) algorithm for network localization that utilizes local distance and angular information for nearby sensors. The algorithm is anchor-free and does not require a particular network topology, rigidity of the underlying communication graph, or high average connectivity. The algorithm scales well to large and sparse networks with complex topologies and outperforms previous algorithms when the noise levels are high. The algorithm is simple to implement and is available, along with source code, executables, and experimental results, at http://msdr-d.cs.arizona.edu/.

  18. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  19. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  20. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-12-01

    An approximate algorithm for minimization of weighted depth of decision trees is considered. A bound on accuracy of this algorithm is obtained which is unimprovable in general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to best polynomial approximate algorithms for minimization of weighted depth of decision trees.

  1. Church attendance and new episodes of major depression in a community study of older adults: the Cache County Study.

    Science.gov (United States)

    Norton, Maria C; Singh, Archana; Skoog, Ingmar; Corcoran, Christopher; Tschanz, Joann T; Zandi, Peter P; Breitner, John C S; Welsh-Bohmer, Kathleen A; Steffens, David C

    2008-05-01

    We examined the relation between church attendance, membership in the Church of Jesus Christ of Latter-Day Saints (LDS), and major depressive episode, in a population-based study of aging and dementia in Cache County, Utah. Participants included 2,989 nondemented individuals aged between 65 and 100 years who were interviewed initially in 1995 to 1996 and again in 1998 to 1999. LDS church members reported twice the rate of major depression that non-LDS members did (odds ratio = 2.56, 95% confidence interval = 1.07-6.08). Individuals attending church weekly or more often had a significantly lower risk for major depression. After controlling for demographic and health variables and the strongest predictor of future episodes of depression, a prior depression history, we found that church attendance more often than weekly remained a significant protectant (odds ratio = 0.51, 95% confidence interval = 0.28-0.92). Results suggest that there may be a threshold of church attendance that is necessary for a person to garner long-term protection from depression. We discuss sociological factors relevant to LDS culture.

  2. A province of many eyes – Rear window and caché: when the city discloses secrets through the cinema

    Directory of Open Access Journals (Sweden)

    Eliana Kuster

    2009-06-01

    Full Text Available In the city, everyone sees, and everyone is seen. The gaze and the questions it raises – what to see, how to see, how to interpret what is seen – have been central to urban space since the nineteenth century, with the growth of cities and the phenomenon of the crowd. The gaze thus becomes crucial to the urban dweller, who looks to recognize in the other – the stranger – signals of friendship or danger. This essay investigates the importance of the gaze in the city through two films: Rear Window, by Alfred Hitchcock (1954), and Caché, by Michael Haneke (2005). In the first film, the characters look at the city; in the other, they are seen by it. In the two films, we have the extremes of the same process: social life transformed into spectacle, with cinema playing one of its main roles: the construction of representations of human lives in the city.

  3. Carbon stored in forest plantations of Pinus caribaea, Cupressus lusitanica and Eucalyptus deglupta in Cachí Hydroelectric Project

    Directory of Open Access Journals (Sweden)

    Marylin Rojas

    2014-06-01

    Full Text Available Forest plantations are considered major carbon sinks expected to reduce the impact of climate change. For many species, however, there is a lack of information for establishing metrics on the accumulation of biomass and carbon, mainly because of the difficulty and cost of quantification through direct measurement and destructive sampling. This study evaluated the carbon stocks of forest plantations near the dam of the Cachí hydroelectric project, which belongs to the Instituto Costarricense de Electricidad. Twenty-five sample units were evaluated across plantations of three species, from which 30 Pinus caribaea, 14 Cupressus lusitanica and 15 Eucalyptus deglupta trees were extracted. Biomass was quantified by the destructive method: every component of each tree was weighed separately, and samples were then taken to determine dry matter and carbon fraction. In the laboratory, 110 biomass samples from the three species were analyzed, covering all components (leaves, branches, stem, and root). The carbon fraction varied between 47.5% and 48.0% for Pinus caribaea, between 32.6% and 52.7% for Cupressus lusitanica, and between 36.4% and 50.3% for Eucalyptus deglupta. The stored carbon was 230, 123, and 69 Mg ha-1 in plantations of P. caribaea, C. lusitanica and E. deglupta, respectively. Approximately 75% of the stored carbon was found in the stem.

  4. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and from spectral fatigue tests using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  5. Genetic algorithms and fuzzy multiobjective optimization

    CERN Document Server

    Sakawa, Masatoshi

    2002-01-01

    Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books in genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a w...

  6. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations, and a bibliography

  7. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops ... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power.

  8. Distributed late-binding micro-scheduling and data caching for data-intensive workflows; Microplanificación de asignación tardía y almacenamiento temporal distribuidos para flujos de trabajo intensivos en datos

    Energy Technology Data Exchange (ETDEWEB)

    Delgado Peris, A.

    2015-07-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and remarkably science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments to successfully overcome the problems associated to the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)

  9. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  10. La cage qui cache : La Cage Dorée de Ruben Alves

    Directory of Open Access Journals (Sweden)

    Cristina Marinho

    2015-01-01

    Full Text Available The success of the French comedy La Cage Dorée (by the Luso-descendant Ruben Alves, 2013) seems to be due mainly to its clichés of the Portuguese epic in Paris, whose miseries may not have been underlined enough. Thus, under this apparently naive portrait an intriguing painting of Portuguese immigrants' French dis-integration may really be hiding, which this essay aims, on the one hand, to bring out and, on the other, to clarify, by questioning comparative critical common denominators of the two countries.

  12. A Decomposition Algorithm for Parametric Design

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Schotborgh, W.O.; van Houten, Frederikus J.A.M.; Culley, T.C.; Hicks, B.J.; McAloone, T.C.; Howard, T.J.; Dong, A.

    2011-01-01

    This paper presents a recursive division algorithm to decompose an under-constrained parametric design problem. The algorithm determines how to separate the problem on the basis of two complexity measures calculated for each parameter in the problem, namely the effort E and the influence Inf.

  13. Deceptiveness and genetic algorithm dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Liepins, G.E. (Oak Ridge National Lab., TN (USA)); Vose, M.D. (Tennessee Univ., Knoxville, TN (USA))

    1990-01-01

    We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection and recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.

  14. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
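The strict-priority selection rule described in this record can be sketched as a simple greedy pass. A single scalar resource and the goal names below are simplifying assumptions; the real AVA/VML system handles richer resource models:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection: walk the goals from highest to
    lowest priority and admit each one only if its resource demand still
    fits the remaining capacity. A goal is never pre-empted by a
    lower-priority goal."""
    selected, remaining = [], capacity
    for goal in sorted(goals, key=lambda g: g["priority"], reverse=True):
        if goal["demand"] <= remaining:
            selected.append(goal["name"])
            remaining -= goal["demand"]
    return selected

goals = [
    {"name": "image_target_A", "priority": 9, "demand": 5},
    {"name": "downlink_B",     "priority": 7, "demand": 4},
    {"name": "image_target_C", "priority": 5, "demand": 3},
]
chosen = select_goals(goals, capacity=8)  # admits A (5) and C (3), skips B
```

An incremental variant, as the record describes, would update `selected` in place when goals are added, removed, or re-prioritized, instead of re-sorting from scratch.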

  15. Stability and chaos of LMSER PCA learning algorithm

    International Nuclear Information System (INIS)

    Lv Jiancheng; Y, Zhang

    2007-01-01

    LMSER PCA algorithm is a principal components analysis algorithm. It is used to extract principal components on-line from input data. The algorithm has both stability and chaotic dynamic behavior under some conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete time system. Conditions for local stability are derived. The paper also explores the chaotic behavior of this algorithm. It shows that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior of this algorithm
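As an illustration of on-line PCA learning of the kind analyzed in this record, Oja's rule (a close relative of the LMSER rule, used here only as a stand-in) can be sketched as follows; the toy data set and learning rate are illustrative assumptions:

```python
import random

def oja_pca(samples, dim, eta=0.05, epochs=50, seed=0):
    """Oja's on-line PCA rule: for each input x, with output y = w.x,
    update w += eta * y * (x - y * w). The weight vector converges
    toward the first principal direction with unit norm."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Data varies mostly along the x-axis, so w should align with (+-1, 0).
data = [(2.0, 0.1), (-2.0, -0.1), (1.5, 0.05), (-1.5, -0.02)]
w = oja_pca(data, dim=2)
```

The stability and chaos questions studied in the record concern how such iterations behave as the learning rate and input statistics vary.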

  16. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  17. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
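The Dijkstra-based route selection that the DMHT-style trees build on can be sketched as follows. The toy graph is illustrative, and wavelength assignment is omitted:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra shortest-path tree from `source`. `graph` maps each node
    to a list of (neighbor, weight) pairs. Returns final distances and
    each node's predecessor in the tree; a multicast tree to a set of
    destinations can be read off by following predecessors back to the
    source."""
    dist = {source: 0.0}
    prev = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

graph = {
    "s": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("c", 5.0)],
    "b": [("c", 1.0)],
    "c": [],
}
dist, prev = dijkstra(graph, "s")
```

The Steiner-based trees the record proposes instead weight links so that paths to different destinations tend to share edges, improving link utilization at the cost of longer (higher-delay) branches.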

  18. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is then fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data with image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  19. Algorithms, complexity, and the sciences.

    Science.gov (United States)

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
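
    The multiplicative weights update scheme mentioned above can be sketched in its generic experts-with-losses form (an illustrative version only; the learning rate and loss vectors are invented, and this is not the paper's population-genetics formulation):

```python
import math

def mwu(losses, eta=0.5):
    """Multiplicative weights update over a fixed set of experts.

    losses: per-round loss vectors, one entry per expert, values in [0, 1].
    Returns the final normalized weight (probability) vector.
    """
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        # Each expert's weight shrinks exponentially in its loss this round,
        # so low-loss experts accumulate probability mass.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 never incurs loss, expert 1 always does:
# the distribution concentrates on expert 0.
probs = mwu([[0.0, 1.0]] * 10)
```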

  20. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  1. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  2. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.

  3. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  4. Mitigation of cache memory using an embedded hard-core PPC440 processor in a Virtex-5 Field Programmable Gate Array.

    Energy Technology Data Exchange (ETDEWEB)

    Learn, Mark Walter

    2010-02-01

    Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design and could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard-core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques that can help mitigate the embedded hard-core PPC440 processor within the Virtex-5 FPGA other than TMR. Implementing various mitigation schemes reliably within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within the Virtex-5 FPGAs, and describes in detail the design of the cache mitigation scheme and the testing conducted at the radiation effects facility on the Texas A&M campus.

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problem. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  6. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
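
    The iteration the thesis describes, recomputing PageRank values until successive differences become small, can be sketched as follows (a minimal power-iteration version; the damping factor and tolerance are illustrative, and pages without outgoing links are assumed absent):

```python
def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every page has at least one outgoing link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {}
        for p in pages:
            # Each page q passes its rank equally to the pages it links to.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        # Stop once the largest per-page change drops below the tolerance.
        done = max(abs(new[p] - rank[p]) for p in pages) < tol
        rank = new
        if done:
            break
    return rank

# Two pages linking to each other: by symmetry each ends up with rank 0.5.
r = pagerank({"a": ["b"], "b": ["a"]})
```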

  7. Digital Arithmetic: Division Algorithms

    DEFF Research Database (Denmark)

    Montuschi, Paolo; Nannarelli, Alberto

    2017-01-01

    implement it in hardware so as not to compromise the overall computation performance. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires...
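
    The digit-recurrence idea, producing one quotient digit (here, one bit) per iteration, can be illustrated in software with the classic restoring division scheme (a sketch of the general technique, not the specific recurrences covered in the entry):

```python
def restoring_divide(dividend, divisor, bits=8):
    """Unsigned restoring division: one quotient bit per iteration.

    Each step shifts the next dividend bit into the partial remainder and
    tentatively subtracts the divisor; if the subtraction would go negative,
    the remainder is kept ("restored"), expressed here as a compare first.
    """
    assert divisor > 0
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((dividend >> i) & 1)  # bring down the next bit
        if r >= divisor:
            r -= divisor
            q |= 1 << i                        # quotient bit is 1
    return q, r

q, r = restoring_divide(100, 7)  # 100 = 7 * 14 + 2
```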

  8. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
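
    The flavor of a three-level-quantized (clipped) LMS update can be sketched as follows (an illustrative signed-regressor-style update with a dead zone; the step size, threshold, and toy identification task are invented, not the paper's exact formulation):

```python
import random

def quantize3(x, threshold):
    """Three-level (-1, 0, +1) quantization with threshold clipping."""
    if x > threshold:
        return 1.0
    if x < -threshold:
        return -1.0
    return 0.0

def clipped_lms_step(w, x, d, mu, threshold):
    """One clipped-LMS-style weight update.

    The error is computed from the true inputs; only the update direction is
    quantized, which replaces multiplications by x in the update term with
    sign selections -- the source of the reduced complexity.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    return [wi + mu * e * quantize3(xi, threshold) for wi, xi in zip(w, x)]

# Toy system identification: true weights [0.5, -0.3], noiseless output.
random.seed(0)
w = [0.0, 0.0]
for _ in range(2000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    d = 0.5 * x[0] - 0.3 * x[1]
    w = clipped_lms_step(w, x, d, mu=0.01, threshold=0.1)
```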

  9. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit

    2012-06-01

    The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have only been few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtime. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup increases further, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

  10. An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU

    Science.gov (United States)

    Lyakh, Dmitry I.

    2015-04-01

    An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
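
    The cache-locality idea behind the entry, operating on small tiles so that both the data being read and the data being written stay cache-resident, can be sketched with a blocked out-of-place matrix transpose (a plain-Python illustration of the access pattern only, not the library's optimized kernels; the block size is illustrative):

```python
def blocked_transpose(a, n, block=32):
    """Out-of-place transpose of an n x n row-major matrix, tile by tile.

    A naive transpose reads a row-contiguously but writes column-strided,
    missing cache on almost every write for large n. Processing block x block
    tiles keeps the touched destination rows resident in cache while the
    whole tile is written.
    """
    out = [0] * (n * n)
    for ib in range(0, n, block):
        for jb in range(0, n, block):
            for i in range(ib, min(ib + block, n)):
                for j in range(jb, min(jb + block, n)):
                    out[j * n + i] = a[i * n + j]
    return out

n = 64
a = list(range(n * n))
t = blocked_transpose(a, n)
```

    A tensor transpose generalizes this to a permutation of several index strides, but the tiling principle is the same.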

  11. Sequential and Adaptive Learning Algorithms for M-Estimation

    Directory of Open Access Journals (Sweden)

    Guang Deng

    2008-05-01

    Full Text Available The M-estimate of a linear observation model has many important engineering applications, such as identifying a linear system under non-Gaussian noise. Batch algorithms based on the EM algorithm or the iterative reweighted least squares algorithm have been widely adopted. In recent years, several sequential algorithms have been proposed. In this paper, we propose a family of sequential algorithms based on the Bayesian formulation of the problem. The basic idea is that in each step we use a Gaussian approximation for the posterior and a quadratic approximation for the log-likelihood function. The maximum a posteriori (MAP) estimation leads naturally to algorithms similar to the recursive least squares (RLS) algorithm. We discuss the quality of the estimate, issues related to the initialization and estimation of parameters, and robustness of the proposed algorithm. We then develop LMS-type algorithms by replacing the covariance matrix with a scaled identity matrix under the constraint that the determinant of the covariance matrix is preserved. We have proposed two LMS-type algorithms which are effective, low-cost replacements for RLS-type algorithms working under Gaussian and impulsive noise, respectively. Numerical examples show that the performance of the proposed algorithms is very competitive with that of other recently published algorithms.
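
    The RLS recursion that the MAP estimation "leads naturally to" has the classical form sketched below (a textbook RLS update applied to a toy identification problem; the forgetting factor, initialization, and data are illustrative, not the paper's sequential M-estimation algorithm):

```python
import random

def rls_step(w, P, x, d, lam=0.99):
    """One recursive least squares (RLS) update.

    w: weight vector, P: estimate of the inverse input-correlation matrix,
    x: input vector, d: desired output, lam: exponential forgetting factor.
    """
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]  # P x
    denom = lam + sum(xi * pxi for xi, pxi in zip(x, Px))
    k = [pxi / denom for pxi in Px]                      # gain vector
    e = d - sum(wi * xi for wi, xi in zip(w, x))         # a priori error
    w = [wi + ki * e for wi, ki in zip(w, k)]
    # P <- (P - k x^T P) / lam  (rank-one downdate plus forgetting)
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# Toy identification of a noiseless system with true weights [0.5, -0.3].
random.seed(1)
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for _ in range(200):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    d = 0.5 * x[0] - 0.3 * x[1]
    w, P = rls_step(w, P, x, d)
```

    Replacing P with a scaled identity matrix, as the paper's LMS-type variants do, removes the O(n^2) matrix bookkeeping at the cost of a less informed gain direction.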

  12. South Pacific mineral cache

    Science.gov (United States)

    Recent deep-water sampling of mineral-rich crusts on the seafloor between the Hawaiian Islands and Samoa revealed deposits of cobalt, nickel, and manganese that are richer than previous samples, according to a team of scientists from the U.S. Geological Survey (USGS) and the Federal Republic of Germany aboard the research vessel S.P. Lee.Thin pieces of crust dredged from a seamount about 260 km northwest of Palmyra Atoll and Kingman Reef (U.S. territorial possessions roughly midway between Honolulu and American Samoa) had a cobalt concentration of 2.5%, or more than twice the concentration that earlier reconnaissance studies indicated would be found. The rock samples also contained 0.8% nickel and 32% manganese, compared to the estimated concentrations of 0.5% and 25%, respectively. The areas in which the deposits were found are part of the relatively unexplored ocean bottom included in the recently proclaimed 200-nautical-mile U.S. Exclusive Economic Zone (EEZ).

  13. A theoretical comparison of evolutionary algorithms and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  14. A New Aloha Anti-Collision Algorithm Based on CDMA

    Science.gov (United States)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems; it compromises the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the grouped dynamic framed slotted Aloha algorithm with code division multiple access technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing the reader recognition time and improving the overall system throughput.

  15. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  16. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.

  17. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    Full Text Available Abstract We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
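
    A minimal greedy bit-loading routine in the spirit of the problem, assigning bits one at a time to whichever subchannel needs the least extra power for its next bit, can be sketched as follows (a Hughes-Hartogs-style illustration, not the authors' optimal or distributed algorithms; the QAM power model and channel gains are invented):

```python
import heapq

def greedy_bit_loading(gains, target_bits, gamma=1.0):
    """Greedy bit loading toward a target rate with minimum total power.

    gains: subchannel power gains; the power needed to carry b bits on a
    subchannel with gain g is modeled as (2**b - 1) * gamma / g, an
    illustrative QAM-style model with SNR gap gamma.
    Returns (bits per subchannel, total transmit power).
    """
    bits = [0] * len(gains)

    def incr_power(k):
        # Extra power to go from bits[k] to bits[k] + 1 bits on subchannel k.
        b = bits[k]
        return ((2 ** (b + 1) - 1) - (2 ** b - 1)) * gamma / gains[k]

    heap = [(incr_power(k), k) for k in range(len(gains))]
    heapq.heapify(heap)
    for _ in range(target_bits):
        cost, k = heapq.heappop(heap)      # cheapest next bit anywhere
        bits[k] += 1
        heapq.heappush(heap, (incr_power(k), k))
    total = sum((2 ** b - 1) * gamma / g for b, g in zip(bits, gains))
    return bits, total

# Two subchannels, the first four times stronger; load 4 bits in total.
bits, power = greedy_bit_loading([4.0, 1.0], target_bits=4)
```

    With these gains the stronger subchannel absorbs three of the four bits before the incremental cost of a fourth exceeds the cost of the weaker subchannel's first bit.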

  18. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  19. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  20. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its