WorldWideScience

Sample records for reservoir cache county

  1. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
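The tonnage arithmetic behind estimates like these follows a standard convention; as a hedged sketch, the 1,770 short tons per acre-foot conversion factor is the usual USGS figure for subbituminous coal, and the cell values are illustrative, not the Recluse data:

```python
# Sketch: coal tonnage from an isopach grid, assuming the standard USGS
# conversion of 1,770 short tons per acre-foot for subbituminous coal.
# Cell areas and thicknesses are illustrative, not the Recluse model data.

SHORT_TONS_PER_ACRE_FOOT = 1770  # subbituminous coal

def coal_resource_short_tons(cells):
    """cells: iterable of (area_acres, thickness_feet) pairs."""
    return sum(area * thickness * SHORT_TONS_PER_ACRE_FOOT
               for area, thickness in cells)

# Hypothetical isopach cells: 40-acre cells with varying bed thickness.
cells = [(40, 31), (40, 25), (40, 12), (40, 0)]  # zero where the bed is absent
print(coal_resource_short_tons(cells))
```

Summing cell-by-cell over a computer-generated isopach grid versus a hand-contoured map is exactly the comparison the abstract reports.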

  2. Dementia severity and the longitudinal costs of informal care in the Cache County population.

    Science.gov (United States)

    Rattinger, Gail B; Schwartz, Sarah; Mullins, C Daniel; Corcoran, Chris; Zuckerman, Ilene H; Sanders, Chelsea; Norton, Maria C; Fauth, Elizabeth B; Leoutsakos, Jeannie-Marie S; Lyketsos, Constantine G; Tschanz, JoAnn T

    2015-08-01

Dementia costs are critical for influencing healthcare policy, but limited longitudinal information exists. We examined longitudinal informal care costs of dementia in a population-based sample. Data from the Cache County Study included dementia onset, duration, and severity assessed by the Mini-Mental State Examination (MMSE), Clinical Dementia Rating Scale (CDR), and Neuropsychiatric Inventory (NPI). Informal cost of daily care (COC) was estimated based on median Utah wages. Mixed models estimated the relationship between severity and longitudinal COC in separate models for MMSE and CDR. Two hundred eighty-seven subjects (53% female; mean (standard deviation) age 82.3 (5.9) years) participated. Overall COC increased by 18% per year. COC was 6% lower per MMSE point increase, and compared with very mild dementia, COC increased over twofold for mild, fivefold for moderate, and sixfold for severe dementia on the CDR. Greater dementia severity predicted higher costs. Disease management strategies addressing dementia progression may curb costs. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  3. Bathymetry and capacity of Blackfoot Reservoir, Caribou County, Idaho, 2011

    Science.gov (United States)

    Wood, Molly S.; Skinner, Kenneth D.; Fosness, Ryan L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Shoshone-Bannock Tribes, surveyed the bathymetry and selected above-water sections of Blackfoot Reservoir, Caribou County, Idaho, in 2011. Reservoir operators manage releases from Government Dam on Blackfoot Reservoir based on a stage-capacity relation developed about the time of dam construction in the early 1900s. Reservoir operation directly affects the amount of water that is available for irrigation of agricultural land on the Fort Hall Indian Reservation and surrounding areas. The USGS surveyed the below-water sections of the reservoir using a multibeam echosounder and real-time kinematic global positioning system (RTK-GPS) equipment at full reservoir pool in June 2011, covering elevations from 6,090 to 6,119 feet (ft) above the North American Vertical Datum of 1988 (NAVD 88). The USGS used data from a light detection and ranging (LiDAR) survey performed in 2000 to map reservoir bathymetry from 6,116 to 6,124 ft NAVD 88, which were mostly in depths too shallow to measure with the multibeam echosounder, and most of the above-water section of the reservoir (above 6,124 ft NAVD 88). Selected points and bank erosional features were surveyed by the USGS using RTK-GPS and a total station at low reservoir pool in September 2011 to supplement and verify the LiDAR data. The stage-capacity relation was revised and presented in a tabular format. The datasets show a 2.0-percent decrease in capacity from the original survey, due to sedimentation or differences in accuracy between surveys. A 1.3-percent error also was detected in the previously used capacity table and measured water-level elevation because of questionable reference elevation at monitoring stations near Government Dam. Reservoir capacity in 2011 at design maximum pool of 6,124 ft above NAVD 88 was 333,500 acre-ft.
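A stage-capacity relation of the kind revised here is typically applied by linear interpolation in the tabulated values. A minimal sketch, assuming hypothetical table entries except for the reported design-maximum point (6,124 ft NAVD 88, 333,500 acre-ft):

```python
from bisect import bisect_right

# Sketch: linear interpolation in a stage-capacity table. Values are
# illustrative except the endpoint at design maximum pool
# (6,124 ft NAVD 88, 333,500 acre-ft, as reported above).
STAGE_FT = [6090, 6100, 6110, 6119, 6124]
CAPACITY_AF = [0, 60000, 170000, 280000, 333500]  # hypothetical except last

def capacity(stage_ft):
    """Storage capacity (acre-ft) at a given stage, by linear interpolation."""
    if not STAGE_FT[0] <= stage_ft <= STAGE_FT[-1]:
        raise ValueError("stage outside table range")
    i = bisect_right(STAGE_FT, stage_ft) - 1
    if i == len(STAGE_FT) - 1:
        return CAPACITY_AF[-1]
    frac = (stage_ft - STAGE_FT[i]) / (STAGE_FT[i + 1] - STAGE_FT[i])
    return CAPACITY_AF[i] + frac * (CAPACITY_AF[i + 1] - CAPACITY_AF[i])

print(capacity(6124))  # capacity at design maximum pool
```

The 2.0-percent capacity decrease reported above is the kind of systematic shift such a revised table captures relative to the original early-1900s relation.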

  4. Church attendance and new episodes of major depression in a community study of older adults: the Cache County Study.

    Science.gov (United States)

    Norton, Maria C; Singh, Archana; Skoog, Ingmar; Corcoran, Christopher; Tschanz, Joann T; Zandi, Peter P; Breitner, John C S; Welsh-Bohmer, Kathleen A; Steffens, David C

    2008-05-01

    We examined the relation between church attendance, membership in the Church of Jesus Christ of Latter-Day Saints (LDS), and major depressive episode, in a population-based study of aging and dementia in Cache County, Utah. Participants included 2,989 nondemented individuals aged between 65 and 100 years who were interviewed initially in 1995 to 1996 and again in 1998 to 1999. LDS church members reported twice the rate of major depression that non-LDS members did (odds ratio = 2.56, 95% confidence interval = 1.07-6.08). Individuals attending church weekly or more often had a significantly lower risk for major depression. After controlling for demographic and health variables and the strongest predictor of future episodes of depression, a prior depression history, we found that church attendance more often than weekly remained a significant protectant (odds ratio = 0.51, 95% confidence interval = 0.28-0.92). Results suggest that there may be a threshold of church attendance that is necessary for a person to garner long-term protection from depression. We discuss sociological factors relevant to LDS culture.

  5. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    Science.gov (United States)

    1979-02-01

classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which...additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water...project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  6. Web Caching

    Indian Academy of Sciences (India)

The user may never realize that the cache is between the client and server except in special circumstances. It is important to distinguish between a Web cache and a proxy server, as their functions are often misunderstood. Proxy servers serve as an intermediary to place a firewall between network users and the outside world.

  7. Web Caching

    Indian Academy of Sciences (India)

Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath and Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World wide web; data caching; internet traffic; web page access.

  8. Web Caching

    Indian Academy of Sciences (India)

operating systems, computer networks, distributed systems, E-commerce and security. The World Wide Web has been growing in leaps and bounds. Studies have indicated that this massive distributed system can benefit greatly by making use of appropriate caching methods. Intelligent Web caching can lessen the burden ...
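The mechanism these Web Caching articles describe reduces to a few lines: keep a local copy of each fetched page and serve it again until it expires, so repeated requests generate no origin traffic. A minimal sketch, where fetch_from_origin is a hypothetical stand-in for a real HTTP request:

```python
import time

# Minimal sketch of a client-side Web cache: responses are stored by URL
# and served locally until a time-to-live expires. fetch_from_origin is a
# hypothetical stand-in for an actual HTTP request to the origin server.
class WebCache:
    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch = fetch_from_origin
        self.store = {}  # url -> (body, time fetched)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            self.hits += 1       # served from cache: no origin traffic
            return entry[0]
        self.misses += 1         # miss or stale copy: go to the origin
        body = self.fetch(url)
        self.store[url] = (body, time.time())
        return body

cache = WebCache(60, lambda url: f"<html>{url}</html>")
cache.get("http://example.com/")
cache.get("http://example.com/")  # second request is a cache hit
print(cache.hits, cache.misses)
```

A proxy cache works the same way but sits on a shared network path, so one user's miss becomes every later user's hit.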

  9. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    Science.gov (United States)

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to better constrain estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was

10. Nutritional Status is Associated with Faster Cognitive Decline and Worse Functional Impairment in the Progression of Dementia: The Cache County Dementia Progression Study.

    Science.gov (United States)

    Sanders, Chelsea; Behrens, Stephanie; Schwartz, Sarah; Wengreen, Heidi; Corcoran, Chris D; Lyketsos, Constantine G; Tschanz, JoAnn T

    2016-02-27

Nutritional status may be a modifiable factor in the progression of dementia. We examined the association of nutritional status with rate of cognitive and functional decline in a U.S. population-based sample. The study design was an observational longitudinal study with annual follow-ups for up to 6 years of 292 persons with dementia (72% Alzheimer's disease, 56% female) in Cache County, UT, using the Mini-Mental State Exam (MMSE), Clinical Dementia Rating Sum of Boxes (CDR-sb), and modified Mini Nutritional Assessment (mMNA). mMNA scores declined by approximately 0.50 points/year, suggesting increasing risk for malnutrition. Lower mMNA score predicted a faster rate of decline on the MMSE at earlier follow-up times but slower decline at later follow-up times, whereas higher mMNA scores had the opposite pattern (mMNA by time β = 0.22, p = 0.017; mMNA by time² β = -0.04, p = 0.04). Lower mMNA score was associated with greater impairment on the CDR-sb over the course of dementia (β = 0.35, p < 0.001). Assessment of malnutrition may be useful in predicting rates of progression in dementia and may provide a target for clinical intervention.

  11. Reservoir management strategy for East Randolph Field, Randolph Township, Portage County, Ohio

    Energy Technology Data Exchange (ETDEWEB)

    Safley, L.E.; Salamy, S.P.; Young, M.A.; Fowler, M.L.; Wing, J.L.; Thomas, J.B.; Mills, J.; Wood, D.

    1998-07-01

The primary objective of the Reservoir Management Field Demonstration Program is to demonstrate that multidisciplinary reservoir management teams using appropriate software and methodologies, with efforts scaled to the size of the resource, are a cost-effective method for: increasing current profitability of field operations; forestalling abandonment of the reservoir; and improving long-term economic recovery for the company. The primary objective of the Reservoir Management Demonstration Project with Belden and Blake Corporation is to develop a comprehensive reservoir management strategy to improve the operational economics and optimize oil production from East Randolph field, Randolph Township, Portage County, Ohio. This strategy identifies the viable improved recovery process options and defines related operational and facility requirements. In addition, strategies are addressed for field operation problems, such as paraffin buildup, hydraulic fracture stimulation, pumping system optimization, and production treatment requirements, with the goal of reducing operating costs and improving oil recovery.

  12. Stack Caching Using Split Data Caches

    DEFF Research Database (Denmark)

    Nielsen, Carsten; Schoeberl, Martin

    2015-01-01

In most embedded and general purpose architectures, stack data and non-stack data are cached together, meaning that writing to or loading from the stack may expel non-stack data from the data cache. Manipulation of the stack has a different memory access pattern than that of non-stack data, showing higher temporal and spatial locality. We propose caching stack and non-stack data separately and develop four different stack caches that allow this separation without requiring compiler support. These are the simple, window, and prefilling with and without tag stack caches. The performance of the stack...
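The separation the abstract proposes can be illustrated with a toy model: two independent direct-mapped caches, with each access routed by address region, so stack traffic can never evict non-stack lines. The address boundary and cache geometry below are illustrative assumptions, not parameters from the paper:

```python
# Toy model of split stack/data caching: accesses are routed to separate
# caches by address region, so stack traffic cannot evict non-stack lines.
# STACK_BASE and the cache sizes are illustrative, not from the paper.

class DirectMappedCache:
    def __init__(self, n_lines, line_bytes):
        self.n_lines, self.line_bytes = n_lines, line_bytes
        self.tags = [None] * n_lines
        self.hits = self.misses = 0

    def access(self, addr):
        line = addr // self.line_bytes
        idx, tag = line % self.n_lines, line // self.n_lines
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[idx] = tag  # evict whatever line was cached here

STACK_BASE = 0x8000_0000  # hypothetical: addresses at/above this are stack

class SplitCache:
    def __init__(self):
        self.stack = DirectMappedCache(16, 32)   # small dedicated stack cache
        self.data = DirectMappedCache(64, 32)    # main data cache

    def access(self, addr):
        (self.stack if addr >= STACK_BASE else self.data).access(addr)
```

With this routing, an interleaved burst of stack pushes leaves the data cache's contents untouched, which is the point of the separation.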

  13. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application by avoiding repeated read operations for the same data. This paper addresses caching problems from a pattern perspective. Both caching and caching strategies, like primed and on-demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a way to avoid the repeated reacquisition of expensive resources. The Primed Cache pattern applies in situations in which the set of required resources, or at least part of it, can be predicted, while the Demand Cache pattern applies whenever the required set of resources cannot be predicted or is infeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.
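The primed and demand strategies described in this abstract can be sketched as two small classes; expensive_lookup is a hypothetical stand-in for a slow database read:

```python
# Sketch of the two caching strategies: demand (fetch on first use) and
# primed (pre-load a predictable key set). expensive_lookup is a
# hypothetical stand-in for a slow database read.

class DemandCache:
    """Fetch each resource on first use, then reuse the cached copy."""
    def __init__(self, loader):
        self.loader, self.store = loader, {}

    def get(self, key):
        if key not in self.store:
            self.store[key] = self.loader(key)  # fetch only when asked
        return self.store[key]

class PrimedCache(DemandCache):
    """Pre-load a predictable set of keys before first use."""
    def __init__(self, loader, keys_to_prime):
        super().__init__(loader)
        for key in keys_to_prime:
            self.store[key] = loader(key)

calls = []
def expensive_lookup(key):
    calls.append(key)            # record each simulated "database read"
    return key.upper()

primed = PrimedCache(expensive_lookup, ["a", "b"])
primed.get("a")                  # no new read: key was primed
demand = DemandCache(expensive_lookup)
demand.get("c")                  # first use triggers the read
print(calls)
```

The trade-off the paper discusses is visible here: priming pays the lookup cost up front for keys that may never be requested, while the demand cache pays it at first use.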

  14. Lifestyle behavior pattern is associated with different levels of risk for incident dementia and Alzheimer's disease: the Cache County study.

    Science.gov (United States)

    Norton, Maria C; Dew, Jeffrey; Smith, Heeyoung; Fauth, Elizabeth; Piercy, Kathleen W; Breitner, John C S; Tschanz, JoAnn; Wengreen, Heidi; Welsh-Bohmer, Kathleen

    2012-03-01

To identify distinct behavioral patterns of diet, exercise, social interaction, church attendance, alcohol consumption, and smoking and to examine their association with subsequent dementia risk. Longitudinal, population-based dementia study. Rural county in northern Utah, at-home evaluations. Two thousand four hundred ninety-one participants without dementia (51% male, average age 73.0 ± 5.7; average education 13.7 ± 4.1 years) initially reported no problems in activities of daily living and no stroke or head injury within the past 5 years. Six dichotomized lifestyle behaviors were examined (diet: high ≥ median on the Dietary Approaches to Stop Hypertension scale; exercise: ≥5 h/wk of light activity and at least occasional moderate to vigorous activity; church attendance: attending church services at least weekly; social interaction: spending time with family and friends at least twice weekly; alcohol: currently drinking alcoholic beverages ≥2 times/wk; nonsmoker: no current use or fewer than 100 cigarettes ever). Latent class analysis (LCA) was used to identify patterns among these behaviors. Proportional hazards regression modeled time to dementia onset as a function of behavioral class, age, sex, education, and apolipoprotein E status. Follow-up averaged 6.3 ± 5.3 years, during which 278 cases of incident dementia (200 Alzheimer's disease (AD)) were diagnosed. LCA identified four distinct lifestyle classes: unhealthy-religious (UH-R; 11.5%), unhealthy-nonreligious (UH-NR; 10.5%), healthy-moderately religious (H-MR; 38.5%), and healthy-very religious (H-VR; 39.5%). UH-NR (hazard ratio (HR) = 0.54, P = .028), H-MR (HR = 0.56, P = .003), and H-VR (HR = 0.58, P = .005) had significantly lower dementia risk than UH-R. Results were comparable for AD, except that UH-NR was less definitive. Functionally independent older adults appear to cluster into subpopulations with distinct patterns of lifestyle behaviors with different levels of risk for subsequent

  15. Reservoir characterization of the Mississippian Ratcliffe, Richland County, Montana, Williston Basin. Topical report, September 1997

    Energy Technology Data Exchange (ETDEWEB)

    Sippel, M.; Luff, K.D.; Hendricks, M.L.

    1998-07-01

This topical report is a compilation of characterizations by different disciplines of the Mississippian Ratcliffe in portions of Richland County, MT. Goals of the report are to increase understanding of the reservoir rocks, oil-in-place, heterogeneity and methods for improved recovery. The report covers investigations of geology, petrography, reservoir engineering and seismic. The Ratcliffe is a low permeability oil reservoir which appears to be developed across much of the study area and occurs across much of the Williston Basin. The reservoir has not been a primary drilling target in the study area because average reserves have been insufficient to pay out the cost of drilling and completion despite the application of hydraulic fracture stimulation. Oil trapping does not appear to be structurally controlled. For the Ratcliffe to be a viable drilling objective, methods need to be developed for (1) targeting better reservoir development and (2) achieving better completions. A geological model is presented for targeting areas with greater potential for commercial reserves in the Ratcliffe. This model can be best utilized with the aid of 3D seismic. A 3D seismic survey was acquired and is used to demonstrate a methodology for targeting the Ratcliffe. Other data obtained during the project include oriented core, a special formation-imaging log, pressure transient measurements and oil PVT. Although a test of re-entry horizontal drilling was unsuccessful, this completion technology should improve the economic viability of the Ratcliffe. Reservoir simulation of horizontal completions with productivity of three times that of a vertical well suggested two or three horizontal wells in a 258-ha (640-acre) area could recover sufficient reserves for profitable drilling.

  16. Arsenic in freshwater fish in the Chihuahua County water reservoirs (Mexico).

    Science.gov (United States)

    Nevárez, Myrna; Moreno, Myriam Verónica; Sosa, Manuel; Bundschuh, Jochen

    2011-01-01

Water reservoirs in Chihuahua County, Mexico, are affected by point and nonpoint geogenic and anthropogenic pollution sources; fish are located at the top of the food chain and are good indicators of ecosystem pollution. The study goals were to: (i) determine arsenic concentrations in fish collected from the Chuviscar, Chihuahua, San Marcos and El Rejon water reservoirs; (ii) assess whether the fish are suitable for human consumption; and (iii) link the arsenic content in fish with that in sediment and water reported in studies conducted the same year for these water reservoirs. Sampling was done in summer, fall and winter. The highest arsenic concentration in the species varied through the sampling periods: channel catfish (Ictalurus punctatus) with 0.22 ± 0.15 mg/kg dw in winter and green sunfish (Lepomis cyanellus) with 2.00 ± 0.15 mg/kg dw in summer in El Rejon water reservoir. A positive correlation of arsenic content was found across all sampling seasons between the fish samples and the samples of sediment and water. The weekly intake of inorganic arsenic, based on consumption of 0.245 kg of fish muscle/body weight/week, was found to be lower than the acceptable weekly intake of 0.015 mg/kg body weight for inorganic arsenic suggested by FAO/WHO.
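The intake comparison in this abstract is a one-line calculation. A hedged sketch, where the 70-kg body weight and the assumption that all measured arsenic is inorganic are illustrative choices, not values from the study:

```python
# Sketch of the weekly-intake check described above. The 0.015 mg/kg body
# weight limit is the acceptable weekly intake cited in the abstract; the
# 70-kg body weight and 100% inorganic fraction are assumptions.

PTWI_MG_PER_KG_BW = 0.015  # acceptable weekly intake, inorganic arsenic

def weekly_intake_mg_per_kg_bw(conc_mg_per_kg, fish_kg_per_week,
                               body_wt_kg, inorganic_fraction=1.0):
    """Weekly arsenic intake per kg body weight from fish consumption."""
    return conc_mg_per_kg * inorganic_fraction * fish_kg_per_week / body_wt_kg

# Worst case reported: 2.00 mg/kg dw in Lepomis cyanellus, 0.245 kg/week,
# assuming a 70-kg consumer and (conservatively) all arsenic inorganic.
intake = weekly_intake_mg_per_kg_bw(2.00, 0.245, 70)
print(intake, intake < PTWI_MG_PER_KG_BW)
```

Even under these conservative assumptions the computed intake stays below the FAO/WHO threshold, which matches the abstract's conclusion.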

  17. Bathymetry of Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina, 2008

    Science.gov (United States)

    Nagle, D.D.; Campbell, B.G.; Lowery, M.A.

    2009-01-01

    The increasing use and importance of lakes for water supply to communities enhance the need for an accurate methodology to determine lake bathymetry and storage capacity. A global positioning receiver and a fathometer were used to collect position data and water depth in February 2008 at Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and stage-area and -volume relations were created from the geographic information database.

  18. Risk Based Reservoir Operations Using Ensemble Streamflow Predictions for Lake Mendocino in Mendocino County, California

    Science.gov (United States)

    Delaney, C.; Mendoza, J.; Whitin, B.; Hartman, R. K.

    2017-12-01

    Ensemble Forecast Operations (EFO) is a risk based approach of reservoir flood operations that incorporates ensemble streamflow predictions (ESPs) made by NOAA's California-Nevada River Forecast Center (CNRFC). With the EFO approach, each member of an ESP is individually modeled to forecast system conditions and calculate risk of reaching critical operational thresholds. Reservoir release decisions are computed which seek to manage forecasted risk to established risk tolerance levels. A water management model was developed for Lake Mendocino, a 111,000 acre-foot reservoir located near Ukiah, California, to evaluate the viability of the EFO alternative to improve water supply reliability but not increase downstream flood risk. Lake Mendocino is a dual use reservoir, which is owned and operated for flood control by the United States Army Corps of Engineers and is operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has suffered from water supply reliability issues since 2007. The EFO alternative was simulated using a 26-year (1985-2010) ESP hindcast generated by the CNRFC, which approximates flow forecasts for 61 ensemble members for a 15-day horizon. Model simulation results of the EFO alternative demonstrate a 36% increase in median end of water year (September 30) storage levels over existing operations. Additionally, model results show no increase in occurrence of flows above flood stage for points downstream of Lake Mendocino. This investigation demonstrates that the EFO alternative may be a viable approach for managing Lake Mendocino for multiple purposes (water supply, flood mitigation, ecosystems) and warrants further investigation through additional modeling and analysis.
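The core of the EFO calculation described above is simple: model each ensemble member forward, compute the fraction that reaches a critical operational threshold, and release only when that forecasted risk exceeds the tolerance. A sketch with illustrative numbers, not CNRFC forecasts:

```python
# Sketch of risk-based release logic in the spirit of the EFO approach:
# each ensemble member yields a forecast of end-of-horizon storage, and
# the fraction exceeding a critical threshold is compared to a risk
# tolerance. Member values and thresholds are illustrative.

def forecast_risk(member_storages_af, threshold_af):
    """Fraction of ensemble members exceeding the critical storage."""
    exceed = sum(1 for s in member_storages_af if s > threshold_af)
    return exceed / len(member_storages_af)

def release_decision(member_storages_af, threshold_af, risk_tolerance):
    """True if forecasted risk exceeds tolerance, so a release is made."""
    return forecast_risk(member_storages_af, threshold_af) > risk_tolerance

# 10 hypothetical member forecasts of end-of-horizon storage (acre-ft):
members = [98000, 102000, 110000, 95000, 101000,
           99000, 112000, 97000, 103000, 96000]
print(forecast_risk(members, 100000))
print(release_decision(members, 100000, 0.1))
```

Because the decision keys on forecasted risk rather than current storage alone, water can be held when the ensemble shows little chance of encroaching on flood space, which is how the approach raises water-supply storage without raising flood risk.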

  19. Bathymetric maps and water-quality profiles of Table Rock and North Saluda Reservoirs, Greenville County, South Carolina

    Science.gov (United States)

    Clark, Jimmy M.; Journey, Celeste A.; Nagle, Doug D.; Lanier, Timothy H.

    2014-01-01

    Lakes and reservoirs are the water-supply source for many communities. As such, water-resource managers that oversee these water supplies require monitoring of the quantity and quality of the resource. Monitoring information can be used to assess the basic conditions within the reservoir and to establish a reliable estimate of storage capacity. In April and May 2013, a global navigation satellite system receiver and fathometer were used to collect bathymetric data, and an autonomous underwater vehicle was used to collect water-quality and bathymetric data at Table Rock Reservoir and North Saluda Reservoir in Greenville County, South Carolina. These bathymetric data were used to create a bathymetric contour map and stage-area and stage-volume relation tables for each reservoir. Additionally, statistical summaries of the water-quality data were used to provide a general description of water-quality conditions in the reservoirs.

  20. Problems related to water quality and algal control in Lopez Reservoir, San Luis Obispo County, California

    Science.gov (United States)

    Fuller, Richard H.; Averett, Robert C.; Hines, Walter G.

    1975-01-01

A study to determine the present enrichment status of Lopez Reservoir in San Luis Obispo County, California, and to evaluate copper sulfate algal treatment found that stratification in the reservoir regulates nutrient release and that algal control has been ineffective. Nuisance algal blooms, particularly from March to June, have been a problem in the warm multipurpose reservoir since it was initially filled following intense storms in 1968-69. The cyanophyte Anabaena unispora has been dominant; codominant species are the diatoms Stephanodiscus astraea and Cyclotella operculata and the chlorophytes Pediastrum duplex and Sphaerocystis schroeteri. During an A. unispora bloom in May 1972, the total lake surface cell count was nearly 100,000 cells/ml. Thermal stratification from late spring through autumn results in oxygen deficiency in the hypolimnion and metalimnion caused by bacterial oxidation of organic detritus. The anaerobic conditions favor chemical reduction of organic matter, which constitutes 10-14% of the sediment. As algae die, sink to the bottom, and decompose, nutrients are released to the hypolimnion and, with the autumn overturn, are spread to the epilimnion. Algal blooms not only hamper recreation but, through depletion of dissolved oxygen in the epilimnion, may have caused periodic fishkills. Copper sulfate mixed with sodium citrate and applied at 1.10-1.73 lbs/acre has not significantly reduced algal growth; a method for determining correct dosage is presented. (Lynch-Wisconsin)

  1. Cache-Cache Comparison for Supporting Meaningful Learning

    Science.gov (United States)

    Wang, Jingyun; Fujino, Seiji

    2015-01-01

The paper presents a meaningful discovery learning environment called "cache-cache comparison" for a personalized learning support system. The process of seeking hidden relations or concepts in "cache-cache comparison" is intended to encourage learners to actively locate new knowledge in their knowledge framework and check…

  2. Tag-Split Cache for Efficient GPGPU Cache Utilization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lingda; Hayes, Ari; Song, Shuaiwen; Zhang, Eddy

    2016-06-01

Modern GPUs employ caches to improve memory system efficiency. However, a large amount of cache space is underutilized due to the irregular memory accesses and poor spatial locality commonly exhibited by GPU applications. Our experiments show that using smaller cache lines can improve cache space utilization, but it also frequently suffers significant performance loss by introducing a large number of extra cache requests. In this work, we propose a novel cache design named tag-split cache (TSC) that enables fine-grained cache storage to address the problem of cache space underutilization while keeping the number of memory requests unchanged. TSC divides the tag into two parts to reduce storage overhead, and it supports multiple cache line replacements in one cycle.
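The underutilization problem motivating TSC can be quantified by measuring, for a given line size, what fraction of the bytes fetched into cache lines is ever touched. A sketch with an illustrative strided trace exhibiting poor spatial locality (not one of the paper's benchmarks):

```python
from collections import defaultdict

# Sketch: cache line utilization = touched bytes / fetched bytes.
# The strided single-byte trace is illustrative of poor spatial locality.

def line_utilization(byte_addresses, line_bytes):
    touched = defaultdict(set)  # line number -> set of touched byte offsets
    for addr in byte_addresses:
        touched[addr // line_bytes].add(addr % line_bytes)
    used = sum(len(offsets) for offsets in touched.values())
    fetched = len(touched) * line_bytes  # every miss fetches a whole line
    return used / fetched

trace = [i * 64 for i in range(16)]  # one byte touched every 64 bytes
print(line_utilization(trace, 128))  # large lines: most fetched bytes unused
print(line_utilization(trace, 32))   # smaller lines: better utilization
```

On this trace, halving and quartering the line size doubles and quadruples utilization, which mirrors the observation that smaller lines help utilization, while the extra lines fetched illustrate the added request traffic TSC is designed to avoid.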

  3. Advanced Oil Recovery Technologies for Improved Recovery from Slope Basin Clastic Reservoirs, Nash Draw Brushy Canyon Pool, Eddy County, NM

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Mark B.

    1999-02-24

The Nash Draw Brushy Canyon Pool in Eddy County, New Mexico, is a cost-shared field demonstration project in the US Department of Energy Class III Program. A major goal of the Class III Program is to stimulate the use of advanced technologies to increase ultimate recovery from slope-basin clastic reservoirs. Advanced characterization techniques are being used at the Nash Draw project to develop reservoir management strategies for optimizing oil recovery from this Delaware reservoir. Analysis, interpretation, and integration of recently acquired geologic, geophysical, and engineering data revealed that the initial reservoir characterization was too simplistic to capture the critical features of this complex formation. Contrary to the initial characterization, a new reservoir description evolved that provided sufficient detail regarding the complexity of the Brushy Canyon interval at Nash Draw. This new reservoir description is being used as a risk reduction tool to identify "sweet spots" for a development drilling program as well as to evaluate pressure maintenance strategies. The reservoir characterization, geological modeling, 3-D seismic interpretation, and simulation studies have provided a detailed model of the Brushy Canyon zones. This model was used to predict the success of different reservoir management scenarios and to aid in determining the most favorable combination of targeted drilling, pressure maintenance, well stimulation, and well spacing to improve recovery from this reservoir.

  4. Stress, seismicity and structure of shallow oil reservoirs of Clinton County, Kentucky. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Hamilton-Smith, T. [Kentucky Geological Survey, Lexington, KY (United States)

    1995-12-12

Between 1993 and 1995, geophysicists of the Los Alamos National Laboratory, in a project funded by the US Department of Energy, conducted extensive microseismic monitoring of oil production in the recently discovered High Bridge pools of Clinton County and were able to acquire abundant, high-quality data in the northern of the two pools. This investigation provided both three-dimensional spatial and kinetic data relating to the High Bridge fracture system that previously had not been available. Funded in part by the Los Alamos National Laboratory, the Kentucky Geological Survey committed to developing a geological interpretation of these geophysical results that would be of practical benefit to future oil exploration. This publication is a summary of the results of that project. Contents include the following: introduction; discovery and development; regional geology; fractured reservoir geology; oil migration and entrapment; subsurface stress; induced seismicity; structural geology; references; and appendices.

  5. Estimation of reservoir storage capacity using multibeam sonar and terrestrial lidar, Randy Poynter Lake, Rockdale County, Georgia, 2012

    Science.gov (United States)

    Lee, K.G.

    2013-01-01

    The U.S. Geological Survey, in cooperation with the Rockdale County Department of Water Resources, conducted a bathymetric and topographic survey of Randy Poynter Lake in northern Georgia in 2012. The Randy Poynter Lake watershed drains surface area from Rockdale, Gwinnett, and Walton Counties. The reservoir serves as the water supply for the Conyers-Rockdale Big Haynes Impoundment Authority. The Randy Poynter reservoir was surveyed to prepare a current bathymetric map and determine storage capacities at specified water-surface elevations. Topographic and bathymetric data were collected using a marine-based mobile mapping unit to estimate storage capacity. The marine-based mobile mapping unit operates with several components: multibeam echosounder, singlebeam echosounder, light detection and ranging system, navigation and motion-sensing system, and data acquisition computer. All data were processed and combined to develop a triangulated irregular network, a reservoir capacity table, and a bathymetric contour map.
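Once a bathymetric surface exists, storage capacity reduces to summing depth times area over the surface. A simplified sketch using a regular grid in place of the triangulated irregular network described above; grid spacing and depths are illustrative:

```python
# Sketch: reservoir volume from a regular grid of depths, a simplified
# stand-in for a triangulated irregular network. Grid spacing and depth
# values are illustrative, not survey data.

def grid_volume(depths_m, cell_size_m):
    """Sum depth x cell area over a 2-D list of depths (metres)."""
    cell_area = cell_size_m ** 2
    return sum(d * cell_area for row in depths_m for d in row)

depths = [[2.0, 3.5, 1.0],
          [4.0, 5.5, 2.5]]       # hypothetical sounding grid (metres)
print(grid_volume(depths, 10.0))  # cubic metres at this water surface
```

Repeating the sum with the surface lowered to each stage of interest yields the reservoir capacity table the survey reports.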

  6. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase the load on the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories are still a controversial issue: while some studies find strong coherency far too expensive to be used in the Web context, others find that it can be maintained at a low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show differences in the outcome of the simulation of a Web cache depending on the workload being used and on the probability distribution used to approximate updates to the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
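
The strong/weak distinction above can be illustrated with a small sketch: weak coherency trusts a cached copy until a time-to-live expires (risking staleness, saving traffic), while strong coherency relies on origin-driven invalidation. The class and field names are illustrative, not from the paper:

```python
# Sketch contrasting weak (TTL-based) and strong (server-invalidation)
# cache coherency for a cached web document. Illustrative only.
import time

class CachedDoc:
    def __init__(self, body, ttl):
        self.body = body
        self.fetched_at = time.time()
        self.ttl = ttl            # weak coherency: trust the copy this long
        self.invalidated = False  # strong coherency: origin callback sets this

    def fresh_weak(self, now=None):
        """Weak coherency: serve without contacting the origin until the
        TTL expires, so stale copies are possible but traffic is low."""
        now = time.time() if now is None else now
        return (now - self.fetched_at) < self.ttl

    def fresh_strong(self):
        """Strong coherency: the origin notifies caches on every update,
        trading extra control traffic for guaranteed freshness."""
        return not self.invalidated

doc = CachedDoc("<html>...</html>", ttl=60)
print(doc.fresh_weak(now=doc.fetched_at + 30))  # True: still within TTL
doc.invalidated = True                          # origin pushed an update
print(doc.fresh_strong())                       # False: must revalidate
```

The controversy the abstract describes comes down to how often `invalidated` messages must be sent versus how often a TTL-fresh copy is in fact stale.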

  7. An evaluation of seepage gains and losses in Indian Creek Reservoir, Ada County, Idaho, April 2010–November 2011

    Science.gov (United States)

    Williams, Marshall L.; Etheridge, Alexandra B.

    2013-01-01

    The U.S. Geological Survey, in cooperation with the Idaho Department of Water Resources, conducted an investigation on Indian Creek Reservoir, a small impoundment in east Ada County, Idaho, to quantify groundwater seepage into and out of the reservoir. Data from the study will assist the Idaho Department of Water Resources' Comprehensive Aquifer Management Planning effort to estimate available water resources in Ada County. Three independent methods were utilized to estimate groundwater seepage: (1) the water-budget method; (2) the seepage-meter method; and (3) the segmented Darcy method. Reservoir seepage was quantified during the periods of April through August 2010 and February through November 2011. With the water-budget method, all measurable sources of inflow to and outflow from the reservoir were quantified, with the exception of groundwater; the water-budget equation was then solved for groundwater inflow to or outflow from the reservoir. The seepage-meter method relies on the placement of seepage meters into the bottom sediments of the reservoir for the direct measurement of water flux across the sediment-water interface. The segmented Darcy method utilizes a combination of water-level measurements in the reservoir and in adjacent near-shore wells to calculate water-table gradients between the wells and the reservoir within defined segments of the reservoir shoreline; the Darcy equation was then used to calculate groundwater inflow to and outflow from the reservoir. Water-budget results provided continuous, daily estimates of seepage over the full period of data collection, while the seepage-meter and segmented Darcy methods provided instantaneous estimates of seepage. As a result of these and other differences in methodology, comparisons of seepage estimates provided by the three methods are considered semi-quantitative. The results of the water-budget derived estimates of seepage indicate seepage to be seasonally variable in terms of the direction and magnitude
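
The segmented Darcy calculation described above applies Q = K · A · (dh/dl) to each shoreline segment, where K is hydraulic conductivity, A the segment's flow area, and dh/dl the water-table gradient between a near-shore well and the reservoir. A minimal sketch with hypothetical segment values (not data from Indian Creek Reservoir):

```python
# Sketch of the segmented Darcy method: per-segment seepage from the
# Darcy equation Q = K * A * (dh/dl). All values are hypothetical.

def darcy_seepage(segments):
    """Return per-segment and total flow (positive = into the reservoir).
    Each segment: (K ft/day, flow area ft^2, well head ft,
    reservoir stage ft, well-to-shore distance ft)."""
    flows = []
    for K, area, h_well, h_res, dist in segments:
        gradient = (h_well - h_res) / dist
        flows.append(K * area * gradient)  # ft^3/day
    return flows, sum(flows)

segments = [
    (2.0, 5000.0, 101.0, 100.0, 200.0),  # well head above stage: inflow
    (1.5, 4000.0,  99.5, 100.0, 100.0),  # well head below stage: outflow
]
per_segment, total = darcy_seepage(segments)
print(per_segment, total)  # per-segment flows (ft^3/day) and net seepage
```

Summing signed segment flows gives the net groundwater exchange, which is what the study compares against the water-budget and seepage-meter estimates.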

  8. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

    Full Text Available Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.
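
The locational problem surveyed above can be framed as a facility-location problem: choose k cache sites to minimize total demand-weighted latency from clients to their nearest cache. A greedy heuristic (a common OR approach, not an algorithm from the paper) can be sketched as:

```python
# Sketch: greedy proxy-cache placement. dist[c][s] is the latency from
# client c to candidate site s; demand[c] is client c's request volume.
# Greedily add the site that most reduces total latency to nearest cache.
# Instance values are made up for illustration.

def greedy_cache_placement(dist, demand, k):
    n_sites = len(dist[0])
    chosen = []

    def cost(sites):
        # Total demand-weighted latency when each client uses its
        # nearest chosen cache site.
        return sum(demand[c] * min(dist[c][s] for s in sites)
                   for c in range(len(dist)))

    for _ in range(k):
        best = min((s for s in range(n_sites) if s not in chosen),
                   key=lambda s: cost(chosen + [s]))
        chosen.append(best)
    return chosen, cost(chosen)

dist = [[1, 5, 9],
        [8, 2, 4],
        [7, 6, 1]]
demand = [10, 5, 2]
sites, total = greedy_cache_placement(dist, demand, k=2)
print(sites, total)  # [0, 1] 32
```

Greedy placement is only a heuristic; the paper's point is that exact and approximate location models of this kind are an open avenue for Web caching research.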

  9. Numerical simulation of groundwater movement and managed aquifer recharge from Sand Hollow Reservoir, Hurricane Bench area, Washington County, Utah

    Science.gov (United States)

    Marston, Thomas M.; Heilweil, Victor M.

    2012-01-01

    The Hurricane Bench area of Washington County, Utah, is a 70 square-mile area extending south from the Virgin River and encompassing Sand Hollow basin. Sand Hollow Reservoir, located on Hurricane Bench, was completed in March 2002 and is operated primarily as a managed aquifer recharge project by the Washington County Water Conservancy District. The reservoir is situated on a thick sequence of the Navajo Sandstone and Kayenta Formation. Total recharge to the underlying Navajo aquifer from the reservoir was about 86,000 acre-feet from 2002 to 2009. Natural recharge as infiltration of precipitation was approximately 2,100 acre-feet per year for the same period. Discharge occurs as seepage to the Virgin River, municipal and irrigation well withdrawals, and seepage to drains at the base of reservoir dams. Within the Hurricane Bench area, unconfined groundwater-flow conditions generally exist throughout the Navajo Sandstone. Navajo Sandstone hydraulic-conductivity values from regional aquifer testing range from 0.8 to 32 feet per day. The large variability in hydraulic conductivity is attributed to bedrock fractures that trend north-northeast across the study area.A numerical groundwater-flow model was developed to simulate groundwater movement in the Hurricane Bench area and to simulate the movement of managed aquifer recharge from Sand Hollow Reservoir through the groundwater system. The model was calibrated to combined steady- and transient-state conditions. The steady-state portion of the simulation was developed and calibrated by using hydrologic data that represented average conditions for 1975. The transient-state portion of the simulation was developed and calibrated by using hydrologic data collected from 1976 to 2009. Areally, the model grid was 98 rows by 76 columns with a variable cell size ranging from about 1.5 to 25 acres. 
Smaller cells were used to represent the reservoir to accurately simulate the reservoir bathymetry and nearby monitoring wells; larger

  10. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorith...

  11. Assessment of managed aquifer recharge at Sand Hollow Reservoir, Washington County, Utah, updated to conditions through 2007

    Science.gov (United States)

    Heilweil, Victor M.; Ortiz, Gema; Susong, David D.

    2009-01-01

    Sand Hollow Reservoir in Washington County, Utah, was completed in March 2002 and is operated primarily as an aquifer storage and recovery project by the Washington County Water Conservancy District (WCWCD). Since its inception in 2002 through 2007, surface-water diversions of about 126,000 acre-feet to Sand Hollow Reservoir have resulted in a generally rising reservoir stage and surface area. Large volumes of runoff during spring 2005-06 allowed the WCWCD to fill the reservoir to a total storage capacity of more than 50,000 acre-feet, with a corresponding surface area of about 1,300 acres and reservoir stage of about 3,060 feet during 2006. During 2007, reservoir stage generally decreased to about 3,040 feet with a surface-water storage volume of about 30,000 acre-feet. Water temperature in the reservoir shows large seasonal variation and has ranged from about 3 to 30 deg C from 2003 through 2007. Except for anomalously high recharge rates during the first year when the vadose zone beneath the reservoir was becoming saturated, estimated ground-water recharge rates have ranged from 0.01 to 0.09 feet per day. Estimated recharge volumes have ranged from about 200 to 3,500 acre-feet per month from March 2002 through December 2007. Total ground-water recharge during the same period is estimated to have been about 69,000 acre-feet. Estimated evaporation rates have varied from 0.04 to 0.97 feet per month, resulting in evaporation losses of 20 to 1,200 acre-feet per month. Total evaporation from March 2002 through December 2007 is estimated to have been about 25,000 acre-feet. Results of water-quality sampling at monitoring wells indicate that by 2007, managed aquifer recharge had arrived at sites 37 and 36, located 60 and 160 feet from the reservoir, respectively. 
However, different peak arrival dates for specific conductance, chloride, chloride/bromide ratios, dissolved oxygen, and total dissolved-gas pressures at each monitoring well indicate the complicated nature of
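
The monthly volumes reported above follow from rates multiplied by reservoir surface area. A rough consistency check, assuming a constant 1,300-acre surface area and a 30-day month (both simplifications; the actual area varied with stage):

```python
# Rough check: rate (ft/time) times surface area (acres) gives acre-feet.
# Inputs are the upper values reported in the abstract; constant area and
# a 30-day month are simplifying assumptions.

surface_area_acres = 1300.0        # reported 2006 surface area
recharge_rate_ft_per_day = 0.09    # upper reported recharge rate
evap_rate_ft_per_month = 0.97      # upper reported evaporation rate

recharge_af_per_month = recharge_rate_ft_per_day * 30 * surface_area_acres
evap_af_per_month = evap_rate_ft_per_month * surface_area_acres

print(round(recharge_af_per_month))  # 3510: near the reported ~3,500 acre-ft
print(round(evap_af_per_month))      # 1261: near the reported ~1,200 acre-ft
```

That the upper-bound rates reproduce the upper-bound monthly volumes suggests the reported figures are internally consistent.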

  12. Small County: Development of a Virtual Environment for Instruction in Geological Characterization of Petroleum Reservoirs

    Science.gov (United States)

    Banz, B.; Bohling, G.; Doveton, J.

    2008-12-01

    Traditional programs of geological education continue to be focused primarily on the evaluation of surface or near-surface geology accessed at outcrops and shallow boreholes. However, most students who graduate to careers in geology work almost entirely on subsurface problems, interpreting drilling records and petrophysical logs from exploration and production wells. Thus, college graduates commonly find themselves ill-prepared when they enter the petroleum industry and require specialized training in drilling and petrophysical log interpretation. To aid in this training process, we are developing an environment for interactive instruction in the geological aspects of petroleum reservoir characterization employing a virtual subsurface closely reflecting the geology of the US mid-continent, in the fictional setting of Small County, Kansas. Stochastic simulation techniques are used to generate the subsurface characteristics, including the overall geological structure, distributions of facies, porosity, and fluid saturations, and petrophysical logs. The student then explores this subsurface by siting exploratory wells and examining drilling and petrophysical log records obtained from those wells. We are developing the application using the Eclipse Rich Client Platform, which allows for the rapid development of a platform-agnostic application while providing an immersive graphical interface. The application provides an array of views to enable relevant data display and student interaction. One such view is an interactive map of the county allowing the student to view the locations of existing well bores and select pertinent data overlays such as a contour map of the elevation of an interesting interval. Additionally, from this view a student may choose the site of a new well. Another view emulates a drilling log, complete with drilling rate plot and iconic representation of examined drill cuttings. 
From here, students are directed to stipulate subsurface lithology and

  13. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke

    2014-01-01

    The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key takes only an expected average of t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works... A standard construction can be easily made cache-oblivious but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two...

  14. Prospective study of Dietary Approaches to Stop Hypertension- and Mediterranean-style dietary patterns and age-related cognitive change: the Cache County Study on Memory, Health and Aging.

    Science.gov (United States)

    Wengreen, Heidi; Munger, Ronald G; Cutler, Adele; Quach, Anna; Bowles, Austin; Corcoran, Christopher; Tschanz, Joann T; Norton, Maria C; Welsh-Bohmer, Kathleen A

    2013-11-01

    Healthy dietary patterns may protect against age-related cognitive decline, but results of studies have been inconsistent. We examined associations between Dietary Approaches to Stop Hypertension (DASH)- and Mediterranean-style dietary patterns and age-related cognitive change in a prospective, population-based study. Participants included 3831 men and women ≥65 y of age who were residents of Cache County, UT, in 1995. Cognitive function was assessed by using the Modified Mini-Mental State Examination (3MS) ≤4 times over 11 y. Diet-adherence scores were computed by summing across the energy-adjusted rank-order of individual food and nutrient components and categorizing participants into quintiles of the distribution of the diet accordance score. Mixed-effects repeated-measures models were used to examine 3MS scores over time across increasing quintiles of dietary accordance scores and individual food components that comprised each score. The range of rank-order DASH and Mediterranean diet scores was 1661-25,596 and 2407-26,947, respectively. Higher DASH and Mediterranean diet scores were associated with higher average 3MS scores. People in quintile 5 of DASH averaged 0.97 points higher than those in quintile 1 (P = 0.001). The corresponding difference for Mediterranean quintiles was 0.94 (P = 0.001). These differences were consistent over 11 y. Higher intakes of whole grains and of nuts and legumes were also associated with higher average 3MS scores [mean quintile 5 compared with 1 difference: 1.19]. In conclusion, the DASH and Mediterranean dietary patterns were associated with consistently higher levels of cognitive function in elderly men and women over an 11-y period. Whole grains and nuts and legumes were positively associated with higher cognitive function and may be core neuroprotective foods common to various healthy plant-centered diets around the globe.
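
The score construction described above (rank each energy-adjusted component, sum the ranks, cut the summed score into quintiles) can be sketched as follows. The two components and their values are made up; the real study summed many food and nutrient components:

```python
# Sketch of a rank-order diet-adherence score with quintile assignment.
# Component values are hypothetical, for two components and 10 people.

def rank(values):
    """1-based rank of each value (ties kept in input order), higher = better."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def quintiles(scores):
    """Assign quintile 1..5 by position in the sorted score distribution."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    q = [0] * len(scores)
    for pos, i in enumerate(order):
        q[i] = pos * 5 // len(scores) + 1
    return q

# Two hypothetical components (e.g., whole grains, legumes) for 10 people.
comp_a = [3, 9, 1, 7, 5, 8, 2, 6, 4, 10]
comp_b = [2, 8, 3, 9, 4, 10, 1, 7, 5, 6]
total = [a + b for a, b in zip(rank(comp_a), rank(comp_b))]
print(quintiles(total))  # quintile of the summed rank score per person
```

The study then compares mean 3MS trajectories across these quintile groups in mixed-effects models.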

  15. Prospective study of Dietary Approaches to Stop Hypertension– and Mediterranean-style dietary patterns and age-related cognitive change: the Cache County Study on Memory, Health and Aging

    Science.gov (United States)

    Munger, Ronald G; Cutler, Adele; Quach, Anna; Bowles, Austin; Corcoran, Christopher; Tschanz, JoAnn T; Norton, Maria C; Welsh-Bohmer, Kathleen A

    2013-01-01

    Background: Healthy dietary patterns may protect against age-related cognitive decline, but results of studies have been inconsistent. Objective: We examined associations between Dietary Approaches to Stop Hypertension (DASH)– and Mediterranean-style dietary patterns and age-related cognitive change in a prospective, population-based study. Design: Participants included 3831 men and women ≥65 y of age who were residents of Cache County, UT, in 1995. Cognitive function was assessed by using the Modified Mini-Mental State Examination (3MS) ≤4 times over 11 y. Diet-adherence scores were computed by summing across the energy-adjusted rank-order of individual food and nutrient components and categorizing participants into quintiles of the distribution of the diet accordance score. Mixed-effects repeated-measures models were used to examine 3MS scores over time across increasing quintiles of dietary accordance scores and individual food components that comprised each score. Results: The range of rank-order DASH and Mediterranean diet scores was 1661–25,596 and 2407–26,947, respectively. Higher DASH and Mediterranean diet scores were associated with higher average 3MS scores. People in quintile 5 of DASH averaged 0.97 points higher than those in quintile 1 (P = 0.001). The corresponding difference for Mediterranean quintiles was 0.94 (P = 0.001). These differences were consistent over 11 y. Higher intakes of whole grains and of nuts and legumes were also associated with higher average 3MS scores [mean quintile 5 compared with 1 difference: 1.19]. Conclusions: The DASH and Mediterranean dietary patterns were associated with consistently higher levels of cognitive function in elderly men and women over an 11-y period. Whole grains and nuts and legumes were positively associated with higher cognitive function and may be core neuroprotective foods common to various healthy plant-centered diets around the globe. PMID:24047922

  16. Preliminary Assessment of Landslides Along the Florida River Downstream from Lemon Reservoir, La Plata County, Colorado

    Science.gov (United States)

    Schulz, William H.; Coe, Jeffrey A.; Ellis, William L.; Kibler, John D.

    2006-01-01

    Nearly two dozen shallow landslides were active during spring 2005 on a hillside located along the east side of the Florida River about one kilometer downstream from Lemon Reservoir in La Plata County, southwestern Colorado. Landslides on the hillside directly threaten human safety, residential structures, a county roadway, utilities, and the Florida River, and indirectly threaten downstream areas and Lemon Dam. Most of the area where the landslides occurred was burned during the 2002 Missionary Ridge wildfire. We performed geologic mapping, subsurface exploration and sampling, radiocarbon dating, and shallow ground-water and ground-displacement monitoring to assess landslide activity. Active landslides during spring 2005 were as large as 35,000 m3 and confined to colluvium. Debris flows were mobilized from most of the landslides, were as large as 1,500 m3, and traveled as far as 250 m. Landslide activity was triggered by elevated ground-water pressures within the colluvium caused by infiltration of snowmelt. Landslide activity ceased as ground-water pressures dropped during the summer. Shallow landslides on the hillside appear to be much more likely following the Missionary Ridge fire because of the loss of tree root strength and evapotranspiration. We used monitoring data and observations to develop preliminary, approximate rainfall/snowmelt thresholds above which shallow landslide activity can be expected. Landslides triggered during spring 2005 occurred within a 1.97 x 10^7 m3 older landslide that extends, on average, about 40 m into bedrock. The south end of this older landslide appears to have experienced deep secondary landsliding. Radiocarbon dating of sediments at the head of the older landslide suggests that the landslide was active about 1,424-1,696 years ago. A relatively widespread wildfire may have preceded the older landslide, and the landslide may have occurred during a wetter time. The wetter climate and effects of the wildfire would likely have

  17. Reservoir characterization of the Ordovician Red River Formation in southwest Williston Basin Bowman County, ND and Harding County, SD

    Energy Technology Data Exchange (ETDEWEB)

    Sippel, M.A.; Luff, K.D.; Hendricks, M.L.; Eby, D.E.

    1998-07-01

    This topical report is a compilation of characterizations by different disciplines of the Red River Formation in the southwest portion of the Williston Basin and the oil reservoirs which it contains in an area which straddles the state line between North Dakota and South Dakota. Goals of the report are to increase understanding of the reservoir rocks, oil-in-place, heterogeneity, and methods for improved recovery. The report is divided by discipline into five major sections: (1) geology, (2) petrography-petrophysical, (3) engineering, (4) case studies, and (5) geophysical. Interwoven in these sections are results from demonstration wells which were drilled or selected for special testing to evaluate important concepts for field development and enhanced recovery. The Red River study area has been successfully explored with two-dimensional (2D) seismic. Improved reservoir characterization utilizing 3-dimensional (3D) seismic has been investigated for identification of structural and stratigraphic reservoir compartments. These seismic characterization tools are integrated with geological and engineering studies. Targeted drilling, based on 3D seismic predictions of porosity development, was successful in developing significant reserves at close distances to old wells. Short-lateral and horizontal drilling technologies were tested for improved completion efficiency. Lateral completions should improve economics for both primary and secondary recovery where low permeability is a problem and higher density drilling is limited by drilling cost. Low water injectivity and widely spaced wells have restricted the application of waterflooding in the past. Water injection tests were performed in both a vertical and a horizontal well. Data from these tests were used to predict long-term injection and oil recovery.

  18. Cooperative Proxy Caching for Wireless Base Stations

    Directory of Open Access Journals (Sweden)

    James Z. Wang

    2007-01-01

    Full Text Available This paper proposes a mobile cache model to facilitate cooperative proxy caching in wireless base stations. This mobile cache model uses a network cache line to record the caching state information about a web document for effective data search and cache space management. Based on the proposed mobile cache model, a P2P cooperative proxy caching scheme is proposed that uses a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to network and geographic environment changes, to achieve efficient data search, data caching, and data replication. Based on demand, the aggregate effect of data caching, searching, and replicating actions by individual proxy servers automatically migrates the cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to flow and replicate the heads of network cache lines of web documents associated with a moving mobile host to the new base station during the mobile host handoff. These replicated cache line heads provide direct links to the cached web documents accessed by the moving mobile hosts in the previous base station, thus improving mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.
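
The "network cache line" idea above can be sketched as a per-document record of which base stations hold a copy, with the line head replicated to a mobile host's new base station on handoff. The names and structure below are illustrative, not the paper's actual data layout:

```python
# Sketch of a per-document network cache line with handoff replication.
# Class names and fields are hypothetical illustrations of the idea.

class CacheLine:
    """Caching state for one web document across cooperating proxies."""
    def __init__(self, url):
        self.url = url
        self.holders = []  # base-station proxies holding a copy

    def add_holder(self, proxy):
        if proxy not in self.holders:
            self.holders.append(proxy)

    def nearest_copy(self, proxy, distance):
        """Locate the closest cooperating proxy that has a copy."""
        return min(self.holders, key=lambda p: distance(proxy, p),
                   default=None)

def handoff(cache_lines, old_bs, new_bs):
    """On host handoff, replicate cache-line heads to the new base station
    so its clients keep direct links to documents cached at the old one."""
    for line in cache_lines:
        if old_bs in line.holders:
            line.add_holder(new_bs)

line = CacheLine("http://example.com/page")
line.add_holder("bs1")
handoff([line], "bs1", "bs2")
print(line.holders)  # ['bs1', 'bs2']
```

Repeated lookups via `nearest_copy` are what gradually migrate popular documents toward the clients that request them.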

  19. Stratigraphic Interpretation and Reservoir Implications of the Arbuckle Group (Cambrian-Ordovician) using 3D Seismic, Osage County, Oklahoma

    Science.gov (United States)

    Keeling, Ryan Marc

    The Arbuckle Group in northeastern Oklahoma consists of multiple carbonate formations, along with several relatively thin sandstone units. The group is a part of the "Great American Carbonate Bank" of the mid-continent and can be found regionally as far east as the Arkoma Basin in Arkansas, and as far west as the Anadarko Basin in Oklahoma. The Arbuckle is part of the craton-wide Sauk sequence, which is both underlain and overlain by regional unconformities. The Arbuckle is not deposited directly on top of a source rock; for reservoirs within the Arbuckle to become charged with hydrocarbons, they must be juxtaposed against source rocks or lie along migration pathways. Inspired by the petroleum potential of proximal Arbuckle reservoirs and the lack of local stratigraphic understanding, this study aims to subdivide Arbuckle stratigraphy and identify porosity networks using 3D seismic within the study area of western Osage County, Oklahoma. These methods and findings can then be applied to petroleum exploration in Cambro-Ordovician carbonates in other localities. My research question is: Can the Arbuckle in SW Osage County be stratigraphically subdivided based on 3D seismic characteristics? This paper outlines the depositional environment of the Arbuckle, synthesizes previous studies, and examines the Arbuckle as a petroleum system in northeastern Oklahoma. The investigation includes an interpretation of intra-Arbuckle unconformities, areas of secondary porosity (specifically, sequence boundaries), and hydrocarbon potential of the Arbuckle Group using 3D seismic data interpretation with a cursory analysis of cored intervals.

  20. Random Fill Cache Architecture (Preprint)

    Science.gov (United States)

    2014-10-01


  1. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more...... addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while...... keeping the timepredictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses. Furthermore...

  2. Mercury, methylmercury, and other constituents in sediment and water from seasonal and permanent wetlands in the Cache Creek settling basin and Yolo Bypass, Yolo County, California, 2005-06

    Science.gov (United States)

    Marvin-DiPasquale, Mark; Alpers, Charles N.; Fleck, Jacob A.

    2009-01-01

    This report presents surface water and surface (top 0-2 cm) sediment geochemical data collected during 2005-2006, as part of a larger study of mercury (Hg) dynamics in seasonal and permanently flooded wetland habitats within the lower Sacramento River basin, Yolo County, California. The study was conducted in two phases. Phase I represented reconnaissance sampling and included three locations within the Cache Creek drainage basin; two within the Cache Creek Nature Preserve (CCNP) and one in the Cache Creek Settling Basin (CCSB) within the creek's main channel near the southeast outlet to the Yolo Bypass. Two additional downstream sites within the Yolo Bypass Wildlife Area (YBWA) were also sampled during Phase I, including one permanently flooded wetland and one seasonally flooded wetland, which had begun to be flooded only 1–2 days before Phase I sampling. Results from Phase I include: (a) a negative correlation between total mercury (THg) and the percentage of methylmercury (MeHg) in unfiltered surface water; (b) a positive correlation between sediment THg concentration and sediment organic content; (c) surface water and sediment THg concentrations were highest at the CCSB site; (d) sediment inorganic reactive mercury (Hg(II)R) concentration was positively related to sediment oxidation-reduction potential and negatively related to sediment acid volatile sulfur (AVS) concentration; (e) sediment Hg(II)R concentrations were highest at the two YBWA sites; (f) unfiltered surface water MeHg concentration was highest at the seasonal wetland YBWA site, and sediment MeHg was highest at the permanently flooded YBWA site; (g) a 1,000-fold increase in sediment pore water sulfate concentration was observed in the downstream transect from the CCNP to the YBWA; and (h) low sediment pore water sulfide concentrations toward the end of the seasonal flooding period (end of May 2006). Results from Phase II sampling include: (a) sediment MeHg concentration and the percentage of THg as

  3. On the Limits of Cache-Obliviousness

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    In this paper, we present lower bounds for permuting and sorting in the cache-oblivious model. We prove that (1) I/O optimal cache-oblivious comparison based sorting is not possible without a tall cache assumption, and (2) there does not exist an I/O optimal cache-oblivious algorithm for permutin...

  4. 75 FR 13301 - Los Vaqueros Reservoir Expansion, Contra Costa and Alameda Counties, CA

    Science.gov (United States)

    2010-03-19

    ...The Bureau of Reclamation, as the National Environmental Policy Act Federal lead agency, and the Contra Costa Water District, as the California Environmental Quality Act lead agency, have prepared the Los Vaqueros Reservoir Expansion Final EIS/EIR. Los Vaqueros Expansion is a proposed action in the August 2000 CALFED Bay-Delta Program Programmatic Record of Decision. The Final EIS/EIR evaluated two options for expanding Los Vaqueros Reservoir from its existing capacity of 100 thousand acre-feet (TAF). A 175 TAF expansion option would expand the reservoir to 275 TAF. This expansion option would be operated for environmental water management and San Francisco Bay Area water supply reliability. A 60 TAF expansion option would expand the reservoir to 160 TAF. This second option would primarily be operated to improve Contra Costa Water District (CCWD) dry year water supply reliability and water quality. A Notice of Availability of the Draft EIS/EIR was published in the Federal Register on February 20, 2009 (74 FR 7922). The written comment period on the Draft EIS/EIR ended on April 21, 2009. The Final EIS/EIR contains responses to all written comments received during the review period.

  5. Analysis of Three Cobble Ring Sites at Abiquiu Reservoir, Rio Arriba County, New Mexico.

    Science.gov (United States)

    1989-01-01

    confluence of the Rio Chama and the Ojo Caliente River. Wendorf and Miller (1959) note the occurrence of a number of Late Archaic sites at high...the late 1870s, the village of Tierra Amarilla assumed the role of administrative and commercial center of the Rio Chama region. For centuries, the...Chama and Ojo Caliente Rivers, Rio Arriba County, New Mexico. School of American Research, Contract Archaeology Division, Report 065. 1980

  6. Chemical and physical characteristics of water and sediment in Scofield Reservoir, Carbon County, Utah

    Science.gov (United States)

    Waddell, Kidd M.; Darby, D.W.; Theobald, S.M.

    1985-01-01

    Evaluations based on the nutrient content of the inflow, outflow, water in storage, and the dissolved-oxygen depletion during the summer indicate that the trophic state of Scofield Reservoir is borderline between mesotrophic and eutrophic and may become highly eutrophic unless corrective measures are taken to limit nutrient inflow. Sediment deposition in Scofield Reservoir during 1943-79 is estimated to be 3,000 acre-feet, and has decreased the original storage capacity of the reservoir by 4 percent. The sediment contains some coal, and age dating of those sediments (based on the radioisotope lead-210) indicates that most of the coal was deposited prior to about 1950. Scofield Reservoir is dimictic, with turnovers occurring in the spring and autumn. Water in the reservoir circulates completely to the bottom during turnovers. The concentration of dissolved oxygen decreases with depth except during parts of the turnover periods. Below an altitude of about 7,590 feet, where 20 percent of the water is stored, the concentration of dissolved oxygen was less than 2 milligrams per liter during most of the year. During the summer stratification period, the depletion of dissolved oxygen in the deeper layers is coincident with supersaturated conditions in the shallow layers; this is attributed to plant photosynthesis and bacterial respiration in the reservoir. During October 1, 1979-August 31, 1980, the discharge-weighted average concentration of dissolved solids was 195 milligrams per liter in the combined inflow from Fish, Pondtown, and Mud Creeks, and was 175 milligrams per liter in the outflow (and to the Price River).
The smaller concentration in the outflow was due primarily to precipitation of calcium carbonate in the reservoir; about 80 percent of the decrease can be accounted for through loss as calcium carbonate. The estimated discharge-weighted average concentration of total nitrogen (dissolved plus suspended) in the combined inflow of Fish, Pondtown, and Mud Creeks was 1

  7. Data cache organization for accurate timing analysis

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Huber, Benedikt; Puffitsch, Wolfgang

    2013-01-01

    Caches are essential to bridge the gap between the high latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution times...... different data areas, such as stack, global data, and heap allocated data, share the same cache. Some addresses are known statically, other addresses are only known at runtime. With a standard cache organization all those different data areas must be considered by worst-case execution time analysis...
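The sharing problem the abstract describes can be made concrete with a toy direct-mapped cache simulator (cache sizes, line size, and address traces below are all hypothetical): interleaved stack and heap accesses that happen to collide in a shared cache miss on every access, while splitting the areas across two half-sized caches restores predictable hits that can be analyzed independently.

```python
class DirectMappedCache:
    """Direct-mapped cache: each memory block maps to exactly one line."""
    def __init__(self, num_lines, line_size=4):
        self.num_lines = num_lines
        self.line_size = line_size
        self.lines = [None] * num_lines  # tag stored per line

    def access(self, addr):
        """Return True on a hit; on a miss, fill the line."""
        block = addr // self.line_size
        idx = block % self.num_lines
        hit = self.lines[idx] == block
        self.lines[idx] = block
        return hit

def count_misses(make_cache, trace):
    cache = make_cache()
    return sum(0 if cache.access(a) else 1 for a in trace)

# Stack (low addresses) and heap (high addresses) accesses that map
# to the same line indices in a 16-line shared cache.
stack = [0, 4, 8, 12]
heap = [64, 68, 72, 76]
trace = [a for pair in zip(stack, heap) for a in pair] * 2

shared_misses = count_misses(lambda: DirectMappedCache(16), trace)
# Split organization: each data area gets its own half-sized cache.
split_misses = (count_misses(lambda: DirectMappedCache(8), stack * 2)
                + count_misses(lambda: DirectMappedCache(8), heap * 2))
```

In the shared cache every access misses (16 misses) because the two areas keep evicting each other; with split caches only the 8 cold misses remain, and a WCET analysis can bound each area in isolation.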

  8. Advanced Oil Recovery Technologies for Improved Recovery from Slope Basin Clastic Reservoirs, Nash Draw Brushy Canyon Pool, Eddy County, NM

    Energy Technology Data Exchange (ETDEWEB)

    Mark B. Murphy

    2005-09-30

    The Nash Draw Brushy Canyon Pool in Eddy County, New Mexico, was a cost-shared field demonstration project in the U.S. Department of Energy Class III Program. A major goal of the Class III Program was to stimulate the use of advanced technologies to increase ultimate recovery from slope-basin clastic reservoirs. Advanced characterization techniques were used at the Nash Draw Pool (NDP) project to develop reservoir management strategies for optimizing oil recovery from this Delaware reservoir. The objective of the project was to demonstrate that a development program, which was based on advanced reservoir management methods, could significantly improve oil recovery at the NDP. Initial goals were (1) to demonstrate that an advanced development drilling and pressure maintenance program can significantly improve oil recovery compared to existing technology applications and (2) to transfer these advanced methodologies to other oil and gas producers. Analysis, interpretation, and integration of recently acquired geological, geophysical, and engineering data revealed that the initial reservoir characterization was too simplistic to capture the critical features of this complex formation. Contrary to the initial characterization, a new reservoir description evolved that provided sufficient detail regarding the complexity of the Brushy Canyon interval at Nash Draw. This new reservoir description was used as a risk reduction tool to identify 'sweet spots' for a development drilling program as well as to evaluate pressure maintenance strategies. The reservoir characterization, geological modeling, 3-D seismic interpretation, and simulation studies have provided a detailed model of the Brushy Canyon zones. This model was used to predict the success of different reservoir management scenarios and to aid in determining the most favorable combination of targeted drilling, pressure maintenance, well stimulation, and well spacing to improve recovery from this reservoir. An

  9. 3D Seismic Reflection Amplitude and Instantaneous Frequency Attributes in Mapping Thin Hydrocarbon Reservoir Lithofacies: Morrison NE Field and Morrison Field, Clark County, KS

    Science.gov (United States)

    Raef, Abdelmoneam; Totten, Matthew; Vohs, Andrew; Linares, Aria

    2017-12-01

    Thin hydrocarbon reservoir facies pose resolution challenges and waveform-signature opportunities in seismic reservoir characterization and prospect identification. In this study, we present a case study in which instantaneous frequency variation in response to a thin hydrocarbon pay zone is analyzed and integrated with other independent information to explain drilling results and optimize future drilling decisions. In Morrison NE Field, some wells with poor economics have resulted from well placement incognizant of reservoir heterogeneities. The study area in Clark County, Kansas, USA, was covered by a surface 3D seismic reflection survey in 2010. The target horizon is the Viola limestone, which continues to produce from 7 of the 12 wells drilled within the survey area. Seismic attribute extraction and analysis were conducted with emphasis on instantaneous attributes and amplitude anomalies to better understand and predict reservoir heterogeneities and their control on hydrocarbon entrapment settings. We have identified a higher instantaneous frequency, lower amplitude seismic facies that is in good agreement with distinct lithofacies that exhibit better (higher porosity) reservoir properties, as inferred from well-log analysis and petrographic inspection of well cuttings. This study presents a pre-drilling, data-driven approach of identifying sub-resolution reservoir seismic facies in a carbonate formation. This workflow will assist in placing new development wells in other locations within the area. Our low amplitude, high instantaneous frequency seismic reservoir facies have been corroborated by findings based on well logs, petrographic analysis data, and drilling results.

  10. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications
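The paper's actual cache-miss metric is more refined, but its core intuition — a layout is good when neighboring mesh elements land in the same memory block — can be sketched by counting block-crossing edges (the graph, layouts, and block size below are hypothetical):

```python
def cross_block_edges(layout, edges, block_size):
    """Count edges whose endpoints fall in different blocks of the
    layout -- a crude stand-in for expected cache misses when a
    traversal touches both endpoints of each edge."""
    pos = {v: i for i, v in enumerate(layout)}
    return sum(1 for u, v in edges
               if pos[u] // block_size != pos[v] // block_size)

# A 6-vertex path mesh: 0-1-2-3-4-5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
coherent = [0, 1, 2, 3, 4, 5]    # neighbors stored together
scattered = [0, 3, 1, 4, 2, 5]   # neighbors spread across blocks

m_good = cross_block_edges(coherent, edges, block_size=2)
m_bad = cross_block_edges(scattered, edges, block_size=2)
```

Minimizing such a count over all permutations is the combinatorial optimization the paper attacks with an out-of-core multilevel algorithm; here the coherent layout crosses 2 block boundaries versus 5 for the scattered one.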

  11. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol...... for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms....
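For a concrete (much simpler) instance of inversion-adaptive sorting — illustrative only, not the paper's algorithms — plain insertion sort already performs O(n + Inv) comparisons, where Inv is the number of inversions in the input:

```python
def inversions(a):
    """Number of out-of-order pairs; 0 for a sorted input."""
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a))
               if a[i] > a[j])

def adaptive_insertion_sort(a):
    """Insertion sort: the total number of comparisons is bounded by
    (n - 1) + inversions(a), so nearly-sorted inputs sort in
    near-linear time."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= x:
                break
            a[j + 1] = a[j]   # shift larger element right
            j -= 1
        a[j + 1] = x
    return a, comparisons
```

On an already sorted 5-element input the sort spends exactly 4 comparisons; each inversion adds at most one more.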

  12. Cache-oblivious string dictionaries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2006-01-01

    We present static cache-oblivious dictionary structures for strings which provide analogues of tries and suffix trees in the cache-oblivious model. Our construction takes as input either a set of strings to store, a single string for which all suffixes are to be stored, a trie, a compressed trie......, or a suffix tree, and creates a cache-oblivious data structure which performs prefix queries in O(logB n + |P|/B) I/Os, where n is the number of leaves in the trie, P is the query string, and B is the block size. This query cost is optimal for unbounded alphabets. The data structure uses linear space....
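The contribution above is the cache-oblivious layout itself; the query it supports is the classic trie prefix query, shown here in an ordinary pointer-based form (nested dicts) just to illustrate the interface, not the cache-oblivious data structure:

```python
class Trie:
    """Plain trie over strings; '$' marks the end of a stored word."""
    def __init__(self, words=()):
        self.root = {}
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True

    def count_with_prefix(self, prefix):
        """Number of stored words starting with `prefix`."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return 0
            node = node[ch]
        stack, total = [node], 0
        while stack:                      # count '$' markers in subtree
            n = stack.pop()
            for k, v in n.items():
                if k == '$':
                    total += 1
                else:
                    stack.append(v)
        return total

t = Trie(["car", "cart", "cat", "dog"])
n = t.count_with_prefix("ca")
```

The cache-oblivious construction answers the same prefix queries but arranges the trie nodes in memory so that any root-to-leaf search costs O(log_B n + |P|/B) block transfers.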

  13. National Dam Safety Program. Hillburn Reservoir Dam (Inventory Number NY 974), Passaic River Basin, Rockland County, New York. Phase I Inspection Report.

    Science.gov (United States)

    1981-06-30


  14. Mobility- Aware Cache Management in Wireless Environment

    Science.gov (United States)

    Kaur, Gagandeep; Saini, J. S.

    2010-11-01

    In infrastructure wireless environments, a base station provides communication links between mobile clients and remote servers. Placing a proxy cache at the base station is an effective way of managing the wireless Internet bandwidth efficiently. However, in the situation of non-uniform heavy traffic, requests of all the mobile clients in the service area of the base station may cause overload in the cache. If the proxy cache has to release some cache space for a new mobile client in the environment, overload occurs. In this paper, we propose a novel cache management strategy to decrease the penalty of overloaded traffic on the proxy and to reduce the number of remote accesses by increasing the cache hit ratio. We predict the number of overloads ahead of time based on past history and adapt the cache to the heavy traffic so that it can provide continuous and fair service to the current mobile clients and incoming ones. We have tested the algorithms over a real implementation of the cache management system in the presence of fault tolerance and security. In our cache replacement algorithm, mobility of the clients, predicted overload number, size of the cached packets, and their access frequencies are considered altogether. Performance results show that our cache management strategy outperforms the existing policies, with fewer overloads and a higher cache hit ratio.
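A minimal sketch of such a replacement decision, with entirely hypothetical weights and field names: each cached item is scored by its access frequency per unit size, discounted by the probability that its client is still in the base station's service area, and the lowest-scoring item is evicted.

```python
def choose_victim(entries):
    """Pick the cache entry to evict. Each entry carries an access
    frequency, a packet size, and an estimated probability that its
    mobile client remains in the service area (all illustrative)."""
    def score(e):
        return e['freq'] * e['stay_prob'] / e['size']
    return min(entries, key=score)['id']

entries = [
    {'id': 'a', 'freq': 10, 'size': 2, 'stay_prob': 0.9},
    {'id': 'b', 'freq': 50, 'size': 5, 'stay_prob': 0.1},  # client leaving
    {'id': 'c', 'freq': 3,  'size': 1, 'stay_prob': 0.8},
]
victim = choose_victim(entries)
```

Even though entry 'b' is the most frequently accessed, its client is predicted to leave the cell, so it is the first candidate for eviction.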

  15. Geothermal low-temperature reservoir assessment in Dona Ana County, New Mexico. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Icerman, L.; Lohse, R.L.

    1983-04-01

    Sixty-four shallow temperature gradient holes were drilled on the Mesilla Valley East Mesa (east of Interstate Highways 10 and 25), stretching from US Highway 70 north of Las Cruces to NM Highway 404 adjacent to Anthony, New Mexico. Using these data as part of the site selection process, Chaffee Geothermal, Ltd. of Denver, Colorado, drilled two low-temperature geothermal production wells to the immediate north and south of Tortugas Mountain and encountered a significant low-temperature reservoir, with a temperature of about 150°F and flow rates of 750 to 1500 gallons per minute at depths from 650 to 1250 feet. These joint exploration activities resulted in the discovery and confirmation of a 30-square-mile low-temperature geothermal anomaly just a few miles to the east of Las Cruces that has been newly named the Las Cruces East Mesa Geothermal Field. Elevated temperature and heat flow data suggest that the thermal anomaly is fault controlled and extends southward to the Texas border, covering a 100-square-mile area. With the exception of some localized perturbations, the anomaly appears to decrease in temperature from the north to the south. Deeper drilling is required in the southern part of the anomaly to confirm the existence of commercially exploitable geothermal waters.

  16. Water- and air-quality and surficial bed-sediment monitoring of the Sweetwater Reservoir watershed, San Diego County, California, 2003-09

    Science.gov (United States)

    Mendez, Gregory O.; Majewski, Michael S.; Foreman, William T.; Morita, Andrew Y.

    2015-01-01

    In 1998, the U.S. Geological Survey, in cooperation with the Sweetwater Authority, began a study to assess the overall health of the Sweetwater watershed in San Diego County, California. This study was designed to provide a data set that could be used to evaluate potential effects from the construction and operation of State Route 125 within the broader context of the water quality and air quality in the watershed. The study included regular sampling of water, air, and surficial bed sediment at Sweetwater Reservoir (SWR) for chemical constituents, including volatile organic compounds (VOCs), base-neutral and acid-extractable organic compounds (BNAs) that include polycyclic aromatic hydrocarbons (PAHs), pesticides, and metals. Additionally, water samples were collected for anthropogenic organic indicator compounds in and around SWR. Background water samples were collected at Loveland Reservoir for VOCs, BNAs, pesticides, and metals. Surficial bed-sediment samples were collected for PAHs, organochlorine pesticides, and metals at Sweetwater and Loveland Reservoirs.

  17. Advanced Reservoir Characterization and Development through High-Resolution 3C3D Seismic and Horizontal Drilling: Eva South Morrow Sand Unit, Texas County, Oklahoma

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, David M.; Miller, William A.; Wilson, Travis C.

    2002-03-11

    The Eva South Morrow Sand Unit is located in western Texas County, Oklahoma. The field produces from an upper Morrow sandstone, termed the Eva sandstone, deposited in a transgressive valley-fill sequence. The field is defined as a combination structural-stratigraphic trap; the reservoir lies in a convex up-dip bend in the valley and is truncated on the west side by the Teepee Creek fault. Although the field has been a successful waterflood since 1993, reservoir heterogeneity and compartmentalization have impeded overall sweep efficiency. A 4.25-square-mile high-resolution, three-component three-dimensional (3C3D) seismic survey was acquired in order to improve reservoir characterization and pinpoint the optimal location of a new horizontal producing well, the ESU 13-H.

  18. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analyses of caches which are hard...

  19. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object to a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and for new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications
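The core idea — fronting a slow backend with a cache that also tracks how often the backend is actually contacted — can be sketched as a get-through wrapper (names hypothetical; the real TICF service exposes this over web-service calls):

```python
class CachingService:
    """Wrap a slow lookup behind an in-memory cache and count
    backend calls, which is what the caching layer saves."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.backend_calls = 0

    def get(self, key):
        if key not in self.cache:
            self.backend_calls += 1          # remote access
            self.cache[key] = self.backend(key)
        return self.cache[key]               # served from cache

svc = CachingService(lambda k: k.upper())    # stand-in for a slow call
results = [svc.get(k) for k in ['a', 'b', 'a', 'a', 'b']]
```

Five requests reach the wrapper but only two reach the backend; exposing the same cache as a web service additionally lets separate applications share the stored objects.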

  20. A Time-predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspour, Sahar; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.

  1. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users....... Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes.In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows...

  2. A Survey of Cache Bypassing Techniques

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-04-01

    Full Text Available With increasing core-count, the cache demand of modern processors has also increased. However, due to strict area/power budgets and presence of poor data-locality workloads, blindly scaling cache capacity is both infeasible and ineffective. Cache bypassing is a promising technique to increase effective cache capacity without incurring power/area costs of a larger sized cache. However, injudicious use of cache bypassing can lead to bandwidth congestion and increased miss-rate and hence, intelligent techniques are required to harness its full potential. This paper presents a survey of cache bypassing techniques for CPUs, GPUs and CPU-GPU heterogeneous systems, and for caches designed with SRAM, non-volatile memory (NVM and die-stacked DRAM. By classifying the techniques based on key parameters, it underscores their differences and similarities. We hope that this paper will provide insights into cache bypassing techniques and associated tradeoffs and will be useful for computer architects, system designers and other researchers.
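A minimal illustration of the idea (the policy below is a deliberately naive reuse predictor, not any specific technique from the survey): a direct-mapped cache that refuses to insert blocks seen only once, so streaming data cannot evict a hot resident block.

```python
from collections import defaultdict

class BypassingCache:
    """Direct-mapped cache that bypasses blocks with no observed reuse.
    The predictor is intentionally simple: a block is inserted only
    after it has been requested more than once."""
    def __init__(self, num_lines):
        self.lines = [None] * num_lines
        self.num_lines = num_lines
        self.requests = defaultdict(int)
        self.hits = 0
        self.misses = 0

    def access(self, block):
        self.requests[block] += 1
        idx = block % self.num_lines
        if self.lines[idx] == block:
            self.hits += 1
        else:
            self.misses += 1
            if self.requests[block] > 1:   # predicted reuse: insert
                self.lines[idx] = block
            # else: bypass -- leave the resident block in place

c = BypassingCache(4)
# Hot block 0 interleaved with streaming blocks 4, 8, 12 that all
# map to the same line (all values hypothetical).
for b in [0, 0, 4, 0, 8, 0, 12, 0]:
    c.access(b)
```

With always-insert, each streaming block would evict block 0 and only 1 access would hit; with bypassing, the hot block survives and 3 of its accesses hit.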

  3. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  4. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  5. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2011-01-01

    This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all......-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache...
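Written out, the abstract's lower bound says any cache-oblivious search structure needs at least

```latex
\lg e \cdot \log_B N \;\approx\; 1.443 \,\log_B N
```

memory transfers between some pair of memory levels — a constant factor lg e above the log_B N transfers of a B-tree tuned to the true block size B.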

  6. La honte qui cache la honte qui cache...

    OpenAIRE

    Dussy, Dorothée

    2004-01-01

    Summary: http://www.sigila.msh-paris.fr/la_honte.htm; International audience; This text explores the mechanisms by which Louise, a former nun and retired medical secretary, spent her whole life chaining together reasons for shame, invariably tied to some intrusion on her intimacy. Amnesic, Louise hid one shame behind another, with no memory of her original secret, until the memory came back to her one morning, on the way to her work...

  7. Test data generation for LRU cache-memory testing

    OpenAIRE

    Evgeni, Kornikhin

    2009-01-01

    System functional testing of microprocessors deals with many assembly programs of given behavior. The paper proposes a new constraint-based algorithm for generating initial cache-memory contents for a given behavior of an assembly program (with cache misses and hits). Although the algorithm works for any type of cache memory, the paper describes it in detail only for the basic types of cache memory: fully associative caches and direct-mapped caches.
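A behavioral model like the following LRU simulator is the oracle such generated test data must satisfy: given a capacity and an access sequence, it fixes exactly which accesses hit and which miss (a sketch of the cache model only, not the paper's constraint-based generator):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache; reports 'hit' or 'miss' per access
    so a behavior sequence can be checked against it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)       # mark most recently used
            return 'hit'
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)    # evict the LRU entry
        self.data[key] = True
        return 'miss'

cache = LRUCache(2)
behavior = [cache.access(k) for k in ['a', 'b', 'a', 'c', 'b']]
```

Accessing 'a' again promotes it, so the later insertion of 'c' evicts 'b', which therefore misses on its second access.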

  8. Limnological Conditions and Occurrence of Taste-and-Odor Compounds in Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina, 2006-2009

    Science.gov (United States)

    Journey, Celeste A.; Arrington, Jane M.; Beaulieu, Karen M.; Graham, Jennifer L.; Bradley, Paul M.

    2011-01-01

    Limnological conditions and the occurrence of taste-and-odor compounds were studied in two reservoirs in Spartanburg County, South Carolina, from May 2006 to June 2009. Lake William C. Bowen and Municipal Reservoir #1 are relatively shallow, meso-eutrophic, warm monomictic, cascading impoundments on the South Pacolet River. Overall, water-quality conditions and phytoplankton community assemblages were similar between the two reservoirs but differed seasonally. Median dissolved geosmin concentrations in the reservoirs ranged from 0.004 to 0.006 microgram per liter. Annual maximum dissolved geosmin concentrations tended to occur between March and May. In this study, peak dissolved geosmin production occurred in April and May 2008, ranging from 0.050 to 0.100 microgram per liter at the deeper reservoir sites. Peak dissolved geosmin production was not concurrent with maximum cyanobacterial biovolumes, which tended to occur in the summer (July to August), but was concurrent with a peak in the fraction of genera with known geosmin-producing strains in the cyanobacteria group. Nonetheless, annual maximum cyanobacterial biovolumes rarely resulted in cyanobacteria dominance of the phytoplankton community. In both reservoirs, elevated dissolved geosmin concentrations were correlated to environmental factors indicative of unstratified conditions and reduced algal productivity, but not to nutrient concentrations or ratios. With respect to potential geosmin sources, elevated geosmin concentrations were correlated to greater fractions of genera with known geosmin-producing strains in the cyanobacteria group and to biovolumes of a specific geosmin-producing cyanobacteria genus (Oscillatoria), but not to actinomycetes concentrations. Conversely, environmental factors that correlated with elevated cyanobacterial biovolumes were indicative of stable water columns (stratified conditions), warm water temperatures, reduced nitrogen concentrations, longer residence times, and high

  9. Improving Internet Archive Service through Proxy Cache.

    Science.gov (United States)

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext transfer protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  10. Funnel Heap - A Cache Oblivious Priority Queue

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    2002-01-01

    The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory ...

  11. Engineering a cache-oblivious sorting algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  12. A Cache Timing Analysis of HC-256

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    In this paper, we describe a cache-timing attack against the stream cipher HC-256, which is the strong version of eStream winner HC-128. The attack is based on an abstract model of cache timing attacks that can also be used for designing stream ciphers. From the observations made in our analysis,...
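Cache-timing attacks of this family exploit the fact that hits and misses reveal which cache set a secret-dependent table lookup touched. A deliberately abstract prime-and-probe toy — not an attack on HC-256 itself, with all parameters hypothetical:

```python
def prime_probe(victim_access, num_sets=8):
    """Toy prime+probe: fill every cache set with attacker data, let
    the victim run, then see which set no longer holds attacker data.
    That set leaks the high bits of the victim's secret table index."""
    cache = {s: 'attacker' for s in range(num_sets)}   # prime
    victim_access(cache)                               # victim runs
    return [s for s in range(num_sets)                 # probe
            if cache[s] != 'attacker']

secret_index = 37  # hidden inside the victim

def victim(cache, line_size=16, num_sets=8):
    # The victim's table lookup touches exactly one cache set.
    cache[(secret_index // line_size) % num_sets] = 'victim'

leaked = prime_probe(victim)
```

The attacker never reads `secret_index`, yet the probed set number equals `secret_index // line_size`, which is exactly the kind of partial-state leakage the paper's abstract model captures.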

  13. Cache as ca$h can

    NARCIS (Netherlands)

    Grootjans, W.J.; Hochstenbach, M.; Hurink, Johann L.; Kern, Walter; Luczak, M.; Puite, Q.; Resing, J.; Spieksma, F.

    2000-01-01

    In this paper we consider the problem of placing proxy caches in a network to get a better performance of the net. We develop a heuristic method to decide in which nodes of the network proxies should be installed and what the sizes of these caches should be. The heuristic attempts to minimize a
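One simple placement heuristic in this spirit — far cruder than the paper's, which also sizes each cache — is to greedily put proxies at the highest-demand nodes within a fixed budget (node names and demand figures hypothetical):

```python
def place_proxies(demand, budget):
    """Greedy heuristic: install a proxy cache at the `budget` nodes
    that generate the most request traffic."""
    ranked = sorted(demand, key=demand.get, reverse=True)
    return sorted(ranked[:budget])

demand = {'n1': 120, 'n2': 45, 'n3': 300, 'n4': 80}
sites = place_proxies(demand, budget=2)
```

A fuller heuristic would also weigh link costs between nodes and choose a cache size per site, which is the optimization the paper addresses.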

  14. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e log B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes...

  15. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both
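The software-controlled reconfiguration loop described above boils down to a constrained choice: among profiled cache configurations, pick the lowest-energy one whose runtime stays within a bounded performance loss. A sketch with made-up profiling numbers:

```python
def pick_cache_config(configs, baseline_time, max_slowdown=0.05):
    """Return the size of the lowest-energy cache configuration whose
    profiled runtime is within `max_slowdown` of the baseline
    (all figures hypothetical)."""
    ok = [c for c in configs
          if c['time'] <= baseline_time * (1 + max_slowdown)]
    return min(ok, key=lambda c: c['energy'])['size_kb']

configs = [
    {'size_kb': 2048, 'time': 1.00, 'energy': 10.0},  # full cache
    {'size_kb': 1024, 'time': 1.03, 'energy': 6.5},   # within bound
    {'size_kb': 512,  'time': 1.12, 'energy': 4.0},   # too slow
]
best = pick_cache_config(configs, baseline_time=1.00)
```

Here the 512 KB configuration is cheapest in energy but violates the 5% performance bound, so the 1024 KB configuration is selected; the hardware support described in the abstract exists to collect these per-configuration profiles cheaply at runtime.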

  16. The dCache scientific storage cloud

    CERN Document Server

    CERN. Geneva

    2014-01-01

    For over a decade, the dCache team has provided software for handling big data for a diverse community of scientists. The team has also amassed a wealth of operational experience from using this software in production. With this experience, the team have refined dCache with the goal of providing a "scientific cloud": a storage solution that satisfies all requirements of a user community by exposing different facets of dCache with which users interact. Recent development, as part of this "scientific cloud" vision, has introduced a new facet: a sync-and-share service, often referred to as "dropbox-like storage". This work has been strongly focused on local requirements, but will be made available in future releases of dCache allowing others to adopt dCache solutions. In this presentation we will outline the current status of the work: both the successes and limitations, and the direction and time-scale of future work.

  17. Efficient Mobile Client Caching Supporting Transaction Semantics

    Directory of Open Access Journals (Sweden)

    IlYoung Chung

    2000-05-01

    Full Text Available In mobile client-server database systems, caching of frequently accessed data is an important technique that will reduce the contention on the narrow bandwidth wireless channel. As the server in mobile environments may not have any information about the state of its clients' cache (stateless server), using a broadcasting approach to transmit the updated data lists to numerous concurrent mobile clients is an attractive approach. In this paper, a caching policy is proposed to maintain cache consistency for mobile computers. The proposed protocol adopts asynchronous (non-periodic) broadcasting as the cache invalidation scheme, and supports transaction semantics in mobile environments. With the asynchronous broadcasting approach, the proposed protocol can improve throughput by reducing transaction aborts with low communication costs. We study the performance of the protocol by means of simulation experiments.
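The broadcast-invalidation mechanism — without the transactional machinery of the proposed protocol — can be sketched as follows; class and method names are hypothetical:

```python
class MobileClient:
    """Client cache that applies invalidation broadcasts from a
    stateless server."""
    def __init__(self):
        self.cache = {}

    def read(self, key, server):
        if key not in self.cache:
            self.cache[key] = server.data[key]   # remote access
        return self.cache[key]

    def on_invalidation(self, updated_keys):
        for k in updated_keys:
            self.cache.pop(k, None)              # drop stale copies

class Server:
    """Stateless server: it tracks no per-client cache contents and
    simply broadcasts the keys it has updated."""
    def __init__(self, data):
        self.data = dict(data)
        self.clients = []

    def write(self, key, value):
        self.data[key] = value
        for c in self.clients:                   # broadcast invalidation
            c.on_invalidation([key])

server = Server({'x': 1, 'y': 2})
client = MobileClient()
server.clients.append(client)

v1 = client.read('x', server)   # cached copy of the old value
server.write('x', 99)           # broadcast invalidates that copy
v2 = client.read('x', server)   # forced re-fetch sees the new value
```

Because the server keeps no per-client state, correctness rests entirely on clients hearing the broadcasts; the paper's asynchronous scheme sends them immediately on update rather than on a fixed period.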

  18. Efficient sorting using registers and caches

    DEFF Research Database (Denmark)

    Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S.

    2002-01-01

    Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior. Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines. A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many...

  19. Geochemical analysis of atlantic rim water, carbon county, wyoming: New applications for characterizing coalbed natural gas reservoirs

    Science.gov (United States)

    McLaughlin, J.F.; Frost, C.D.; Sharma, Shruti

    2011-01-01

    Coalbed natural gas (CBNG) production typically requires the extraction of large volumes of water from target formations, thereby influencing any associated reservoir systems. We describe isotopic tracers that provide immediate data on the presence or absence of biogenic natural gas and identify whether methane-containing reservoirs are hydrologically confined. Isotopes of dissolved inorganic carbon and strontium, along with water quality data, were used to characterize the CBNG reservoirs and hydrogeologic systems of Wyoming's Atlantic Rim. Water was analyzed from a stream, springs, and CBNG wells. Strontium isotopic composition and major ion geochemistry identify two groups of surface water samples. Muddy Creek and Mesaverde Group spring samples are Ca-Mg-SO4-type water with higher 87Sr/86Sr, reflecting relatively young groundwater recharged from precipitation in the Sierra Madre. Groundwaters emitted from the Lewis Shale springs are Na-HCO3-type waters with lower 87Sr/86Sr, reflecting sulfate reduction and more extensive water-rock interaction. Methanogenically enriched δ13CDIC was used to distinguish coalbed waters from other natural waters. Enriched δ13CDIC, between -3.6 and +13.3‰, identified spring water that likely originates from Mesaverde coalbed reservoirs. Strongly positive δ13CDIC, between +12.6 and +22.8‰, identified those coalbed reservoirs that are confined, whereas lower δ13CDIC, between +0.0 and +9.9‰, identified wells within unconfined reservoir systems. Copyright © 2011. The American Association of Petroleum Geologists. All rights reserved.

  20. Soil erosion and sediment fluxes analysis: a watershed study of the Ni Reservoir, Spotsylvania County, VA, USA.

    Science.gov (United States)

    Pope, Ian C; Odhiambo, Ben K

    2014-03-01

    Anthropogenic forces that alter the physical landscape are known to cause significant soil erosion, which has a negative impact on surface water bodies such as rivers, lakes/reservoirs, and coastal zones; sediment control has thus become one of the central aspects of catchment management planning. The revised universal soil loss equation (RUSLE) empirical model, erosion pins, and isotopic sediment core analyses were used to evaluate watershed erosion, stream bank erosion, and reservoir sediment accumulation rates for Ni Reservoir, in central Virginia. Land use and land cover seem to be the dominant control on watershed soil erosion, with barren land and human-disturbed areas contributing the most sediment, and forest and herbaceous areas contributing the least. Results show a 7% increase in human development from 2001 (14%) to 2009 (21.6%), corresponding to an increase in soil loss of 0.82 Mg ha(-1) year(-1) over the same period. (210)Pb-based sediment accumulation rates at three locations in Ni Reservoir were 1.020, 0.364, and 0.543 g cm(-2) year(-1), respectively, indicating that sediment accumulation and distribution in the reservoir are influenced by reservoir configuration and significant contributions from bedload. All three locations indicate an increase in modern sediment accumulation rates. Erosion pin results show variability in stream bank erosion, with values ranging from 4.7 to 11.3 cm year(-1). These results indicate that urban growth and the decline in vegetative cover have increased sediment fluxes from the watershed and pose a significant threat to the long-term sustainability of the Ni Reservoir as urbanization continues.
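The revised universal soil loss equation mentioned above estimates annual soil loss as a simple product of empirical factors, which is why land cover dominates the result. The sketch below uses illustrative factor values (they are assumptions, not values from the study) to show the effect of the cover factor alone.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Revised Universal Soil Loss Equation: A = R * K * LS * C * P.

    R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
    C: cover management, P: support practice. A is in Mg ha^-1 yr^-1 when
    the factors use the corresponding metric units.
    """
    return R * K * LS * C * P

# Same rainfall, soil, and slope; only the cover-management factor differs:
barren = rusle_soil_loss(R=300.0, K=0.3, LS=1.2, C=0.45, P=1.0)
forest = rusle_soil_loss(R=300.0, K=0.3, LS=1.2, C=0.003, P=1.0)
# with these illustrative factors, barren land erodes 150x faster than forest
```

Because the model is multiplicative, a change in any single factor (here C) scales the predicted loss proportionally, mirroring the barren-land versus forest contrast reported in the abstract.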

  1. Joshua tree (Yucca brevifolia) seeds are dispersed by seed-caching rodents

    Science.gov (United States)

    Vander Wall, S.B.; Esque, T.; Haines, D.; Garnett, M.; Waitman, B.A.

    2006-01-01

    Joshua tree (Yucca brevifolia) is a distinctive and charismatic plant of the Mojave Desert. Although floral biology and seed production of Joshua tree and other yuccas are well understood, the fate of Joshua tree seeds has never been studied. We tested the hypothesis that Joshua tree seeds are dispersed by seed-caching rodents. We radioactively labelled Joshua tree seeds and followed their fates at five source plants in Potosi Wash, Clark County, Nevada, USA. Rodents made a mean of 30.6 caches, usually within 30 m of the base of source plants. Caches contained a mean of 5.2 seeds buried 3-30 mm deep. A variety of rodent species appears to have prepared the caches. Three of the 836 Joshua tree seeds (0.4%) cached germinated the following spring. Seed germination using rodent exclosures was nearly 15%. More than 82% of seeds in open plots were removed by granivores, and neither microsite nor supplemental water significantly affected germination. Joshua tree produces seeds in indehiscent pods or capsules, which rodents dismantle to harvest seeds. Because there is no other known means of seed dispersal, it is possible that the Joshua tree-rodent seed dispersal interaction is an obligate mutualism for the plant.

  2. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response times. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
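A profit metric of the kind described above can be sketched as benefit per byte of cache space: reference rate times recomputation cost, divided by result size. The exact formula WATCHMAN uses may differ, so treat the following as an assumed heuristic with illustrative field names.

```python
def profit(ref_rate, exec_cost, size):
    """Assumed benefit-per-byte heuristic: cost saved per unit of cache space."""
    return ref_rate * exec_cost / size

def choose_victim(cached_sets):
    """Evict the retrieved set with the lowest profit, not the LRU page."""
    return min(cached_sets, key=lambda s: profit(s["rate"], s["cost"], s["size"]))

cached = [
    {"id": "Q1", "rate": 0.50, "cost": 120.0, "size": 4_000},    # profit 0.015
    {"id": "Q2", "rate": 0.10, "cost": 10.0,  "size": 500},      # profit 0.002
    {"id": "Q3", "rate": 0.90, "cost": 300.0, "size": 200_000},  # profit 0.00135
]
victim = choose_victim(cached)
```

Note how Q3 is evicted despite being the most frequently referenced and most expensive query: its huge result set makes it the least profitable per byte, which is exactly the behavior a pure LRU policy cannot express.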

  3. Taste and odor occurrence in Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina

    Science.gov (United States)

    Journey, Celeste; Arrington, Jane M.

    2009-01-01

    The U.S. Geological Survey and Spartanburg Water are working cooperatively on an ongoing study of Lake Bowen and Reservoir #1 to identify environmental factors that enhance or influence the production of geosmin in the source-water reservoirs. Spartanburg Water is using information from this study to develop management strategies to reduce (short-term solution) and prevent (long-term solution) geosmin occurrence. Spartanburg Water utility treats and distributes drinking water to the Spartanburg area of South Carolina. The drinking water sources for the area are Lake William C. Bowen (Lake Bowen) and Municipal Reservoir #1 (Reservoir #1), located north of Spartanburg. These reservoirs, which were formed by the impoundment of the South Pacolet River, were assessed in 2006 by the South Carolina Department of Health and Environmental Control (SCDHEC) as being fully supportive of all uses based on established criteria. Nonetheless, Spartanburg Water had noted periodic taste and odor problems due to the presence of geosmin, a naturally occurring compound in the source water. Geosmin is not harmful, but its presence in drinking water is aesthetically unpleasant.

  4. An Integrated Study of the Grayburg/San Andres Reservoir, Foster and South Cowden Fields, Ector County, Texas, Class II

    Energy Technology Data Exchange (ETDEWEB)

    Trentham, Robert C.; Weinbrandt, Richard; Robinson, William C.; Widner, Kevin

    2001-05-03

    The objectives of the project were to: (1) Thoroughly understand the 60-year history of the field. (2) Develop a reservoir description using geology and 3D seismic. (3) Isolate the upper Grayburg in wells producing from multiple intervals to stop cross flow. (4) Re-align and optimize the upper Grayburg waterflood. (5) Determine well condition, identify re-frac candidates, evaluate the effectiveness of well work and obtain bottom hole pressure data for simulation utilizing pressure transient testing field wide. (6) Quantitatively integrate all the data to guide the field operations, including identification of new well locations utilizing reservoir simulation.

  5. Search-Order Independent State Caching

    DEFF Research Database (Denmark)

    Evangelista, Sami; Kristensen, Lars Michael

    2009-01-01

    State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with a depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method for general state exploring algorithms that are independent of the search order (i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states and unmet states).

  6. An integrated study of the Grayburg/San Andres reservoir, Foster and South Cowden fields, Ector County, Texas, Class II

    Energy Technology Data Exchange (ETDEWEB)

    Trentham, Robert C.; Robinson, William C.; Widner, Kevin; Weinbrandt, Richard

    2000-04-14

    A project to recover economic amounts of oil from a very mature oil field is being conducted by Laguna Petroleum Corporation of Midland, Texas, with partial funding from a U. S. Department of Energy (DOE) grant to study shallow carbonate rock reservoirs. The objectives of the project are to use modern engineering methods to optimize oil field management and to use geological and geophysical data to recover untapped potential within the petroleum reservoirs. The integration of data and techniques from these disciplines has yielded results greater than those achievable without their cooperation. The cost of successfully accomplishing these goals is to be low enough for even small independent operators to afford. This article is a report describing accomplishments for the fiscal year 1998-1999.

  7. An Integrated Study of the Grayburg/San Andres Reservoir, Foster and South Cowden Fields, Ector County, Texas

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, William C.; Trentham, Robert C.; Widner, Kevin; Wienbrandt, Richard

    1999-06-22

    A project to recover economic amounts of oil from a very mature oil field is being conducted by Laguna Petroleum Corporation of Midland, Texas, with partial funding from a U. S. Department of Energy grant to study shallow carbonate rock reservoirs. The objectives of the project are to use modern engineering methods to optimize oil field management and to use geological and geophysical data to recover untapped potential within the petroleum reservoirs. The integration of data and techniques from these disciplines has yielded results greater than those achievable without their cooperation. The cost of successfully accomplishing these goals is to be low enough for even small independent operators to afford. This article is a report describing accomplishments for the fiscal year 1997-1998.

  8. Spatial-temporal variations of natural suitability of human settlement environment in the Three Gorges Reservoir Area—A case study in Fengjie County, China

    Science.gov (United States)

    Luo, Jieqiong; Zhou, Tinggang; Du, Peijun; Xu, Zhigang

    2018-01-01

    With rapid environmental degeneration and socio-economic development, the human settlement environment (HSE) has experienced dramatic changes and attracted attention from different communities. Consequently, the spatial-temporal evaluation of the natural suitability of the human settlement environment (NSHSE) has become essential for understanding the patterns and dynamics of the HSE, and for coordinating sustainable development among regional populations, resources, and environments. This study aims to explore the spatial-temporal evolution of NSHSE patterns in 1997, 2005, and 2009 in Fengjie County in the Three Gorges Reservoir Area (TGRA). A spatially weighted NSHSE model was established by integrating multi-source data (e.g., census data, meteorological data, remote sensing images, DEM data, and GIS data) into one framework, where the Ordinary Least Squares (OLS) linear regression model was applied to calculate the weights of indices in the NSHSE model. Results show that the trend of natural suitability was first downward and then upward, as evidenced by the disparity in NSHSE among the southern, northern, and central areas of Fengjie County. Results also reveal clustered NSHSE patterns for all 30 townships. Meanwhile, NSHSE has a significant influence on population distribution, with 71.49% of the total population living in moderately and highly suitable districts.

  9. dCache, agile adoption of storage technology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [Hamburg U.; Baranova, T. [Hamburg U.; Behrmann, G. [Unlisted, DK; Bernardt, C. [Hamburg U.; Fuhrmann, P. [Hamburg U.; Litvintsev, D. O. [Fermilab; Mkrtchyan, T. [Hamburg U.; Petersen, A. [Hamburg U.; Rossi, A. [Fermilab; Schwank, K. [Hamburg U.

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  10. Cache-aware network-on-chip for chip multiprocessors

    Science.gov (United States)

    Tatas, Konstantinos; Kyriacou, Costas; Dekoulis, George; Demetriou, Demetris; Avraam, Costas; Christou, Anastasia

    2009-05-01

    This paper presents the hardware prototype of a Network-on-Chip (NoC) for a chip multiprocessor that provides support for cache coherence, cache prefetching, and cache-aware thread scheduling. A NoC with support for these cache-related mechanisms can assist in improving system performance by reducing the cache miss ratio. The presented multi-core system employs the Data-Driven Multithreading (DDM) model of execution. In DDM, thread scheduling is done according to data availability, so the system is aware of the threads to be executed in the near future. This characteristic of the DDM model allows for cache-aware thread scheduling and cache prefetching. The NoC prototype is a crossbar switch with output buffering that can support a cache-aware 4-node chip multiprocessor. The prototype is built on the Xilinx ML506 board equipped with a Xilinx Virtex-5 FPGA.

  11. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related to locality, lifetime, and static analyzability of access addresses compared to static or heap allocated data. Therefore, caching of stack allocated data benefits from having its own cache. In this paper we present a cache architecture optimized for stack allocated data. This cache is additional to the normal...

  12. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Cache-oblivious algorithms are designed as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
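A tiny example of the cache-oblivious principle surveyed above: recursive divide-and-conquer eventually produces subproblems that fit in any cache level, without the algorithm ever knowing the cache parameters. The matrix transpose below is a standard textbook illustration of the technique, not code from the paper.

```python
def co_transpose(A, B, r0, r1, c0, c1):
    """Cache-obliviously write the transpose of A[r0:r1, c0:c1] into B.

    Splitting the longer dimension keeps subproblems roughly square, so at
    some recursion depth a tile fits in the cache line / cache level in
    use, whatever its size happens to be.
    """
    if (r1 - r0) * (c1 - c0) <= 16:          # base case: tile is tiny
        for i in range(r0, r1):
            for j in range(c0, c1):
                B[j][i] = A[i][j]
    elif r1 - r0 >= c1 - c0:                 # split the longer dimension
        m = (r0 + r1) // 2
        co_transpose(A, B, r0, m, c0, c1)
        co_transpose(A, B, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        co_transpose(A, B, r0, r1, c0, m)
        co_transpose(A, B, r0, r1, m, c1)

A = [[i * 6 + j for j in range(6)] for i in range(4)]   # 4x6 matrix
B = [[0] * 4 for _ in range(6)]                          # 6x4 result
co_transpose(A, B, 0, 4, 0, 6)
```

In the I/O model this recursion incurs an asymptotically optimal number of block transfers for any block size B and cache size M, which is the "automatically apply to multi-level hierarchies" property the abstract refers to.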

  13. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time; however, since the rise of cloud computing and shared hardware resources, such attacks have found new, potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al. at S&P 2015), which is a cache timing attack against … engineered as part of this work. This is the first time CSSAs for the Skylake architecture are reported. Our attacks demonstrate that cryptographic applications in cloud computing environments using key-dependent tables for acceleration are still vulnerable even on recent architectures, including Skylake...

  14. Concurrent Evaluation of Web Cache Replacement and Coherence Strategies

    NARCIS (Netherlands)

    Belloum, A.S.Z.; Hertzberger, L.O.

    2002-01-01

    When studying Web cache replacement strategies, it is often assumed that documents are static. Such an assumption may not be realistic, especially when large-size caches are considered. Because of the strong correlation between the efficiency of the cache replacement strategy and the real state of...

  15. Design Space Exploration of Object Caches with Cross-Profiling

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Binder, Walter; Villazon, Alex

    2011-01-01

    To avoid data cache thrashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis...

  16. Efficient Context Switching for the Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian; Naji, Amine

    2015-01-01

    Previously, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved...

  17. Testing geopressured geothermal reservoirs in existing wells: Pauline Kraft Well No. 1, Nueces County, Texas. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-01-01

    The Pauline Kraft Well No. 1 was originally drilled to a depth of 13,001 feet and abandoned as a dry hole. The well was re-entered in an effort to obtain a source of GEO² energy for a proposed gasohol manufacturing plant. The well was tested through a 5-inch by 2-3/8 inch annulus. The geological section tested was the Frio-Anderson sand of Mid-Oligocene age. The interval tested was from 12,750 to 12,860 feet. A saltwater disposal well was drilled on the site and completed in a Miocene sand section. The disposal interval was perforated from 4710 to 4770 feet and from 4500 to 4542 feet. The test well failed to produce water at substantial rates. Initial production was 34 BWPD. A large acid stimulation treatment increased productivity to 132 BWPD, still far from an acceptable rate. During the acid treatment, a failure of the 5-inch production casing occurred. The poor production rates are attributed to a reservoir with very low permeability and possible formation damage. The casing failure is related to increased tensile strain resulting from cooling of the casing by acid and from the high surface injection pressure. The location of the casing failure is not known at this time, but it is not at the surface. Failure as a result of a defect in a crossover joint at 723 feet is suspected.

  18. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    Directory of Open Access Journals (Sweden)

    Seungjae Baek

    Full Text Available Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, its performance and reliability degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme.
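The random-probability admission idea can be sketched as below. The parameter names, the fixed admission probability, and the FIFO victim choice are illustrative assumptions for the sketch, not details of ProCache itself; the key property is that a block written n times is admitted with probability 1 - (1 - p)^n, so hot blocks enter almost surely while one-off cold writes usually bypass the cache.

```python
import random

class ProbabilisticCache:
    """Admit a block on a write miss only with probability p.

    Frequently written (hot) blocks are retried on every write and are
    admitted with high probability over time; cold blocks mostly bypass
    the cache, avoiding pollution without any bookkeeping per block.
    """

    def __init__(self, capacity, p=0.25, seed=None):
        self.capacity = capacity
        self.p = p
        self.entries = {}                  # block -> write count (FIFO order)
        self.rng = random.Random(seed)

    def write(self, block):
        if block in self.entries:
            self.entries[block] += 1       # cache hit: absorb the write
            return True
        if self.rng.random() < self.p:     # admission decided by coin flip
            if len(self.entries) >= self.capacity:
                self.entries.pop(next(iter(self.entries)))  # evict oldest
            self.entries[block] = 1
            return True
        return False                       # bypass: write goes to flash

cache = ProbabilisticCache(capacity=8, p=0.25, seed=42)
for _ in range(40):                        # a hot block, written repeatedly,
    cache.write("hot")                     # is admitted w.p. 1 - 0.75**40
```

The appeal, as the abstract notes, is that this needs no reference counters: the coin flip itself performs an implicit frequency test.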

  19. Distributed snow data as a tool to inform water management decisions: Using Airborne Snow Observatory (ASO) at the Hetch Hetchy Reservoir in Yosemite National Park, City and County of San Francisco.

    Science.gov (United States)

    Graham, C. B.

    2016-12-01

    The timing and magnitude of spring snowmelt and runoff are critical in managing reservoirs in the Western United States. The Hetch Hetchy Reservoir in Yosemite National Park provides drinking water for 2.6 million customers in over 30 communities in the San Francisco Bay Area. Power generation from Hetch Hetchy meets the municipal load of the City and County of San Francisco. Water from the Hetch Hetchy Reservoir is also released into the Tuolumne River, supporting critical ecosystems in Yosemite National Park and the Stanislaus National Forest. Better predictions of long-term (seasonal) and short-term (weekly) streamflow allow for more secure water resource planning, earlier power generation, and ecologically beneficial releases from the Reservoir. Hetch Hetchy Reservoir is fed by snow-dominated watersheds in the Sierra Nevada. Better knowledge of snowpack conditions allows for better predictions of inflows at both the seasonal and weekly time scales. The ASO project has provided the managers of Hetch Hetchy Reservoir with high-resolution estimates of total snowpack and snowpack distribution in the 460 mi2 Hetch Hetchy watershed. We show that there is a tight correlation between snowpack estimates and future streamflow, allowing earlier, more confident operational decisions. We also show how distributed SWE estimates were used to develop and test a hydrologic model of the system (PRMS). This model, calibrated directly to snowpack conditions, is shown to correctly simulate snowpack volume and distribution, as well as streamflow patterns.

  20. Corvid caching : insights from a cognitive model

    NARCIS (Netherlands)

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K.

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our...

  1. C-Aware: A Cache Management Algorithm Considering Cache Media Access Characteristic in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhu Xudong

    2013-01-01

    Full Text Available Data congestion and network delay are important factors that affect the performance of cloud computing systems. Using the local disk of computing nodes as a cache can sometimes yield better performance than accessing data through the network. This paper presents a storage cache placement algorithm, C-Aware, which traces the history access information of the cache and the data source, adaptively decides whether to cache data according to cache media characteristics and the current access environment, and achieves good performance under different workloads on the storage server. We implement this algorithm in both simulated and real environments. Our simulation results using OLTP and WebSearch traces demonstrate that C-Aware achieves better adaptability to changes in server workload. Our benchmark results in a real system show that, in the scenario where the size of the local cache is half of the data set, C-Aware gets nearly 80% improvement compared with traditional methods when the server is not busy and still presents comparable performance when there is a high workload on the server side.

  2. Optimal file-bundle caching algorithms for data-grids

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru

    2004-04-24

    The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk or disk cache for servicing jobs' file requests. In this environment, a job can only be serviced if all its file requests are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e. the sets of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments since they are not sensitive to the inter-file dependencies and may hold in the cache non-relevant combinations of files. We present and analyze a new caching algorithm for maximizing the throughput of jobs and minimizing data replacement costs to such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions such as file request distributions, relative cache size, file size distribution, etc. In all these cases, the results show significant improvement as compared with traditional caching algorithms.
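The all-or-nothing property described above, that a job runs only if every file of its bundle is cached, can be captured in a few lines. This is a simplified, hypothetical illustration of bundle-granularity admission, not the paper's actual algorithm.

```python
def admit_bundle(cache, capacity, bundle, sizes):
    """Admit every file of the bundle, or none of it.

    A job can only be serviced when all its files are cached, so caching a
    partial bundle wastes space on files that cannot serve any job; the
    admission decision is therefore made per bundle, not per file.
    """
    needed = sum(sizes[f] for f in bundle if f not in cache)
    used = sum(sizes[f] for f in cache)
    if used + needed > capacity:
        return False            # whole bundle does not fit; admit nothing
    cache.update(bundle)
    return True

cache = set()
sizes = {"a.dat": 2, "b.dat": 3, "c.dat": 4}
admit_bundle(cache, 6, {"a.dat", "b.dat"}, sizes)   # admitted: 5 of 6 used
admit_bundle(cache, 6, {"c.dat"}, sizes)            # rejected: 5 + 4 > 6
```

A popularity-based policy would happily admit c.dat here by evicting one file of the first bundle, leaving the cache full of files that satisfy no job, which is exactly the failure mode the abstract points out.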

  3. Integration of recommender system for Web cache management

    Directory of Open Access Journals (Sweden)

    Pattarasinee Bhattarakosol

    2013-06-01

    Full Text Available Web caching is widely recognised as an effective technique that improves the quality of service over the Internet, such as reduction of user latency and network bandwidth usage. However, this method has limitations due to hardware and the management policies of caches. The Behaviour-Based Cache Management Model (BBCMM) is therefore proposed as an alternative caching architecture model integrating a recommender system. This architecture is a cache grouping mechanism in which browsing characteristics are applied to improve the performance of Internet services. The results indicate that the byte hit rate of the new architecture increases by more than 18% and the delay measurement drops by more than 56%. In addition, a theoretical comparison between the proposed model and traditional cooperative caching models shows a performance improvement of the proposed model in the cache system.

  4. Multi-level Hybrid Cache: Impact and Feasibility

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhe [ORNL; Kim, Youngjae [ORNL; Ma, Xiaosong [ORNL; Shipman, Galen M [ORNL; Zhou, Yuanyuan [University of California, San Diego

    2012-02-01

    Storage class memories, including flash, have been attracting much attention as promising candidates fitting into today's enterprise storage systems. In particular, since the cost and performance characteristics of flash are in-between those of DRAM and hard disks, it has been considered by many studies as a secondary caching layer underneath the main memory cache. However, there has been a lack of studies of the correlation and interdependency between DRAM and flash caching. This paper views this problem as a special form of multi-level caching, and tries to understand the benefits of this multi-level hybrid cache hierarchy. We reveal that significant costs could be saved by using flash to reduce the size of the DRAM cache while maintaining the same performance. We also discuss design challenges of using flash in the caching hierarchy and present potential solutions.
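A minimal sketch of the DRAM-over-flash hierarchy discussed above: lookups fall through from the small fast tier to the larger flash tier, flash hits are promoted, and DRAM victims are demoted rather than discarded. This is an assumed illustration of the two-level structure (both tiers are plain LRU here), not the paper's design.

```python
from collections import OrderedDict

class TwoTierCache:
    """Small DRAM tier backed by a larger flash tier; both LRU-managed."""

    def __init__(self, dram_cap, flash_cap):
        self.dram = OrderedDict()
        self.flash = OrderedDict()
        self.dram_cap, self.flash_cap = dram_cap, flash_cap

    def _put(self, tier, cap, key, value):
        tier[key] = value
        tier.move_to_end(key)
        if len(tier) > cap:
            return tier.popitem(last=False)    # evicted (key, value) pair
        return None

    def get(self, key):
        if key in self.dram:                   # fast-tier hit
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:                  # flash hit: promote to DRAM
            value = self.flash.pop(key)
            demoted = self._put(self.dram, self.dram_cap, key, value)
            if demoted:                        # DRAM victim falls to flash
                self._put(self.flash, self.flash_cap, *demoted)
            return value
        return None                            # miss in both tiers

    def put(self, key, value):
        demoted = self._put(self.dram, self.dram_cap, key, value)
        if demoted:
            self._put(self.flash, self.flash_cap, *demoted)

tiered = TwoTierCache(dram_cap=2, flash_cap=4)
tiered.put("a", 1); tiered.put("b", 2); tiered.put("c", 3)  # "a" demoted
```

The interdependency the paper studies shows up even in this toy: shrinking `dram_cap` pushes more traffic into the flash tier, so the two capacities cannot be tuned in isolation.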

  5. Cache Management of Big Data in Equipment Condition Assessment

    Directory of Open Access Journals (Sweden)

    Ma Yan

    2016-01-01

    Full Text Available A big data platform for equipment condition assessment is built for comprehensive analysis. The platform has various application demands; according to response time, its applications can be divided into offline, interactive, and real-time types. For real-time applications, data processing efficiency is important. In general, data caching is one of the most efficient ways to improve query time. However, big data caching is different from traditional data caching. In this paper we propose a distributed cache management framework of big data for equipment condition assessment. It consists of three parts: a cache structure, a cache replacement algorithm, and a cache placement algorithm. The cache structure is the basis of the latter two algorithms. Based on the framework and algorithms, we make full use of the characteristic that only some valuable data is accessed during a period of time, and place relevant data on neighbouring nodes, which largely reduces network transmission cost. We also validate the performance of our proposed approaches through extensive experiments, demonstrating that the proposed cache replacement algorithm and cache management framework achieve a higher hit rate or lower query time than the LRU and round-robin algorithms.

  6. Seedling Establishment of Coast Live Oak in Relation to Seed Caching by Jays

    Science.gov (United States)

    Joe R. McBride; Ed Norberg; Sheauchi Cheng; Ahmad Mossadegh

    1991-01-01

    The purpose of this study was to simulate the caching of acorns by jays and rodents to see if less costly procedures could be developed for the establishment of coast live oak (Quercus agrifolia). Four treatments [(1) random - single acorn cache, (2) regular - single acorn cache, (3) regular - 5 acorn cache, (4) regular - 10 acorn cache] were planted...

  7. ENHANCE PERFORMANCE OF WEB PROXY CACHE CLUSTER USING CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Najat O. Alsaiari

    2013-12-01

    Full Text Available Web caching is a crucial technology in the Internet because it represents an effective means for reducing bandwidth demands, improving web server availability and reducing network latencies. However, a Web cache cluster, which is a potent solution to enhance a web cache system's capability, still has limited capacity and cannot handle a tremendously high workload. Maximizing resource utilization and system capability is an important problem in Web cache clusters, and it cannot be solved efficiently by load balancing strategies alone. Thus, with the advent of cloud computing, we can use cloud-based proxies to achieve outstanding performance and higher resource efficiency compared to traditional Web proxy cache clusters. In this paper, we propose an architecture for a cloud based Web proxy cache cluster (CBWPCC) and test the effectiveness of the proposed architecture, compared with a traditional one, in terms of response time and resource utilization, using the CloudSim tool.

  8. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in Mobile Ad hoc Networks (MANET). Maintaining the cache is a challenging issue in large MANETs due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the better replaceable data, based on neighbours' interest and the fitness value of cached data, when storing newly arrived data. This work also elects an ideal cluster head (CH) using the metaheuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and node speed increase.
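    The core idea of scoring cached items by neighbours' interest and a fitness value can be sketched as a victim-selection rule (the scoring function, its weights, and all names here are illustrative assumptions; the paper's Memetic Algorithm search is more elaborate):

```python
def choose_victim(cache, neighbor_interest):
    """Pick the cached item to replace: score each entry by its own
    access count plus the demand neighbouring nodes have shown for it,
    then evict the lowest-scoring item.  The 2x weight on neighbour
    interest is an illustrative assumption, not taken from the paper."""
    def fitness(item):
        entry = cache[item]
        return entry["accesses"] + 2 * neighbor_interest.get(item, 0)
    return min(cache, key=fitness)

# Example: d2 is rarely accessed locally but heavily requested by
# neighbours, so it is kept; d3 is the least valuable overall.
cache = {
    "d1": {"accesses": 5},
    "d2": {"accesses": 1},
    "d3": {"accesses": 3},
}
neighbor_interest = {"d2": 4, "d3": 0}
victim = choose_victim(cache, neighbor_interest)
```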

  9. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can be analyzed statically. We present algorithms that derive worst-case bounds on the latency-inducing operations of the stack cache. Their results can be used by a static WCET tool. By breaking the analysis down into subproblems that solve intra-procedural data-flow analysis and path searches on the call-graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per...

  10. The People of Bear Hunter Speak: Oral Histories of the Cache Valley Shoshones Regarding the Bear River Massacre

    OpenAIRE

    Crawford, Aaron L.

    2007-01-01

    The Cache Valley Shoshone are the survivors of the Bear River Massacre, where a battle between a group of U.S. volunteer troops from California and a Shoshone village degenerated into the worst Indian massacre in U.S. history, resulting in the deaths of over 200 Shoshones. The massacre occurred due to increasing tensions over land use between the Shoshones and the Mormon settlers. Following the massacre, the Shoshones attempted settling in several different locations in Box Elder County, eventu...

  11. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    Science.gov (United States)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also stem from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, which arise because cache lines reside in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required. Therefore, a fast and reliable shared memory management system to execute algorithms for processing vast amounts of specimen images is needed. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
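    A toy model of middle insertion with near-distance promotion, inspired by (but not identical to) the proposed MI2PP policy; the insertion point, promotion distance, and class name are assumptions for illustration:

```python
class MiddleInsertPromote:
    """Recency-stack policy sketch: new lines enter at the middle of
    the stack, so a burst of one-time accesses cannot immediately push
    out the working set.  On a hit, a line is promoted only a fixed
    number of positions toward the MRU end (near-distance promotion).
    Eviction always takes the line at the LRU end."""

    def __init__(self, capacity, promote_by=2):
        self.capacity = capacity
        self.promote_by = promote_by
        self.stack = []  # index 0 = MRU end, last index = LRU end

    def access(self, line):
        if line in self.stack:                          # hit: promote a little
            i = self.stack.index(line)
            self.stack.pop(i)
            self.stack.insert(max(0, i - self.promote_by), line)
            return True
        if len(self.stack) >= self.capacity:            # miss: evict LRU line
            self.stack.pop()
        self.stack.insert(len(self.stack) // 2, line)   # insert mid-stack
        return False
```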

  12. Reservoir characterization and final pre-test analysis in support of the compressed-air-energy-storage Pittsfield aquifer field test in Pike County, Illinois

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, L.E.; McCann, R.A.

    1983-06-01

    The work reported is part of a field experimental program to demonstrate and evaluate compressed air energy storage in a porous media aquifer reservoir near Pittsfield, Illinois. The reservoir is described. Numerical modeling of the reservoir was performed concurrently with site development. The numerical models were applied to predict the thermohydraulic performance of the porous media reservoir. This reservoir characterization and pre-test analysis included evaluation of bubble development, water coning, thermal development, and near-wellbore desaturation. The work was undertaken to define the time required to develop an air storage bubble of adequate size, to assess the specification of instrumentation and above-ground equipment, and to develop and evaluate operational strategies for air cycling. A parametric analysis was performed for the field test reservoir. (LEW)

  13. Hydrogeology and geochemistry of acid mine drainage in ground water in the vicinity of Penn Mine and Camanche Reservoir, Calaveras County, California; first-year summary

    Science.gov (United States)

    Hamlin, S.N.; Alpers, Charles N.

    1995-01-01

    Acid drainage from the Penn Mine in Calaveras County, California, has caused contamination of ground water between Mine Run Dam and Camanche Reservoir. The Penn Mine was first developed in the 1860's primarily for copper and later produced lesser amounts of zinc, lead, silver, and gold from steeply dipping massive sulfide lenses in metamorphic rocks. Surface disposal of sulfidic waste rock and tailings from mine operations has produced acidic drainage with pH values between 2.3 and 2.7 and elevated concentrations of sulfate and metals, including copper, zinc, cadmium, iron, and aluminum. During the mine's operation and after its subsequent abandonment in the late 1950's, acid mine drainage flowed down Mine Run into the Mokelumne River. Construction of Camanche Dam in 1963 flooded part of the Mokelumne River adjacent to Penn Mine. Surface-water diversions and unlined impoundments were constructed at Penn Mine in 1979 to reduce runoff from the mine, collect contaminated surface water, and enhance evaporation. Some of the contaminated surface water infiltrates the ground water and flows toward Camanche Reservoir. Ground-water flow in the study area is controlled by the local hydraulic gradient and the hydraulic characteristics of two principal rock types, a Jurassic metavolcanic unit and the underlying Salt Spring slate. The hydraulic gradient is west from Mine Run impoundment toward Camanche Reservoir. The median hydraulic conductivity was about 10 to 50 times higher in the metavolcanic rock (0.1 foot per day) than in the slate (0.002 to 0.01 foot per day); most flow occurs in the metavolcanic rock where hydraulic conductivity is as high as 50 feet per day in two locations. The contact between the two rock units is a fault plane that strikes N20°W, dips 20°NE, and is a likely conduit for ground-water flow, based on down-hole measurements with a heatpulse flowmeter. Analyses of water samples collected during April 1992 provide a comprehensive characterization of

  14. Hydrogeology and groundwater quality at monitoring wells installed for the Tunnel and Reservoir Plan System and nearby water-supply wells, Cook County, Illinois, 1995–2013

    Science.gov (United States)

    Kay, Robert T.

    2016-04-04

    Groundwater-quality data collected from 1995 through 2013 from 106 monitoring wells open to the base of the Silurian aquifer surrounding the Tunnel and Reservoir Plan (TARP) System in Cook County, Illinois, were analyzed by the U.S. Geological Survey, in cooperation with the Metropolitan Water Reclamation District of Greater Chicago, to assess the efficacy of the monitoring network and the effects of water movement from the tunnel system to the surrounding aquifer. Groundwater from the Silurian aquifer typically drains to the tunnel system, so that analyte concentrations in most of the samples from most of the monitoring wells primarily reflect the concentration of the analyte in the nearby Silurian aquifer. Water quality in the Silurian aquifer is spatially variable because of a variety of natural and non-TARP anthropogenic processes. Therefore, the trends in analyte values at a given well from 1995 through 2013 are primarily a reflection of the spatial variation in the value of the analyte in groundwater within that part of the Silurian aquifer draining to the tunnels. Intermittent drainage of combined sewer flow from the tunnel system to the Silurian aquifer when flow in the tunnel system is greater than 80 million gallons per day may affect water quality in some nearby monitoring wells. Intermittent drainage of combined sewer flow from the tunnel system to the Silurian aquifer appears to affect the values of electrical conductivity, hardness, sulfate, chloride, dissolved organic carbon, ammonia, and fecal coliform in samples from many wells but typically during less than 5 percent of the sampling events. Drainage of combined sewer flow into the aquifer is most prevalent in the downstream parts of the tunnel systems because of the hydraulic pressures elevated above background values and long residence time of combined sewer flow in those areas.
Elevated values of the analytes emplaced during intermittent migration of combined sewer flow into the Silurian aquifer

  15. Clark’s Nutcrackers (Nucifraga columbiana) Flexibly Adapt Caching Behaviour to a Cooperative Context

    Directory of Open Access Journals (Sweden)

    Dawson Clary

    2016-10-01

    Full Text Available Corvids recognize when their caches are at risk of being stolen by others and have developed strategies to protect these caches from pilferage. For instance, Clark’s nutcrackers will suppress the number of caches they make if being observed by a potential thief. However, cache protection has most often been studied using competitive contexts, so it is unclear whether corvids can adjust their caching in beneficial ways to accommodate non-competitive situations. Therefore, we examined whether Clark’s nutcrackers, a non-social corvid, would flexibly adapt their caching behaviours to a cooperative context. To do so, birds were given a caching task during which caches made by one individual were reciprocally exchanged for the caches of a partner bird over repeated trials. In this scenario, if caching behaviours can be flexibly deployed, then the birds should recognize the cooperative nature of the task and maintain or increase caching levels over time. However, if cache protection strategies are applied independent of social context and simply in response to cache theft, then cache suppression should occur. In the current experiment, we found that the birds maintained caching throughout the experiment. We report that males increased caching in response to a manipulation in which caches were artificially added, suggesting the birds could adapt to the cooperative nature of the task. Additionally, we show that caching decisions were not solely due to motivational factors, instead showing an additional influence attributed to the behaviour of the partner bird.

  16. Instant Varnish Cache how-to

    CERN Document Server

    Moutinho, Roberto

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. Easy-to-follow, step-by-step recipes will get you started with Varnish Cache, and practical examples will help you get set up quickly and easily. This book is aimed at system administrators and web developers who need to scale websites without throwing money at a large and costly infrastructure. It's assumed that you have some knowledge of the HTTP protocol, of how browsers and servers communicate with each other, and of basic Linux systems.

  17. Compiler-directed cache management in multiprocessors

    Science.gov (United States)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  18. Best practice for caching of single-path code

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Cilku, Bekim; Prokesch, Daniel

    2017-01-01

    Single-path code has some unique properties that make it interesting to explore different caching and prefetching alternatives for the stream of instructions. In this paper, we explore different cache organizations and how they perform with single-path code....

  20. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    primitives. In this paper, we give a cache timing cryptanalysis of stream ciphers using word-based linear feedback shift registers (LFSRs), such as Snow, Sober, Turing, or Sosemanuk. Fast implementations of such ciphers use tables that can be the target for a cache timing attack. Assuming that a small number...

  1. Enable Cache Effect on Forwarding Table in Metro-Ethernet

    Science.gov (United States)

    Sun, Xiaocui; Wang, Zhijun

    Broadcast-based Address Resolution Protocol (ARP) is a major challenge for deploying Ethernet in Metropolitan Area Networks (MAN). This paper proposes to enable a Cache effect on the Forwarding Table (CFT) in Metro Ethernet. CFT can eliminate numerous broadcast messages by resolving addresses from cached entries. The simulation results show that the proposed scheme can significantly decrease the number of messages needed for address resolution in Metro Ethernet.
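    The cache-before-broadcast idea behind CFT can be sketched as follows (the class, the authoritative map, and the broadcast counter are illustrative, not the paper's mechanism):

```python
class ArpCachingNode:
    """Toy model of resolving addresses from cached forwarding-table
    entries: a lookup is answered locally when possible, and a
    broadcast ARP request is flooded only on a cache miss."""

    def __init__(self, network):
        self.network = network   # authoritative IP -> MAC map (stand-in for the LAN)
        self.cache = {}          # learned entries
        self.broadcasts = 0      # how many floods we had to send

    def resolve(self, ip):
        if ip in self.cache:         # answered from cache, no broadcast
            return self.cache[ip]
        self.broadcasts += 1         # cache miss: flood an ARP request
        mac = self.network[ip]
        self.cache[ip] = mac         # learn the reply for next time
        return mac
```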

  2. Evidence for cache surveillance by a scatter-hoarding rodent

    NARCIS (Netherlands)

    Hirsch, B.T.; Kays, R.; Jansen, P.A.

    2013-01-01

    The mechanisms by which food-hoarding animals are capable of remembering the locations of numerous cached food items over long time spans have been the focus of intensive research. The ‘memory enhancement hypothesis’ states that hoarders reinforce spatial memory of their caches by repeatedly

  3. Smart Caching for Efficient Information Sharing in Distributed Information Systems

    Science.gov (United States)

    2008-09-01

    Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy (1997), “Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot...; Danzig, Chuck Neerdaels, Michael Schwartz and Kurt Worrell (1996), “A Hierarchical Internet Object Cache,” in USENIX Proceedings, 1996.
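    The consistent-hashing protocol cited here can be sketched as a minimal hash ring: keys map to the first cache server clockwise on the ring, so removing one server only remaps the keys in its arc (the replica count and server names are illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hashing ring in the spirit of Karger et al.
    (1997).  Each server is placed at many pseudo-random points on the
    ring; a key is served by the first server point at or after the
    key's hash, wrapping around."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = []           # sorted list of (hash, server) points
        for s in servers:
            self.add(s)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._hash(f"{server}#{i}"), server))

    def remove(self, server):
        self.ring = [p for p in self.ring if p[1] != server]

    def lookup(self, key):
        h = self._hash(key)
        i = bisect.bisect(self.ring, (h, ""))
        return self.ring[i % len(self.ring)][1]   # wrap past the top
```

    Only the keys previously owned by a removed server move; everything else keeps its assignment, which is exactly the property that makes the scheme attractive for distributed caches.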

  4. A Cache-Optimal Alternative to the Unidirectional Hierarchization Algorithm

    DEFF Research Database (Denmark)

    Hupp, Philipp; Jacob, Riko

    2016-01-01

    of the cache misses by a factor of d compared to the unidirectional algorithm which is the common standard up to now. The new algorithm is also optimal in the sense that the leading term of the cache misses is reduced to scanning complexity, i.e., every degree of freedom has to be touched once. We also present...

  5. Cache-mesh, a Dynamic Data Structure for Performance Optimization

    DEFF Research Database (Denmark)

    Nguyen, Tuan T.; Dahl, Vedrana Andersen; Bærentzen, J. Andreas

    2017-01-01

    This paper proposes the cache-mesh, a dynamic mesh data structure in 3D that allows modifications of stored topological relations effortlessly. The cache-mesh can adapt to arbitrary problems and provide fast retrieval to the most-referred-to topological relations. This adaptation requires trivial...

  6. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system.
It can be argued that although some of the important
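    The unit-stride guidance above can be illustrated by contrasting two traversal orders over a row-major matrix. Python's list-of-lists layout only approximates the memory behavior described (a C array would show the cache effect directly), so this sketch is about the access patterns, not a benchmark:

```python
# Row-major storage: matrix[i][j] and matrix[i][j+1] are adjacent, so the
# row-order loop walks memory with unit stride and reuses each fetched
# cache line, while the column-order loop strides across whole rows and
# touches a different line on nearly every access.
N = 512
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_order(m):
    """Cache-friendly: unit stride within each row."""
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_column_order(m):
    """Cache-hostile: stride of one full row between accesses."""
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total
```

    Both loops compute the same result; on cache-based hardware only the first exploits spatial locality.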

  7. INTELLIGENT CACHE FARMING ARCHITECTURE WITH THE RECOMMENDER SYSTEM

    Directory of Open Access Journals (Sweden)

    S. HIRANPONGSIN

    2009-06-01

    Full Text Available The Quality of Service (QoS) guaranteed by Internet Service Providers (ISPs) is an important factor in users' satisfaction with the Internet. Web proxy caching has been implemented to support this objective and also to support organizations' security procedures. However, success in guaranteeing the QoS of each ISP depends on the cache size and an efficient caching policy. This paper proposes a new architecture of cache farming with the recommender system concept to manage users' requirements. This solution helps reduce retrieval time and increase the hit rate as the number of users increases, without expanding the size of the caches in the farm.

  8. Experimental Results of Rover-Based Coring and Caching

    Science.gov (United States)

    Backes, Paul G.; Younse, Paulo; DiCicco, Matthew; Hudson, Nicolas; Collins, Curtis; Allwood, Abigail; Paolini, Robert; Male, Cason; Ma, Jeremy; Steele, Andrew

    2011-01-01

    Experimental results are presented for experiments performed using a prototype rover-based sample coring and caching system. The system consists of a rotary percussive coring tool on a five degree-of-freedom manipulator arm mounted on a FIDO-class rover and a sample caching subsystem mounted on the rover. Coring and caching experiments were performed in a laboratory setting and in a field test at Mono Lake, California. Rock abrasion experiments using an abrading bit on the coring tool were also performed. The experiments indicate that the sample acquisition and caching architecture is viable for use in a 2018 timeframe Mars caching mission and that rock abrasion using an abrading bit may be feasible in place of a dedicated rock abrasion tool.

  9. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  10. Ensemble Flow Forecasts for Risk Based Reservoir Operations of Lake Mendocino in Mendocino County, California: A Framework for Objectively Leveraging Weather and Climate Forecasts in a Decision Support Environment

    Science.gov (United States)

    Delaney, C.; Hartman, R. K.; Mendoza, J.; Whitin, B.

    2017-12-01

    Forecast informed reservoir operations (FIRO) is a methodology that incorporates short to mid-range precipitation and flow forecasts to inform the flood operations of reservoirs. The Ensemble Forecast Operations (EFO) alternative is a probabilistic approach of FIRO that incorporates ensemble streamflow predictions (ESPs) made by NOAA's California-Nevada River Forecast Center (CNRFC). With the EFO approach, release decisions are made to manage forecasted risk of reaching critical operational thresholds. A water management model was developed for Lake Mendocino, a 111,000 acre-foot reservoir located near Ukiah, California, to evaluate the viability of the EFO alternative to improve water supply reliability but not increase downstream flood risk. Lake Mendocino is a dual use reservoir, which is owned and operated for flood control by the United States Army Corps of Engineers and is operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has suffered from water supply reliability issues since 2007. The EFO alternative was simulated using a 26-year (1985-2010) ESP hindcast generated by the CNRFC. The ESP hindcast was developed using Global Ensemble Forecast System version 10 precipitation reforecasts processed with the Hydrologic Ensemble Forecast System to generate daily reforecasts of 61 flow ensemble members for a 15-day forecast horizon. Model simulation results demonstrate that the EFO alternative may improve water supply reliability for Lake Mendocino yet not increase flood risk for downstream areas. The developed operations framework can directly leverage improved skill in the second week of the forecast and is extendable into the S2S time domain given the demonstration of improved skill through a reliable reforecast of adequate historical duration and consistent with operationally available numerical weather predictions.

  11. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in a distributed environment has found application in many fields of science (Nuclear and Particle Physics (NPP), astronomy and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing knowledge of data provenance as well as data placed in the transfer cache, further expanding the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one delivers the best performance in data access under realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001–90% of the complete data-set and a low-watermark within 0–90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of these studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storages with partial replicas of the entire data-set
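    A small simulation of the kind described, replaying an access trace through an LRU cache to estimate hit rates at different cache sizes (the Zipf-like synthetic demand below is an assumed stand-in for the real access records the study analyzes):

```python
from collections import OrderedDict
import random

def lru_hit_rate(trace, cache_size):
    """Replay an access trace through an LRU cache and return the
    fraction of accesses served from cache."""
    cache, hits = OrderedDict(), 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)          # refresh recency
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)    # evict least recently used
    return hits / len(trace)

# Skewed synthetic demand: a few popular data-sets dominate accesses.
random.seed(1)
population = list(range(1000))
weights = [1 / (rank + 1) for rank in range(1000)]   # Zipf-like skew
trace = random.choices(population, weights=weights, k=20000)

small = lru_hit_rate(trace, 10)
large = lru_hit_rate(trace, 200)
```

    Because LRU satisfies the stack (inclusion) property, the hit rate can only grow with cache size, which the two measurements illustrate.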

  12. National Dam Safety Program. Cobbs Hill Reservoir Dam (Inventory Number NY 1448), Lake Ontario Basin, Monroe County, New York. Phase I Inspection Report,

    Science.gov (United States)

    1981-06-30

    Reservoir are horizontally lying dolostones of the Lockport Group of Upper Silurian age. Depth to bedrock is unknown. The reservoir is sited on Cobbs Hill...

  13. Improving data caching for software MPEG video decompression

    Science.gov (United States)

    Feng, Wu-chi; Sechrest, Stuart

    1996-03-01

    Software implementations of MPEG decompression provide flexibility at low cost but suffer performance problems, including poor cache behavior. For MPEG video, decompressing the video in the implied order does not take advantage of coherence generated by dependent macroblocks and, therefore, undermines the effectiveness of processor caching. In this paper, we investigate the caching performance gain which is available to algorithms that use different traversal algorithms to decompress these MPEG streams. We have found that the total cache miss rate can be reduced considerably at the expense of a small increase in instructions. To show the potential gains available, we have implemented the different traversal algorithms using the standard Berkeley MPEG player. Without optimizing the MPEG decompression code itself, we are able to obtain better cache performance for the traversal orders examined. In one case, faster decompression rates are achieved by making better use of processor caching, even though additional overhead is introduced to implement the different traversal algorithm. With better instruction-level support in future architectures, low cache miss rates will be crucial for the overall performance of software MPEG video decompression.

  14. Interleaved sectored caches: reconciling low tag volume and low miss ratio

    OpenAIRE

    Seznec , André

    1993-01-01

    Sectored caches have been used for many years in order to reduce the tag volume needed in a cache. In a sectored cache, a single address tag is associated with a sector consisting of several cache lines, while validity, dirty and coherency tags are associated with each of the inner cache lines. Using a sectored cache is a design trade-off between the low volume of cache tags allowed by a large line size and the low memory traffic induced by using a small line size. This technique has been used i...

  15. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in users' logs. We extract the highest-ranked queries of users for the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the two-level cache achieves a better hit rate, efficiency, and time consumption than other cache structures.
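    The static-plus-dynamic structure can be sketched as follows (the class, the sizes, and the choice of LRU for the dynamic level are illustrative assumptions, not the paper's exact design):

```python
from collections import Counter, OrderedDict

class TwoLevelQueryCache:
    """Two-level query cache sketch: a static cache pre-filled with
    the most popular queries from a historical log, backed by a small
    dynamic LRU cache that adapts to the current query stream."""

    def __init__(self, log, static_size, dynamic_size):
        top = Counter(log).most_common(static_size)
        self.static = {q for q, _ in top}    # never evicted
        self.dynamic = OrderedDict()         # LRU-managed auxiliary level
        self.dynamic_size = dynamic_size

    def lookup(self, query):
        """Return True on a cache hit (either level), False on a miss.
        A missed query is admitted to the dynamic level."""
        if query in self.static:
            return True
        if query in self.dynamic:
            self.dynamic.move_to_end(query)
            return True
        self.dynamic[query] = True
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)
        return False
```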

  16. Cache-Oblivious R-trees

    DEFF Research Database (Denmark)

    Arge, Lars; de Berg, Mark; Haverkort, Herman

    2009-01-01

    -oblivious R-tree with provable performance guarantees. If no point in the plane is contained in B or more rectangles in S, the structure answers a rectangle query using O(√(N/B) + T/B) memory transfers and a point query using O((N/B)^ε) memory transfers for any ε > 0, where B is the block size of memory...... transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure....

  17. Cache-Oblivious R-trees

    DEFF Research Database (Denmark)

    Arge, Lars; de Berg, Mark; Haverkort, Herman

    2005-01-01

    -oblivious R-tree with provable performance guarantees. If no point in the plane is contained in B or more rectangles in S, the structure answers a rectangle query using O(√(N/B) + T/B) memory transfers and a point query using O((N/B)^ε) memory transfers for any ε > 0, where B is the block size of memory...... transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure....

  18. Cache and memory hierarchy design a performance directed approach

    CERN Document Server

    Przybylski, Steven A

    1991-01-01

    An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of ca

  19. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System.

    Science.gov (United States)

    Xiong, Lian; Yang, Liu; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-14

    The replica strategy in a distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high-popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache nodes for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay.
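
    The mining-and-placement step might be sketched as follows (a loose illustration under our own assumptions; RSSD's actual popularity metric and node-selection policy are not reproduced here):

```python
from collections import Counter
from itertools import combinations

def plan_replicas(access_log, nodes, top_k=2, co_threshold=2):
    """access_log: list of (user, [files accessed in one session])."""
    popularity = Counter(f for _, files in access_log for f in files)
    hot = {f for f, _ in popularity.most_common(top_k)}
    # Files that co-occur with a hot file often enough get replicated too.
    co = Counter()
    for _, files in access_log:
        for a, b in combinations(sorted(set(files)), 2):
            co[(a, b)] += 1
    associated = {b if a in hot else a
                  for (a, b), n in co.items()
                  if n >= co_threshold and (a in hot) != (b in hot)}
    # Round-robin placement onto cache nodes (a stand-in for the paper's
    # node-selection policy, which we do not reproduce here).
    plan = {}
    for i, f in enumerate(sorted(hot | associated)):
        plan[f] = nodes[i % len(nodes)]
    return plan

log = [("u1", ["a", "b"]), ("u2", ["a", "b"]),
       ("u3", ["a", "c"]), ("u4", ["a", "c"]), ("u5", ["d"])]
print(plan_replicas(log, ["node1", "node2"]))
```

    Here "a" and "b" are replicated for popularity and "c" because it frequently co-occurs with the hot file "a", which mirrors the popularity-plus-association mining the abstract describes.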

  20. A distributed storage system with dCache

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fuhrmann, Patrick; Grønager, Michael

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number...... of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network...

  1. Método y sistema de modelado de memoria cache [Method and system for cache memory modeling]

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2010-01-01

    A method for modeling the data cache memory of a target processor, in order to simulate the behavior of said data cache memory during the execution of software code on a platform comprising said target processor, where said simulation is carried out on a native platform whose processor differs from the target processor containing the data cache memory to be modeled, and where said modeling is performed by executing on said native platform...

  2. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    Science.gov (United States)

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache node for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897

  3. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand" requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  4. The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications

    Science.gov (United States)

    2015-10-16

    The Spy in the Sandbox: Practical Cache Attacks in JavaScript and their Implications. Yossef Oren, Vasileios P. Kemerlis, Simha Sethumadhavan, Angelos D... General Terms: Languages, Measurement, Security. Keywords: side-channel attacks; cache-timing attacks; JavaScript-based cache attacks; covert... more detail in Section 3, executes a JavaScript-based cache attack, which lets the attacker track accesses to the victim's last-level cache over

  5. An integrated study of the Grayburg/San Andres reservoir, Foster and south Cowden fields, Ector County, Texas. Quarterly report, January 1--March 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Trentham, R.C.; Weinbrandt, R.; Reeves, J.J.

    1996-06-17

    The principal objective of this research is to demonstrate in the field that 3D seismic data can be used to aid in identifying porosity zones, permeability barriers and thief zones and thereby improve waterflood design. Geologic and engineering data will be integrated with the geophysical data to result in a detailed reservoir characterization. Reservoir simulation will then be used to determine infill drilling potential and the optimum waterflood design for the project area. This design will be implemented and the success of the waterflood evaluated.

  6. The Effect of Garbage Collection on Cache Performance

    National Research Council Canada - National Science Library

    Zorn, Benjamin

    1991-01-01

    .... This paper describes the use of trace-driven simulation to estimate the effect of garbage collection algorithms on cache performance. Traces from four large Common Lisp programs have been collected...

  7. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN Accelerator complex. Continuous growth in the number of users and the amount of processed data result in the requirement of high scalability. Our current priority is to move towards a distributed and properly load balanced set of services based on containers. The aim of this project is to implement the generic caching mechanism applicable to our services and chosen architecture. The project will at first require research about the different aspects of distributed caching (persistence, no gc-caching, cache consistency etc.) and the available technologies followed by the implementation of the chosen solution. In order to validate the correctness and performance of the implementation in the last phase of the project it will be required to implement a monitoring layer and integrate it with the current ELK stack.

  8. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions...... about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present...... two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks...
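
    The prefix-sums building block mentioned above follows the classic three-phase pattern, in which each core scans a contiguous, cache-sized block privately. This sequential sketch mimics p cores for clarity; it is our illustration of the pattern, not the paper's algorithm.

```python
def parallel_prefix_sums(a, p=4):
    n = len(a)
    chunk = -(-n // p)                      # ceiling division
    blocks = [a[i:i + chunk] for i in range(0, n, chunk)]
    # Phase 1: each "core" scans its own block (good private-cache locality).
    local = []
    for b in blocks:
        s, out = 0, []
        for x in b:
            s += x
            out.append(s)
        local.append(out)
    # Phase 2: exclusive scan over the per-block totals (tiny shared step).
    offsets, run = [], 0
    for out in local:
        offsets.append(run)
        run += out[-1] if out else 0
    # Phase 3: each "core" adds its offset to its block, again privately.
    return [x + off for out, off in zip(local, offsets) for x in out]

print(parallel_prefix_sums([1, 2, 3, 4, 5, 6, 7, 8], p=2))
# → [1, 3, 6, 10, 15, 21, 28, 36]
```

    Phases 1 and 3 touch only each core's private block, so inter-processor communication is confined to the small phase-2 scan, which is the kind of structure the private-cache CMP model rewards.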

  9. Co-Designed Cache Coherency Architecture for Embedded Multicore Systems

    OpenAIRE

    Marandola, Jussara; Cudennec, Loïc

    2011-01-01

    International audience; One of the key challenges in chip multi-processing is to provide a programming model that manages cache coherency in a transparent and efficient way. A large number of applications designed for embedded systems are known to read and write data following memory access patterns. Memory access patterns can be used to optimize cache consistency by prefetching data and reducing the number of memory transactions. In this paper, we present the round-robin method applied to ba...

  10. Effects of Cache Valley Particulate Matter on Human Lung Cells

    OpenAIRE

    Watterson, Todd L.

    2012-01-01

    During wintertime temperature inversion episodes the concentrations of particulate air pollution, also defined as particulate matter (PM), in Utah’s Cache Valley have often been highest in the nation, with concentrations surpassing more populated and industrial areas. This has attracted much local and national attention to the area and its pollution. The Cache Valley has recently been declared to be in non-attainment of provisions of Federal law bringing to bear Federal regulatory attention a...

  11. Cliff swallows Petrochelidon pyrrhonota as bioindicators of environmental mercury, Cache Creek Watershed, California

    Science.gov (United States)

    Hothem, Roger L.; Trejo, Bonnie S.; Bauer, Marissa L.; Crayon, John J.

    2008-01-01

    To evaluate mercury (Hg) and other element exposure in cliff swallows (Petrochelidon pyrrhonota), eggs were collected from 16 sites within the mining-impacted Cache Creek watershed, Colusa, Lake, and Yolo counties, California, USA, in 1997-1998. Nestlings were collected from seven sites in 1998. Geometric mean total Hg (THg) concentrations ranged from 0.013 to 0.208 µg/g wet weight (ww) in cliff swallow eggs and from 0.047 to 0.347 µg/g ww in nestlings. Mercury detected in eggs generally followed the spatial distribution of Hg in the watershed based on proximity to both anthropogenic and natural sources. Mean Hg concentrations in samples of eggs and nestlings collected from sites near Hg sources were up to five and seven times higher, respectively, than in samples from reference sites within the watershed. Concentrations of other detected elements, including aluminum, beryllium, boron, calcium, manganese, strontium, and vanadium, were more frequently elevated at sites near Hg sources. Overall, Hg concentrations in eggs from Cache Creek were lower than those reported in eggs of tree swallows (Tachycineta bicolor) from highly contaminated locations in North America. Total Hg concentrations were lower in all Cache Creek egg samples than adverse effects levels established for other species. Total Hg concentrations in bullfrogs (Rana catesbeiana) and foothill yellow-legged frogs (Rana boylii) collected from 10 of the study sites were both positively correlated with THg concentrations in cliff swallow eggs. Our data suggest that cliff swallows are reliable bioindicators of environmental Hg. © Springer Science+Business Media, LLC 2007.

  12. An investigation of DUA caching strategies for public key certificates

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Terry Ching [Univ. of California, Davis, CA (United States)

    1993-11-01

    Internet Privacy Enhanced Mail (PEM) provides security services to users of Internet electronic mail. PEM is designed with the intention that it will eventually obtain public key certificates from the X.500 directory service. However, such a capability is not present in most PEM implementations today. While the prevalent PEM implementation uses a public key certificate-based strategy, certificates are mostly distributed via e-mail exchanges, which raises several security and performance issues. In this thesis research, we changed the reference PEM implementation to make use of the X.500 directory service instead of local databases for public key certificate management. The thesis discusses some problems with using the X.500 directory service, explores the relevant issues, and develops an approach to address them. The approach makes use of a memory cache to store public key certificates. We implemented a centralized cache server and addressed the denial-of-service security problem that is present in the server. In designing the cache, we investigated several cache management strategies. One result of our study is that the use of a cache significantly improves performance. Our research also indicates that security incurs extra performance cost. Different cache replacement algorithms do not seem to yield significant performance differences, while delaying dirty-writes to the backing store does improve performance over immediate writes.
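
    The delayed dirty-write result can be illustrated with a small write-back LRU cache (names and structure are hypothetical, not the thesis code): an updated certificate is written to the backing store only when it is evicted, so repeated updates to a hot entry collapse into a single store write.

```python
from collections import OrderedDict

class CertCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store       # stand-in for the X.500 directory
        self.cache = OrderedDict()       # name -> (certificate, dirty?)
        self.store_writes = 0

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)
            return self.cache[name][0]
        cert = self.store[name]          # cache miss: fetch from directory
        self._insert(name, cert, dirty=False)
        return cert

    def put(self, name, cert):
        """Update a certificate; the write-back is deferred until eviction."""
        self._insert(name, cert, dirty=True)

    def _insert(self, name, cert, dirty):
        self.cache[name] = (cert, dirty)
        self.cache.move_to_end(name)
        if len(self.cache) > self.capacity:
            old, (old_cert, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:                # the delayed dirty-write happens here
                self.store[old] = old_cert
                self.store_writes += 1

store = {"alice": "certA", "bob": "certB", "carol": "certC"}
c = CertCache(2, store)
c.put("alice", "certA2")     # dirty, but no store write yet
c.put("alice", "certA3")     # overwrites in cache: one write saved
c.get("bob")
c.get("carol")               # evicts alice -> single deferred write
print(store["alice"], c.store_writes)   # certA3 1
```

    With immediate writes the two `put` calls would have cost two directory writes; deferring until eviction costs one, which is the performance effect the thesis reports.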

  13. Testing geopressured geothermal reservoirs in existing wells. Final report: Saldana well No. 2, Zapata County, Texas. Volume II. Well test data

    Energy Technology Data Exchange (ETDEWEB)

    The following are included: field test data, compiled and edited raw data, time/pressure data, tentative method of testing for hydrogen sulfide in natural gas using length of stain tubes, combined sample log, report on reservoir fluids study, well test analysis, smoothing with weighted moving averages, chemical analysis procedures, scale monitoring report, sand detector strip charts, and analyses of water and gas samples. (MHR)

  14. Megafloods and Clovis cache at Wenatchee, Washington

    Science.gov (United States)

    Waitt, Richard B.

    2016-05-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia for centuries more, until it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  15. dCache on Steroids - Delegated Storage Solutions

    Science.gov (United States)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  16. Copper avoidance and mortality of juvenile brown trout (Salmo trutta) in tests with copper-sulfate-treated water from West Branch Reservoir, Putnam County, New York

    Science.gov (United States)

    Baldigo, Barry P.; Baudanza, T.P.

    2001-01-01

    Copper-avoidance tests and acute-toxicity (mortality) tests on hatchery-reared, young-of-the-year brown trout (Salmo trutta) were conducted with water from West Branch Reservoir to assess the avoidance response to copper sulfate treatment, which is used occasionally by New York City Department of Environmental Protection to decrease phytoplankton populations in the reservoir. Avoidance-test results indicate that juvenile brown trout tend to avoid dissolved copper concentrations greater than about 55 μg/L (micrograms per liter), which is the approximate avoidance-response threshold. The mean net avoidance response of brown trout to dissolved copper concentrations of 70 and 100 μg/L, and possibly 80 μg/L, was significantly different (at α = 0.1) from the mean net avoidance response of fish to control (untreated) water and to treated water at most other tested concentrations. Mortality-test results indicate that the 96-hr median lethal concentration (LC50) of dissolved copper was 61.5 μg/L. All (100 percent) of the brown trout died at a dissolved copper concentration of 85 μg/L, many died at concentrations of 62 μg/L and 70 μg/L, and none died in the control waters (7 μg/L) or at concentrations of 10, 20, or 45 μg/L. The estimated concentration of dissolved copper that caused fish mortality (threshold) was 53.5 μg/L, virtually equivalent to the avoidance-response threshold. Additional factors that could affect the copper-avoidance and mortality response of individual brown trout and their populations in West Branch Reservoir include seasonal variations in certain water-quality parameters, copper-treatment regimes, natural fish distributions during treatment, and increased tolerance due to acclimation. These warrant additional study before the findings from this study can be used to predict the effects that copper sulfate treatments have on resident fish populations in New York City reservoirs.

  17. The benefits of a synergistic approach to reservoir characterization and proration Rose City Prairie Du Chien Gas field, Ogemaw County, Michigan

    International Nuclear Information System (INIS)

    Tinker, C.N.; Chambers, L.D.; Ritch, H.J.; McRae, C.D.; Keen, M.A.

    1991-01-01

    This paper reports on the proration of gas fields in Michigan, which is regulated by the Michigan Public Service Commission (MPSC). Unlike commissions in other states, the MPSC determines allowables for the purpose of allocating reserves. Therefore, exemplary reservoir characterization is essential to ensure that each party receives, as far as can be practicably determined, an equitable share. SWEPI's Central Division Management recognizes the reality of the Michigan regulatory arena as well as the principles and value of effective leadership and teamwork. Accordingly, to better understand Rose City, a multi-disciplinary team was formed to analyze the extensive database, to prorate the field appropriately, and to establish and maintain maximum acceptable production rates.

  18. Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience

    Directory of Open Access Journals (Sweden)

    Feng Li

    2018-01-01

    Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular content is cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop scope of caching information table for caching replacement in cases where there is not enough cache resource available within their own space. After receiving a caching request, every caching node determines the weight of the required contents and provides a response according to the availability of its own caching space. Furthermore, to increase the caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different caching allocation strategies are devised to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.
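
    The one-hop cooperative lookup might look roughly like this (our own structure, with a simple lowest-weight replacement policy standing in for the paper's scheme):

```python
class EdgeNode:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.cache = {}                  # content -> weight (popularity)
        self.neighbors = []              # one-hop scope

    def table(self):
        """The caching-information table this node advertises to neighbours."""
        return set(self.cache)

    def request(self, content, weight):
        if content in self.cache:
            return f"{self.name}: local hit"
        for nb in self.neighbors:        # one-hop cooperative lookup
            if content in nb.table():
                return f"{self.name}: served by neighbour {nb.name}"
        self.admit(content, weight)
        return f"{self.name}: fetched from origin"

    def admit(self, content, weight):
        if len(self.cache) >= self.capacity:
            victim = min(self.cache, key=self.cache.get)
            if self.cache[victim] >= weight:
                return                   # incumbent content outweighs newcomer
            del self.cache[victim]       # replace the lowest-weight content
        self.cache[content] = weight

a, b = EdgeNode("A", capacity=1), EdgeNode("B", capacity=1)
a.neighbors, b.neighbors = [b], [a]
print(a.request("video1", weight=5))    # A: fetched from origin
print(b.request("video1", weight=5))    # B: served by neighbour A
print(b.request("video2", weight=3))    # B: fetched from origin
```

    Because B can see A's table, it avoids caching a second copy of video1 and spends its space on content the neighbourhood does not yet hold.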

  19. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform [1], we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of SRM service to face the ever-increasing requirements of LHC data handling. In this paper we will describe the previous limitations of the architecture of the SRM server and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  20. A Scalable proxy cache for Grid Data Access

    Science.gov (United States)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  1. High Performance Analytics with the R3-Cache

    Science.gov (United States)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.
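
    The fragment-reuse idea can be sketched in miniature (our toy version; the real R3-cache indexes fragments with an in-memory R-tree rather than a linear list):

```python
def contains(outer, inner):
    """True if rectangle `inner` lies entirely inside rectangle `outer`."""
    (ox1, oy1, ox2, oy2), (ix1, iy1, ix2, iy2) = outer, inner
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2

class FragmentCache:
    def __init__(self, table):
        self.table = table               # the full fact table (list of points)
        self.fragments = []              # (rect, rows) pairs from past queries
        self.scans = 0

    def query(self, rect):
        x1, y1, x2, y2 = rect
        for frect, rows in self.fragments:
            if contains(frect, rect):    # answer entirely from a cached fragment
                return [p for p in rows
                        if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
        self.scans += 1                  # fall back to a (costly) table scan
        rows = [p for p in self.table
                if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
        self.fragments.append((rect, rows))
        return rows

pts = [(1, 1), (2, 3), (5, 5), (8, 2)]
fc = FragmentCache(pts)
fc.query((0, 0, 6, 6))                   # cold: one scan, fragment cached
print(fc.query((1, 1, 3, 3)), fc.scans)  # served from the fragment: 1 scan
```

    The second query never touches the base table because its rectangle falls inside a previously answered one; generalizing this to partial overlaps and R-tree-indexed fragments is the substance of the paper's design.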

  2. National Dam Safety Program. Eaton Brook Reservoir Dam (Inventory Number NY 352), Susquehanna River Basin, Madison County, New York. Phase I Inspection Report,

    Science.gov (United States)

    1980-02-19

    Visual inspection, West Eaton; hydrology; structural stability. This report provides information and analysis on the physical condition of the dam as of the report date. Information and analysis are based on visual inspection. Susquehanna River Basin, Madison County, New York. Table of contents: Assessment - Overview; Photograph; Project Information; 1.1 General; 1.2

  3. Caching at the Mobile Edge: a Practical Implementation

    DEFF Research Database (Denmark)

    Poderys, Justas; Artuso, Matteo; Lensbøl, Claus Michael Oest

    2018-01-01

    Thanks to recent advances in mobile networks, it is becoming increasingly popular to access heterogeneous content from mobile terminals. There are, however, unique challenges in mobile networks that affect the perceived quality of experience (QoE) at the user end. One such challenge is the higher...... latency that users typically experience in mobile networks compared to wired ones. Cloud-based radio access networks with content caches at the base stations are seen as a key contributor in reducing the latency required to access content and thus improve the QoE at the mobile user terminal. In this paper...... for the mobile user obtained by caching content at the base stations. This is quantified with a comparison to non-cached content by means of ping tests (10–11% shorter times), a higher response rate for web traffic (1.73–3.6 times higher), and an improvement in the jitter (6% reduction)....

  4. Temperature and leakage aware techniques to improve cache reliability

    Science.gov (United States)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, and thus arises the need for power-efficient cache memories. Cache is the simplest cost-effective method to attain a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. Cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence the memory bandwidth is frequently a bottleneck that can affect peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent works have focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power, and cost per die. Lately, the focus of power dissipation in the new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that drains battery charge and forces shutdown too early through wasted energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, conserve power, and reduce the size of digital circuits that are increasingly condensed. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work first shows that by eliminating hotspots in the cache memory, leakage power is reduced and therefore reliability is improved.
The second technique studied is data quality management, which improves the quality of the data

  5. Reservoir management

    International Nuclear Information System (INIS)

    Satter, A.; Varnon, J.E.; Hoang, M.T.

    1992-01-01

    A reservoir's life begins with exploration leading to discovery, followed by delineation of the reservoir, development of the field, production by primary, secondary and tertiary means, and finally abandonment. Sound reservoir management is the key to maximizing economic operation of the reservoir throughout its entire life. Technological advances and rapidly increasing computer power are providing tools to better manage reservoirs and are widening the gap between good and poor reservoir management. The modern reservoir management process involves goal setting, planning, implementing, monitoring, evaluating, and revising plans. Setting a reservoir management strategy requires knowledge of the reservoir, availability of technology, and knowledge of the business, political, and environmental climate. Formulating a comprehensive management plan involves depletion and development strategies, data acquisition and analyses, geological and numerical model studies, production and reserves forecasts, facilities requirements, economic optimization, and management approval. This paper provides managers, engineers, geologists, geophysicists, and field operations staff with a better understanding of the practical approach to reservoir management using a multidisciplinary, integrated team approach.

  6. Efficient cache oblivious algorithms for randomized divide-and-conquer on the multicore model

    OpenAIRE

    Sharma, Neeraj; Sen, Sandeep

    2012-01-01

    In this paper we present randomized algorithms for sorting and convex hull that achieve optimal performance (for speed-up and cache misses) on the multicore model with private caches. Our algorithms are cache-oblivious and generalize the randomized divide-and-conquer strategy given by Reischuk and by Reif and Sen. Although the approach yielded optimal speed-up in the PRAM model, we require additional techniques to optimize cache misses in an oblivious setting. Under a mild assumption on in...

  7. Decentralized Caching for Content Delivery Based on Blockchain: A Game Theoretic Perspective

    OpenAIRE

    Wang, Wenbo; Niyato, Dusit; Wang, Ping; Leshem, Amir

    2018-01-01

    Blockchains enable tamper-proof, ordered logging of transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework for proactive caching in a hierarchical wireless network based on blockchains. We employ blockchain-based smart contracts to construct an autonomous content-caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statisti...

  8. Enabling Efficient Dynamic Resizing of Large DRAM Caches via A Hardware Consistent Hashing Mechanism

    OpenAIRE

    Chang, Kevin K.; Loh, Gabriel H.; Thottethodi, Mithuna; Eckert, Yasuko; O'Connor, Mike; Manne, Srilatha; Hsu, Lisa; Subramanian, Lavanya; Mutlu, Onur

    2016-01-01

    Die-stacked DRAM has been proposed for use as a large, high-bandwidth, last-level cache with hundreds or thousands of megabytes of capacity. Not all workloads (or phases) can productively utilize this much cache space, however. Unfortunately, the unused (or under-used) cache continues to consume power due to leakage in the peripheral circuitry and periodic DRAM refresh. Dynamically adjusting the available DRAM cache capacity could largely eliminate this energy overhead. However, the current p...
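The resizing idea above relies on consistent hashing: when a cache section is powered down, only the blocks that hashed to it should move, not the whole cache's contents. A minimal software sketch of that property (illustrative names; not the paper's hardware mechanism):

```python
import hashlib
from bisect import bisect_right

# Hypothetical sketch: a consistent-hash ring mapping block addresses to DRAM
# cache "sections". Removing one section remaps only the blocks that lived on
# it (about 1/8 here), which is what makes dynamic resizing cheap.

def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, sections, vnodes=64):
        # Each section gets several virtual nodes for smoother balancing.
        self.ring = sorted((_h(f"{s}#{v}"), s)
                           for s in sections for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def lookup(self, block_addr: int) -> str:
        i = bisect_right(self.keys, _h(str(block_addr))) % len(self.ring)
        return self.ring[i][1]

sections = [f"sec{i}" for i in range(8)]
ring_full = ConsistentHashRing(sections)
ring_small = ConsistentHashRing(sections[:-1])  # one section powered down

moved = sum(ring_full.lookup(a) != ring_small.lookup(a) for a in range(10000))
print(f"{moved / 10000:.1%} of blocks remapped")  # roughly 1/8 expected
```

With a naive modulo mapping (`addr % num_sections`), shrinking from 8 to 7 sections would instead remap almost every block.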

  9. A Primer on Memory Consistency and Cache Coherence

    CERN Document Server

    Sorin, Daniel; Wood, David

    2011-01-01

    Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached

  10. Effectiveness of caching in a distributed digital library system

    DEFF Research Database (Denmark)

    Hollmann, J.; Ardø, Anders; Stenstrom, P.

    2007-01-01

    offers a tremendous functional advantage to a user, the fulltext download delays caused by the network and queuing in servers make the user-perceived interactive performance poor. This paper studies how effective caching of articles at the client level can be achieved as well as at intermediate points...... as manifested by gateways that implement the interfaces to the many fulltext archives. A central research question in this approach is: What is the nature of locality in the user access stream to such a digital library? Based on access logs that drive the simulations, it is shown that client-side caching can...

  11. Effective caching of shortest paths for location-based services

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Thomsen, Jeppe Rishede; Yiu, Man Lung

    2012-01-01

    Web search is ubiquitous in our daily lives. Caching has been extensively used to reduce the computation time of the search engine and reduce the network traffic beyond a proxy server. Another form of web search, known as online shortest path search, is popular due to advances in geo-positioning...

  12. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o

  13. Alignment of Memory Transfers of a Time-Predictable Stack Cache

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Brandner, Florian

    2014-01-01

    of complex cache states. Instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk to introduce complexity to the otherwise simple WCET analysis. In this work, we investigate three...

  14. Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Jordan, Alexander; Brandner, Florian

    2014-01-01

    of the cache content to main memory, if the content was not modified in the meantime. At first sight, this appears to be an average-case optimization. Indeed, measurements show that the number of cache blocks spilled is reduced to about 17% and 30% in the mean, depending on the stack cache size. Furthermore...

  15. The impact of using combinatorial optimisation for static caching of posting lists

    DEFF Research Database (Denmark)

    Petersen, Casper; Simonsen, Jakob Grue; Lioma, Christina

    2015-01-01

    Caching posting lists can reduce the amount of disk I/O required to evaluate a query. Current methods use optimisation procedures for maximising the cache hit ratio. A recent method selects posting lists for static caching in a greedy manner and obtains higher hit rates than standard cache eviction...... policies such as LRU and LFU. However, a greedy method does not formally guarantee an optimal solution. We investigate whether the use of methods guaranteed, in theory, to find an approximately optimal solution would yield higher hit rates. Thus, we cast the selection of posting lists for caching...
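The greedy baseline described above can be sketched as a knapsack-style heuristic that ranks posting lists by expected hits per unit of cache space and fills the budget in that order; all term names and numbers below are illustrative, not the paper's data:

```python
# Hypothetical sketch: greedy static selection of posting lists for caching.
# Each term has a query frequency (expected hits) and a posting-list size;
# ranking by frequency/size and filling the budget greedily is the baseline
# the paper compares against. It is not guaranteed optimal.

def greedy_cache(terms, budget):
    """terms: dict term -> (freq, size). Returns the set of cached terms."""
    ranked = sorted(terms, key=lambda t: terms[t][0] / terms[t][1],
                    reverse=True)
    cached, used = set(), 0
    for t in ranked:
        size = terms[t][1]
        if used + size <= budget:
            cached.add(t)
            used += size
    return cached

terms = {"cache": (900, 30), "county": (120, 10), "reservoir": (400, 80),
         "the": (5000, 500), "squirrel": (60, 5)}
print(sorted(greedy_cache(terms, budget=100)))  # -> ['cache', 'county', 'squirrel']
```

Note how the very frequent term "the" is skipped: its huge posting list gives it poor hits-per-byte, which is exactly the trade-off a formal knapsack-style optimiser would also weigh.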

  16. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015)

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
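The "innocent-looking strides" phenomenon follows from standard set-mapping arithmetic: in a cache with S sets, a constant stride of s cache lines touches only S / gcd(S, s) distinct sets, so strides sharing a large power-of-two factor with S concentrate all accesses into a few sets. A small sketch (parameters illustrative; this is not the article's exact efficiency formula):

```python
from math import gcd

# Hypothetical sketch of the stride effect described above: accesses at a
# constant stride s (in cache lines) to a cache with num_sets sets cycle
# through only num_sets / gcd(num_sets, s) distinct sets.

def sets_touched(num_sets: int, stride_lines: int) -> int:
    return num_sets // gcd(num_sets, stride_lines)

for stride in (1, 3, 64, 256, 1024):
    print(f"stride {stride:5d} lines -> "
          f"{sets_touched(1024, stride):4d} of 1024 sets used")
```

An odd stride like 3 still uses every set, while a stride of 1024 lines maps every access to a single set, which is why padding arrays away from power-of-two leading dimensions is a classic fix.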

  17. Dynamic web cache publishing for IaaS clouds using Shoal

    Science.gov (United States)

    Gable, Ian; Chester, Michael; Armstrong, Patrick; Berghaus, Frank; Charbonneau, Andre; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan

    2014-06-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache.

  18. Dynamic web cache publishing for IaaS clouds using Shoal

    International Nuclear Information System (INIS)

    Gable, Ian; Chester, Michael; Berghaus, Frank; Leavett-Brown, Colin; Paterson, Michael; Prior, Robert; Sobie, Randall; Taylor, Ryan; Armstrong, Patrick; Charbonneau, Andre

    2014-01-01

    We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Our application uses the Squid HTTP cache. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid Cache

  19. Cache Timing Analysis of LFSR-based Stream Ciphers

    DEFF Research Database (Denmark)

    Zenner, Erik; Leander, Gregor; Hawkes, Philip

    2009-01-01

    Cache timing attacks are a class of side-channel attacks that is applicable against certain software implementations. They have generated significant interest when demonstrated against the Advanced Encryption Standard (AES), but have more recently also been applied against other cryptographic pri...

  20. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  1. Language-Based Caching of Dynamically Generated HTML

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Olesen, Steffan

    2002-01-01

    are composed of higher-order templates that are plugged together to construct complete documents. We show how to exploit this feature to provide an automatic fine-grained caching of document templates, based on the service source code. A service transmits not the full HTML document but instead a compact JavaScript...

  2. A distributed storage system with dCache

    Science.gov (United States)

    Behrmann, G.; Fuhrmann, P.; Grønager, M.; Kleist, J.

    2008-07-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth.

  3. A distributed storage system with dCache

    International Nuclear Information System (INIS)

    Behrmann, G; Groenager, M; Fuhrmann, P; Kleist, J

    2008-01-01

    The LCG collaboration is encompassed by a number of Tier 1 centers. The Nordic LCG Tier 1, operated by NDGF, is in contrast to many other Tier 1 centers distributed over the Nordic countries. A distributed setup was chosen for both political and technical reasons, but also provides a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen for implementing the storage aspects of the Nordic Tier 1. In contrast to classic dCache deployments, we deploy dCache over a WAN with limited bandwidth, high latency, frequent network failures, and spanning many administrative domains. These properties provide unique challenges, covering topics such as security, administration, maintenance, upgradability, reliability, and performance. Our initial focus has been on implementing the GFD.47 OGF recommendation (which introduced the GridFTP 2 protocol) in dCache and the Globus Toolkit. Compared to GridFTP 1, GridFTP 2 allows for more intelligent data flow between clients and storage pools, thus enabling more efficient use of our limited bandwidth

  4. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and thus the deciding factor for overall throughput. Second, data access may fallback to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.

  5. A Software Managed Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Jordan, Alexander; Abbaspourseyedi, Sahar; Schoeberl, Martin

    2016-01-01

    In a real-time system, the use of a scratchpad memory can mitigate the difficulties related to analyzing data caches, whose behavior is inherently hard to predict. We propose to use a scratchpad memory for stack allocated data. While statically allocating stack frames for individual functions to ...

  6. dCache, agile adoption of storage technology

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and provide fast, site-local access. When the dCache project started, the focus was on managing a relatively small disk cache in front of large tape archives. Over the project's lifetime storage technology has changed. During this period, technology changes have driven down the cost-per-GiB of hard disks. This resulted in a shift towards systems where the majority of data is stored on disk. More recently, the availability of Solid State Disks, while not yet a replacement for magnetic disks, offers an intriguing opportunity for significant performance improvement if they can be used intelligently within an existing system. New technologies provide new opportunities and dCache user communities' computi...

  7. Cache Timing Analysis of eStream Finalists

    DEFF Research Database (Denmark)

    Zenner, Erik

    2009-01-01

    Cache Timing Attacks have attracted a lot of cryptographic attention due to their relevance for the AES. However, their applicability to other cryptographic primitives is less well researched. In this talk, we give an overview over our analysis of the stream ciphers that were selected for phase 3...

  8. Caching Over-The-Top Services, the Netflix Case

    DEFF Research Database (Denmark)

    Jensen, Stefan; Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2015-01-01

    Problem (LLB-CFL). The solution search processes are implemented based on Genetic Algorithms (GA), designing genetic operators highly targeted towards this specific problem. The proposed methods are applied to a case study focusing on the demand and cache specifications of Netflix, and framed into a real...

  9. County Spending

    Data.gov (United States)

    Montgomery County of Maryland — This dataset includes County spending data for Montgomery County government. It does not include agency spending. Data considered sensitive or confidential and will...

  10. A trace-driven analysis of name and attribute caching in a distributed system

    Science.gov (United States)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
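The client-side simulation described above can be sketched with a small LRU cache of directories; the trace below is synthetic and illustrative, not the paper's workstation trace:

```python
from collections import OrderedDict

# Hypothetical sketch: measuring the hit rate of a small LRU cache of
# directory names, in the spirit of the simulation described above.

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
        self.hits = self.lookups = 0

    def lookup(self, key):
        self.lookups += 1
        if key in self.store:
            self.store.move_to_end(key)   # mark as most recently used
            self.hits += 1
            return True
        self.store[key] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return False

cache = LRUCache(capacity=10)
# Synthetic trace: a few hot directories plus a tail of one-off lookups.
trace = (["/home/u1", "/tmp", "/usr/lib"] * 50) + [f"/dir{i}" for i in range(30)]
for d in trace:
    cache.lookup(d)
print(f"hit rate: {cache.hits / cache.lookups:.1%}")  # hit rate: 81.7%
```

As in the paper, a handful of hot directories dominate the hit rate, while the long tail of rarely used directories contributes only cold misses.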

  11. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive read, exclusive write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that on any trace of length N directed to a C-line set runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
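A useful sequential baseline for the parallel algorithms above is the reuse-distance characterization of LRU: a reference hits a C-line set exactly when fewer than C distinct lines have been referenced since its previous occurrence. A sketch of that baseline (not the paper's parallel algorithm):

```python
# Hypothetical sketch: sequential LRU miss determination via reuse distance.
# Under LRU, x_t hits a C-line set iff the number of distinct lines
# referenced since the previous occurrence of x_t is less than C.

def lru_misses(trace, capacity):
    last_pos = {}
    misses = []
    for t, x in enumerate(trace):
        if x not in last_pos:
            misses.append(t)                        # cold miss
        else:
            distinct = len(set(trace[last_pos[x] + 1:t]))
            if distinct >= capacity:
                misses.append(t)                    # capacity miss
        last_pos[x] = t
    return misses

trace = ["a", "b", "c", "a", "d", "e", "a"]
print(lru_misses(trace, capacity=3))  # -> [0, 1, 2, 4, 5]
```

This quadratic-time formulation is what the O(log N)-time, N-processor EREW algorithm in the paper must reproduce; the reuse-distance view is also why LRU results for all set sizes C can be derived from one stack simulation.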

  12. Evaluation of low-temperature geothermal potential in Cache Valley, Utah. Report of investigation No. 174

    Energy Technology Data Exchange (ETDEWEB)

    de Vries, J.L.

    1982-11-01

    Field work consisted of locating 90 wells and springs throughout the study area, collecting water samples for later laboratory analyses, and field measurement of pH, temperature, bicarbonate alkalinity, and electrical conductivity. Na⁺, K⁺, Ca²⁺, Mg²⁺, SiO₂, Fe, SO₄²⁻, Cl⁻, F⁻, and total dissolved solids were determined in the laboratory. Temperature profiles were measured in 12 additional, unused wells. Thermal gradients calculated from the profiles were approximately the same as the average for the Basin and Range province, about 35 °C/km. One well produced a gradient of 297 °C/km, most probably as a result of a near-surface occurrence of warm water. Possible warm-water reservoir temperatures were calculated using both the silica and the Na-K-Ca geothermometers, with the results averaging about 50 to 100 °C. If mixing calculations were applied, taking into account the temperatures and silica contents of both warm springs or wells and the cold groundwater, reservoir temperatures up to about 200 °C were indicated. Considering measured surface water temperatures, calculated reservoir temperatures, thermal gradients, and the local geology, most of the Cache Valley, Utah area is unsuited for geothermal development. However, the areas of North Logan, Benson, and Trenton were found to have anomalously warm groundwater in comparison to the background temperature of 13.0 °C for the study area. The warm water has potential for isolated energy development but is not warm enough for major commercial development.
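A silica geothermometer of the kind used above can be sketched as follows. This assumes Fournier's quartz (no steam loss) formula, T(°C) = 1309 / (5.19 − log₁₀ S) − 273.15 with S the dissolved silica in mg/kg; the report does not state which variant it actually applied:

```python
from math import log10

# Hypothetical sketch of a silica geothermometer, assuming Fournier's quartz
# formula; the function name and the choice of variant are illustrative.

def quartz_geothermometer(silica_mg_per_kg: float) -> float:
    """Estimated reservoir temperature (deg C) from dissolved silica."""
    return 1309.0 / (5.19 - log10(silica_mg_per_kg)) - 273.15

for s in (25, 50, 100):
    print(f"SiO2 = {s:3d} mg/kg -> reservoir T ~ "
          f"{quartz_geothermometer(s):5.1f} deg C")
```

For dissolved silica between roughly 25 and 100 mg/kg this formula yields about 72 to 137 °C, bracketing the 50 to 100 °C average reservoir temperatures reported above.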

  13. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. Cache is responsible for a major part of the energy consumption (approx. 50%) of processors. This paper presents a high-level implementation of a micropipelined asynchronous architecture of an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instructions. The architecture is implemented in a Field Programmable Gate Array (FPGA) chip using Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL) along with advanced synthesis and place-and-route tools.
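A direct-mapped, write-through cache of the kind described above can be modeled behaviorally as a sketch; the real design is RTL on an FPGA, and the line size and parameter names here are illustrative:

```python
# Hypothetical behavioral model of a direct-mapped, write-through cache:
# on a write, data goes to both the cache line (if resident) and backing
# memory; on a read miss, the whole line is filled from memory.

LINE_WORDS = 4  # illustrative line size, in words

class DirectMappedWriteThrough:
    def __init__(self, num_lines, memory):
        self.num_lines = num_lines
        self.memory = memory                  # backing store: addr -> word
        self.tags = [None] * num_lines
        self.lines = [[0] * LINE_WORDS for _ in range(num_lines)]

    def _split(self, addr):
        offset = addr % LINE_WORDS
        index = (addr // LINE_WORDS) % self.num_lines
        tag = addr // (LINE_WORDS * self.num_lines)
        return tag, index, offset

    def read(self, addr):
        tag, index, offset = self._split(addr)
        if self.tags[index] != tag:           # miss: fill line from memory
            base = addr - offset
            self.lines[index] = [self.memory.get(base + i, 0)
                                 for i in range(LINE_WORDS)]
            self.tags[index] = tag
        return self.lines[index][offset]

    def write(self, addr, value):
        tag, index, offset = self._split(addr)
        self.memory[addr] = value             # write-through to memory
        if self.tags[index] == tag:           # update cached copy on hit
            self.lines[index][offset] = value

mem = {}
cache = DirectMappedWriteThrough(num_lines=8, memory=mem)
cache.write(100, 7)
print(cache.read(100), mem[100])  # both see the written value: 7 7
```

Write-through keeps memory always up to date (no dirty bits, simple consistency), at the cost of sending every store to memory, which is one reason the asynchronous pipeline above matters for throughput.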

  14. Seed perishability determines the caching behaviour of a food-hoarding bird.

    Science.gov (United States)

    Neuschulz, Eike Lena; Mueller, Thomas; Bollmann, Kurt; Gugerli, Felix; Böhning-Gaese, Katrin

    2015-01-01

    Many animals hoard seeds for later consumption and establish seed caches that are often located at sites with specific environmental characteristics. One explanation for the selection of non-random caching locations is the avoidance of pilferage by other animals. Another possible hypothesis is that animals choose locations that hamper the perishability of stored food, allowing the consumption of unspoiled food items over long time periods. We examined seed perishability and pilferage avoidance as potential drivers for caching behaviour of spotted nutcrackers (Nucifraga caryocatactes) in the Swiss Alps, where the birds are specialized on caching seeds of Swiss stone pine (Pinus cembra). We used seedling establishment as an inverse measure of seed perishability, as established seedlings can no longer be consumed by nutcrackers. We recorded the environmental conditions (i.e. canopy openness and soil moisture) of seed caching, seedling establishment and pilferage sites. Our results show that sites of seed caching and seedling establishment had opposed microenvironmental conditions. Canopy openness and soil moisture were negatively related to seed caching but positively related to seedling establishment, i.e. nutcrackers cached seeds preferentially at sites where seed perishability was low. We found no effects of environmental factors on cache pilferage, i.e. neither canopy openness nor soil moisture had significant effects on pilferage rates. We thus could not relate caching behaviour to pilferage avoidance. Our study highlights the importance of seed perishability as a mechanism for seed-caching behaviour, which should be considered in future studies. Our findings could have important implications for the regeneration of plants whose seeds are dispersed by seed-caching animals, as the potential of seedlings to establish may strongly decrease if animals cache seeds at sites that favour seed perishability rather than seedling establishment. © 2014 The Authors. Journal

  15. Food availability and animal space use both determine cache density of Eurasian red squirrels.

    Directory of Open Access Journals (Sweden)

    Ke Rong

    Full Text Available Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increase the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and main cache losses. We conducted systematic cache sampling investigations to estimate the effects of food availability on cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache pattern. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not in response to decreased food availability. The cache density declined with the hoarding distance. Cache density was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. The pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that 1) animal space use is an important factor that affects food hoarding distance and associated cache densities, 2) animals employ different hoarding strategies based on food availability, and 3) seed dispersal outside the original stand is stimulated in poor seed years.

  16. Food Availability and Animal Space Use Both Determine Cache Density of Eurasian Red Squirrels

    Science.gov (United States)

    Rong, Ke; Yang, Hui; Ma, Jianzhang; Zong, Cheng; Cai, Tijiu

    2013-01-01

    Scatter hoarders are not able to defend their caches. A longer hoarding distance combined with lower cache density can reduce cache losses but increase the costs of hoarding and retrieving. Scatter hoarders arrange their cache density to achieve an optimal balance between hoarding costs and main cache losses. We conducted systematic cache sampling investigations to estimate the effects of food availability on cache patterns of Eurasian red squirrels (Sciurus vulgaris). This study was conducted over a five-year period at two sample plots in a Korean pine (Pinus koraiensis)-dominated forest with contrasting seed production patterns. During these investigations, the locations of nest trees were treated as indicators of squirrel space use to explore how space use affected cache pattern. The squirrels selectively hoarded heavier pine seeds farther away from seed-bearing trees. The heaviest seeds were placed in caches around nest trees regardless of the nest tree location, and this placement was not in response to decreased food availability. The cache density declined with the hoarding distance. Cache density was lower at sites with lower seed production and during poor seed years. During seed mast years, the cache density around nest trees was higher and invariant. The pine seeds were dispersed over a larger distance when seed availability was lower. Our results suggest that 1) animal space use is an important factor that affects food hoarding distance and associated cache densities, 2) animals employ different hoarding strategies based on food availability, and 3) seed dispersal outside the original stand is stimulated in poor seed years. PMID:24265833

  17. I-Structure software cache for distributed applications

    Directory of Open Access Journals (Sweden)

    Alfredo Cristóbal Salas

    2004-01-01

    Full Text Available In this article we describe the I-Structure software cache for distributed-memory environments (D-ISSC), which takes advantage of data locality while retaining the latency-tolerance capability of I-Structure memory systems. The programming facilities of MPI programs hide synchronization problems from the programmer. Our experimental evaluation using a benchmark suite indicates that PC clusters with I-Structure and its D-ISSC caching mechanism are more robust. The system can speed up both regular and irregular communication-intensive applications.

  18. Study on data acquisition system based on reconfigurable cache technology

    Science.gov (United States)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems; it represents the waveform processing capability of the system per unit time. The higher the waveform capture rate, the greater the chance of capturing elusive events and the more reliable the test result. First, this paper analyzes the impact of several factors on the waveform capture rate of the system; then a novel technology based on a reconfigurable cache is proposed to optimize the system architecture. The simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in system cost, and that the proposed technology has broad application prospects.

  19. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of a cache mechanism for complicated design optimization. While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend in this progress indicates that more complicated applications become the next target of design optimization beyond the growth of computational resources. As the progress of the past two decades required response surface techniques, decomposition techniques, etc., a new framework must be introduced for the future of design optimization methods. This paper proposes a possibility of what we call a cache mechanism for mediating the coming challenge, and briefly demonstrates some of its promise through the idea of Voronoi-diagram-based cumulative approximation as an example of its implementation, as well as the development of strict robust design and the extension of design optimization for product variety.

  20. Storage and Caching: Synthesis of Flow-based Microfluidic Biochips

    OpenAIRE

    Tseng, Tsun-Ming; Li, Bing; Ho, Tsung-Yi; Schlichtmann, Ulf

    2017-01-01

    Flow-based microfluidic biochips are widely used in lab-on-a-chip experiments. In these chips, devices such as mixers and detectors connected by micro-channels execute specific operations. Intermediate fluid samples are saved in storage temporarily until target devices become available. However, if the storage unit does not have enough capacity, fluid samples must wait in devices, reducing their efficiency and thus increasing the overall execution time. Consequently, storage and caching of...

  1. Justice and Immigrant Latino Recreation Geography in Cache Valley, Utah

    OpenAIRE

    Madsen, Jodie; Radel, Claudia; Endter-Wada, Joanna

    2014-01-01

    Latinos are the largest U.S. non-mainstreamed ethnic group, and social and environmental justice considerations dictate that recreation professionals and researchers meet their recreation needs. This study reconceptualizes this diverse group’s recreation patterns, looking at where immigrant Latino individuals in Cache Valley, Utah do recreate rather than where they do not. Through qualitative interviews and interactive mapping, thirty participants discussed what recreation means to them and explai...

  2. Icarus: a caching simulator for information centric networking (ICN)

    OpenAIRE

    Saino, L.; Psaras, I.; Pavlou, G.

    2014-01-01

    Information-Centric Networking (ICN) is a new networking paradigm proposing a shift of the main network abstraction from host identifiers to location-agnostic content identifiers. So far, several architectures have been proposed implementing this paradigm shift. A key feature, common to all proposed architectures, is the in-network caching capability, enabled by the location-agnostic, explicit naming of contents. This aspect, in particular, has recently received considerable attention by ...

  3. Using Shadow Page Cache to Improve Isolated Drivers Performance

    Directory of Open Access Journals (Sweden)

    Hao Zheng

    2015-01-01

    Full Text Available With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users’ virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver’s write operations by combining capture of the driver’s write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table read-only, so as to capture the isolated driver’s write operations through page faults, which adversely affects the performance of the driver. By delaying setting frequently used shadow pages’ write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot’s reliability too much.

  4. New distributive web-caching technique for VOD services

    Science.gov (United States)

    Kim, Iksoo; Woo, Yoseop; Hwang, Taejune; Choi, Jintak; Kim, Youngjune

    2002-12-01

    At present, some of the most popular services on the internet are on-demand services, including VOD, EOD and NOD. The main problems for on-demand service are the excessive load on the server and the insufficiency of network resources. Service providers therefore require powerful, expensive servers, while clients face long end-to-end delays and network congestion. This paper presents a new distributive web-caching technique for fluent VOD services using distributed proxies in a Head-end Network (HNET). The HNET consists of a Switching-Agent (SA) as a control node, some Head-end Nodes (HENs) as proxies, and clients connected to the HENs, with each HEN composing a LAN. Clients request VOD services from the server through a HEN and the SA. The SA operates as the heart of the HNET; all operations using the proposed distributive caching technique are performed under the control of the SA. This technique stores some parts of a requested video on the corresponding HENs when clients connected to each HEN request an identical video. Thus, clients access those HENs (proxies) alternately to acquire video streams, which leads to equi-loaded proxies (HENs). We adopt a cache replacement strategy that combines LRU and LFU, removes streams from other HENs prior to server streams, and replaces the first block of a video last to reduce end-to-end delay.
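The combined LRU/LFU replacement idea mentioned above can be illustrated with a small sketch. This is a toy policy, not the paper's actual algorithm (the HEN-vs-server stream priority and first-block rules are not reproduced); the class name, the `weight` parameter, and the scoring formula are illustrative assumptions.

```python
import itertools

class HybridCache:
    """Toy replacement policy blending LRU recency with LFU frequency.

    `weight` = 1.0 gives pure LRU behaviour, 0.0 gives pure LFU;
    intermediate values mix the two, in the spirit of the combined
    LRU/LFU strategy described in the abstract above."""

    def __init__(self, capacity, weight=0.5):
        self.capacity = capacity
        self.weight = weight
        self.clock = itertools.count()  # logical time for recency
        self.items = {}                 # key -> [last_used, frequency]

    def _score(self, key):
        last_used, freq = self.items[key]
        return self.weight * last_used + (1 - self.weight) * freq

    def access(self, key):
        # Evict the lowest-scoring entry when a new key would overflow.
        if key not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=self._score)
            del self.items[victim]
        entry = self.items.setdefault(key, [0, 0])
        entry[0] = next(self.clock)  # update recency
        entry[1] += 1                # update frequency
```

For example, with capacity 2, a key accessed three times survives eviction against a key accessed once more recently, because frequency offsets recency in the blended score.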

  5. Using shadow page cache to improve isolated drivers performance.

    Science.gov (United States)

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. By delaying setting frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  6. Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks

    Directory of Open Access Journals (Sweden)

    Chun He

    2015-01-01

    Full Text Available In relay-enhanced cellular systems, the throughput of User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop, but it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node Basestation (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of relays. Each relay allocates RBs for relay UEs based on the size of the relay UE’s Transport Block. We also design a relay UE ACK feedback mechanism to update the data in the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.

  7. Design of a shared coherent cache for a multiple channel architecture

    Science.gov (United States)

    Reisner, John A.

    1993-12-01

    The Multiple Channel Architecture (MCA) is a recently proposed computer architecture which uses fiber optic communications to overcome many of the problems associated with interconnection networks. There exists a detailed MCA simulator which faithfully simulates an MCA system; however, the original version of the simulator did not cache shared data. In order to improve the performance of the MCA, a cache coherency protocol was developed and implemented in the simulator. The protocol has two significant features: (1) a time-division multiplexed (TDM) communication bus is used for coherency traffic, and (2) the shared data is cached in an independent cache. The modified simulator was then used to test the protocol. Two applications and six test configurations were used throughout the testing. Experimental results showed that the protocol consistently improved system performance. Also, a proof-of-concept experiment indicated that performance improvements can be attained by varying cache parameters between the independent shared and private data caches.

  8. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    Science.gov (United States)

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  9. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web application characteristics. Thus, new prefetching policies must be loaded dynamically as needs change. Most Web caches are large C programs, and thus adding one or more prefetching policies to an existing Web cache is a daunting task. The main problem is that prefetching concerns crosscut the cache structure … these issues. In particular, µ-Dyner provides a low overhead for aspect invocation that meets the performance needs of Web caches …

  10. Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks

    OpenAIRE

    Baştuğ, Ejder; Bennis, Mehdi; Debbah, Mérouane

    2014-01-01

    This article explores one of the key enablers of beyond-4G wireless networks leveraging small cell network deployments, namely proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context-awareness and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands, via caching at base stations and users' devices. In order to show the effectiveness of proactive caching,...

  11. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of internet clients keeps growing over time, so the responsiveness of internet access becomes ever slower. To help speed up access, a cache on the proxy server is needed. This research aims to analyze the performance of a proxy server on an internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the proxy server was designed using a simulation model of an internet network consisting of a Web server, Proxy ...

  12. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
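For readers unfamiliar with the underlying recurrence, a minimal sketch of the classical Nussinov dynamic program follows. This is the baseline ("Classical") algorithm only; the cache-efficient ByRow/ByBox traversal orders from the abstract reorder this same computation and are not reproduced here. The function name and the base-pair set (Watson-Crick plus G-U wobble) are illustrative assumptions.

```python
def nussinov(seq):
    """Maximum number of nested complementary base pairs in `seq` (O(n^3) DP)."""
    can_pair = {("A", "U"), ("U", "A"), ("G", "C"),
                ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]  # dp[i][j]: best pairing for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])     # i or j left unpaired
            if (seq[i], seq[j]) in can_pair:
                inner = dp[i + 1][j - 1] if i + 1 <= j - 1 else 0
                best = max(best, inner + 1)            # i pairs with j
            for k in range(i + 1, j):                  # bifurcation split
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]
```

The triply nested loop over an upper-triangular table is exactly where the cache-miss behaviour studied in the paper arises: the `dp[k + 1][j]` accesses stride across rows, which row- and box-blocked traversals reorganize.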

  13. A Cache System Design for CMPs with Built-In Coherence Verification

    Directory of Open Access Journals (Sweden)

    Mamata Dalui

    2016-01-01

    Full Text Available This work reports an effective design of cache system for Chip Multiprocessors (CMPs). It introduces built-in logic for verification of cache coherence in CMPs realizing a directory-based protocol. It is developed around the cellular automata (CA) machine, invented by John von Neumann in the 1950s. A special class of CA, referred to as single length cycle 2-attractor cellular automata (TACA), has been planted to detect the inconsistencies in cache line states of processors’ private caches. The TACA module captures coherence status of the CMPs’ cache system and memorizes any inconsistent recording of the cache line states during the processors’ reference to a memory block. Theory has been developed to empower a TACA to analyse the cache state updates and then to settle to an attractor state indicating quick decision on a faulty recording of cache line status. The introduction of segmentation of the CMPs’ processor pool ensures a better efficiency, in determining the inconsistencies, by reducing the number of computation steps in the verification logic. The hardware requirement for the verification logic points to the fact that the overhead of the proposed coherence verification module is much lower than that of conventional verification units and is insignificant with respect to the cost involved in the CMPs’ cache system.

  14. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate multi-level cache resources to many cores and greatly improve the efficiency of the cache hierarchy, resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced from 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even shows a slight improvement, even accounting for hardware overhead.

  15. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve the data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client's mobility make cache management a challenge. In this paper, we propose a utility based cache replacement policy, least utility value (LUV), to improve the data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as replacement policy. In ZC, one-hop neighbors of a client form a cooperation zone since the cost for communication with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
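The factors LUV combines (access probability, distance, coherency, size) can be sketched as a utility-based eviction loop. The scoring formula below is an illustrative stand-in, not the paper's exact LUV weighting; the field names and `evict_until_fits` helper are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    size: int           # bytes occupied in the local cache
    access_prob: float  # estimated probability of future access
    distance: int       # hops to the nearest other copy (source or cache)
    validity: float     # expected remaining fraction of valid lifetime (coherency)

def utility(item):
    # Illustrative least-utility-value score: prefer keeping items that are
    # popular, far from other copies, long-lived, and small.
    return item.access_prob * item.distance * item.validity / item.size

def evict_until_fits(cache, capacity, incoming):
    """Evict least-utility items until `incoming` fits within `capacity` bytes."""
    used = sum(it.size for it in cache)
    victims = []
    for it in sorted(cache, key=utility):   # ascending utility
        if used + incoming.size <= capacity:
            break
        cache.remove(it)
        used -= it.size
        victims.append(it.key)
    return victims
```

Under this scoring, an unpopular item near another copy is evicted before a popular item whose nearest replica is several hops away, which is the qualitative behaviour the abstract describes.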

  16. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

    Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
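The uniform-versus-Zipf comparison rests on a simple fact: under Zipf popularity, caching the most popular files captures far more request mass than a uniform cache of the same size. A minimal sketch of that calculation follows (the stochastic-geometry hit probability from the paper, which also accounts for coverage and spectrum access, is not reproduced; the function names are illustrative).

```python
def zipf_popularity(n_files, alpha):
    """Request probability of each of `n_files` files under a Zipf(alpha) law."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(popularity, cache_size):
    """Cache the `cache_size` most popular files; a request hits iff it targets one."""
    return sum(sorted(popularity, reverse=True)[:cache_size])
```

With 100 files, a skew of alpha = 0.8, and room for 10 files, the hit probability comfortably exceeds the 0.1 that a uniform (alpha = 0) popularity profile would give for the same cache size.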

  17. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    Science.gov (United States)

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  18. Audience effects on food caching in grey squirrels (Sciurus carolinensis): evidence for pilferage avoidance strategies.

    Science.gov (United States)

    Leaver, Lisa A; Hopewell, Lucy; Caldwell, Christine; Mallarky, Lesley

    2007-01-01

    If food pilferage has been a reliable selection pressure on food caching animals, those animals should have evolved the ability to protect their caches from pilferers. Evidence that animals protect their caches would support the argument that pilferage has been an important adaptive challenge. We observed naturally caching Eastern grey squirrels (Sciurus carolinensis) in order to determine whether they used any evasive tactics in order to deter conspecific and heterospecific pilferage. We found that grey squirrels used evasive tactics when they had a conspecific audience, but not when they had a heterospecific (corvid) audience. When other squirrels were present, grey squirrels spaced their caches farther apart and preferentially cached when oriented with their backs to other squirrels, but no such effect was found when birds were present. Our data provide the first evidence that caching mammals are sensitive to the risk of pilferage posed by an audience of conspecifics, and that they utilise evasive tactics that should help to minimise cache loss. We discuss our results in relation to recent theory of reciprocal pilferage and compare them to behaviours shown by caching birds.

  19. The Aquarius IIU Node: The Caches, the Address Translation Unit, and the VME Bus Interface

    Science.gov (United States)

    1989-08-01

    between the caches and the processor/prefetcher is a 32-bit bus; the cache uses a 128-bit bus to send blocks to the VME controller. [OCR-garbled figure residue omitted: a cache-controller signal listing including CONTROLLER, CE*, WE*, and data in/out lines (0-31).] ...(SUN3/160). On every node, there are two controllers for the data and instruction caches that cooperate to support Berkeley's snooping cache block state

  20. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    Science.gov (United States)

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays (Aphelocoma californica) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  1. Hydroacoustic Estimates of Fish Density Distributions in Cougar Reservoir, 2011

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R.; Zimmerman, Shon A.; Hennen, Matthew J.; Batten, George W.; Mitchell, T. D.

    2012-09-01

    Day and night mobile hydroacoustic surveys were conducted once each month from April through December 2011 to quantify the horizontal and vertical distributions of fish throughout Cougar Reservoir, Lane County, Oregon.

  2. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    String comparison such as sequence alignment, edit distance computation, longest common subsequence computation, and approximate string matching is a key task (and often computational bottleneck) in large-scale textual information retrieval. For instance, algorithms for sequence alignment … We present a fast cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm. Additionally, our new algorithm generalizes the best known theoretical complexity trade-offs for the problem.
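The "local dependencies" in this class of dynamic programs mean each table cell depends only on its left, upper, and upper-left neighbours. A minimal two-row edit-distance kernel illustrates that structure (the paper's cache-oblivious divide-and-conquer recursion over the DP table is not reproduced here; this is the plain row-by-row baseline it accelerates).

```python
def edit_distance(a, b):
    """Levenshtein distance via a two-row DP. Each cell reads only its
    left, upper, and upper-left neighbours: the local-dependency pattern
    that cache-oblivious traversals exploit."""
    prev = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,                 # delete ca
                         cur[j - 1] + 1,              # insert cb
                         prev[j - 1] + (ca != cb))    # match or substitute
        prev = cur
    return prev[-1]
```

Because every cell touches only adjacent cells, the table can be evaluated in cache-sized blocks (or recursively split) without changing the result, which is the property the abstract's algorithm builds on.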

  3. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\frac{N}{B}\log_{M/B}\frac{N}{B}+T/B)$ memory transfers, where N is the total number...

  4. Cache Complexity and Multicore Implementation for Univariate Real Root Isolation

    International Nuclear Information System (INIS)

    Chen Changbo; Moreno Maza, Marc; Xie Yuzhen

    2012-01-01

    We present parallel algorithms with optimal cache complexity for the kernel routine of many real root isolation algorithms, namely the Taylor shift by 1. We then report on multicore implementation for isolating the real roots of univariate polynomials with integer coefficients based on a classical algorithm due to Vincent, Collins and Akritas. For processing some well-known benchmark examples with sufficiently large size, our software tool reaches linear speedup on an 8-core machine. In addition, we show that our software is able to fully utilize the many cores and the memory space of a 32-core machine to tackle large problems that are out of reach for a desktop implementation.

  5. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range...

  6. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    Directory of Open Access Journals (Sweden)

    James M Thom

    Full Text Available Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching": relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii), we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  7. Web Cache Prefetching as an Aspect: Towards a Dynamic-Weaving Based Solution

    DEFF Research Database (Denmark)

    Segura-Devillechaise, Marc; Menaud, Jean-Marc; Muller, Gilles

    2003-01-01

    Given the high proportion of HTTP traffic in the Internet, Web caches are crucial to reduce user access time, network latency, and bandwidth consumption. Prefetching in a Web cache can further enhance these benefits. For the best performance, however, the prefetching policy must match user and Web...

  8. Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, Rolf; Meyer, U.

    2004-01-01

    We present improved cache-oblivious data structures and algorithms for breadth-first search and the single-source shortest path problem on undirected graphs with non-negative edge weights. Our results remove the performance gap between the currently best cache-aware algorithms for these problems...

  9. Reducing the disk I/O of Web proxy server caches

    CERN Document Server

    Maltzahn, C G; Grunwald, D

    1999-01-01

    The dramatic increase of HTTP traffic on the Internet has resulted in widespread use of large caching proxy servers as critical Internet infrastructure components. With continued growth, the demand for larger caches and higher-performance proxies grows as well. The common bottleneck of large caching proxy servers is disk I/O. We evaluate ways to reduce the amount of required disk I/O. First we compare the file system interactions of two existing Web proxy servers, CERN and SQUID. Then we show how design adjustments to the current SQUID cache architecture can dramatically reduce disk I/O. Our findings suggest that two strategies can significantly reduce disk I/O: preserve locality of the HTTP reference stream while translating these references into cache references; and use virtual memory instead of the file system for objects smaller than the system page size. The evaluated techniques reduced disk I/O by 50% to 70% (33 refs).
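
    The second strategy (keeping objects smaller than a page in memory rather than in the file system) can be sketched in Python. The `HybridStore` class, its layout, and the key-per-file scheme below are illustrative assumptions, not SQUID's or CERN's actual code:

```python
import os
import tempfile

# System page size; fall back to the common 4 KiB on platforms without sysconf.
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE") if hasattr(os, "sysconf") else 4096

class HybridStore:
    """Route cached objects by size: small objects stay in memory (avoiding
    per-object disk I/O), large objects go to the file system."""

    def __init__(self):
        self.small = {}                   # in-memory store for sub-page objects
        self.dir = tempfile.mkdtemp()     # file-system store for large objects

    def put(self, key, data):
        if len(data) < PAGE_SIZE:
            self.small[key] = data        # no disk I/O for small objects
        else:
            with open(os.path.join(self.dir, key), "wb") as f:
                f.write(data)

    def get(self, key):
        if key in self.small:
            return self.small[key]
        with open(os.path.join(self.dir, key), "rb") as f:
            return f.read()

store = HybridStore()
store.put("tiny", b"x" * 100)               # stored in memory
store.put("big", b"y" * (PAGE_SIZE * 2))    # stored on disk
```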

  10. Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Changwan; Bao, Wenlei; Cohen, Albert; Krishnamoorthy, Sriram; Pouchet, Louis-noel; Rastello, Fabrice; Ramanujam, J.; Sadayappan, Ponnuswamy

    2016-06-02

    Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than cache capacity. Array padding (increasing the size of array dimensions) is a well-known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
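
    The set-mapping effect that padding exploits can be shown with a toy Python cache model. The cache geometry (64 sets, 64-byte lines, 4-way LRU) and the pad amount are illustrative assumptions; this is not PAdvisor's padding algorithm:

```python
from collections import OrderedDict

def column_walk_misses(rows, row_len, elem=8, line=64, sets=64, ways=4, passes=2):
    """Misses on the final pass of repeatedly reading column 0 of a row-major array."""
    cache = [OrderedDict() for _ in range(sets)]      # per-set LRU of line tags
    misses = 0
    for p in range(passes):
        for i in range(rows):
            block = (i * row_len * elem) // line      # cache line holding a[i][0]
            s, tag = block % sets, block // sets
            if tag in cache[s]:
                cache[s].move_to_end(tag)             # LRU hit
            else:
                cache[s][tag] = True
                if len(cache[s]) > ways:
                    cache[s].popitem(last=False)      # evict least-recently-used line
                if p == passes - 1:
                    misses += 1                       # count misses on the reuse pass only
    return misses

# A 512-element row (4096-byte stride) maps every row of the column to the same
# set, so the column thrashes a single 4-way set.
bad = column_walk_misses(rows=128, row_len=512)
# Padding each row to 520 elements (65 lines) walks the column through all 64
# sets, so the whole column fits and the reuse pass hits every time.
good = column_walk_misses(rows=128, row_len=520)
```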

  11. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one main problem in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model in distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained since the first deployment moments.

  12. Broadcasted Location-Aware Data Cache for Vehicular Application

    Directory of Open Access Journals (Sweden)

    Fukuda Akira

    2007-01-01

    Full Text Available There has been increasing interest in the exploitation of advances in information technology, for example, mobile computing and wireless communications in ITS (intelligent transport systems). Classes of applications that can benefit from such an infrastructure include traffic information, roadside businesses, weather reports, entertainment, and so on. There are several wireless communication methods currently available that can be utilized for vehicular applications, such as cellular phone networks, DSRC (dedicated short-range communication), and digital broadcasting. While a cellular phone network is relatively slow and DSRC covers only a very small communication area, the one-segment digital terrestrial broadcasting service launched in Japan in 2006 has recently made high-performance digital broadcasting available to mobile hosts. However, broadcast delivery methods have the drawback that clients need to wait for the required data items to appear on the broadcast channel. In this paper, we propose a new cache system to effectively prefetch and replace broadcast data using "scope" (an available area of location-dependent data) and "mobility specification" (a schedule according to the direction in which a mobile host moves). We numerically evaluate the cache system on a model close to the traffic road environment, and implement an emulation system to evaluate this location-aware data delivery method for a concrete vehicular application that delivers geographic road map data to a car navigation system.
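
    The scope/mobility-specification prefetch test can be sketched as follows. Representing scopes as axis-aligned rectangles, the `reach` parameter, and the function names are illustrative assumptions, not the paper's data model:

```python
def intersects(a, b):
    """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def should_cache(item_scope, waypoints, reach=1.0):
    """Prefetch a broadcast item only if its scope covers the neighborhood of
    some upcoming waypoint in the host's mobility specification."""
    for x, y in waypoints:
        if intersects(item_scope, (x - reach, y - reach, x + reach, y + reach)):
            return True
    return False

route = [(0, 0), (5, 0), (10, 0)]                  # planned positions of the vehicle
on_route = should_cache((4, -1, 6, 1), route)      # scope overlaps the route -> cache
off_route = should_cache((20, 20, 22, 22), route)  # scope far off-route -> skip
```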

  13. Improved Space Bounds for Cache-Oblivious Range Reporting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Zeh, Norbert

    2011-01-01

    We provide improved bounds on the size of cache-oblivious range reporting data structures that achieve the optimal query bound of O(log_B N + K/B) block transfers. Our first main result is an O(N √(log N log log N))-space data structure that achieves this query bound for 3-d dominance reporting and 2-d three-sided range reporting. No cache-oblivious o(N log N / log log N)-space data structure for these problems was known before, even when allowing a query bound of O(log^{O(1)} N + K/B) block transfers. Our result also implies improved space bounds for general 2-d and 3-d orthogonal range reporting. Our second main result shows that any cache-oblivious 2-d three-sided range reporting data structure with the optimal query bound has to use Ω(N log^ε N) space, thereby improving on a recent lower bound for the same problem. Using known transformations, the lower bound extends to 3-d dominance reporting and 3...

  15. Some practical aspects of reservoir management

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, M.L.; Young, M.A.; Cole, E.L.; Madden, M.P. [BDM-Oklahoma, Bartlesville, OK (United States)

    1996-09-01

    The practical essence of reservoir management is the optimal application of available resources (people, equipment, technology, and money) to maximize profitability and recovery. Success must include knowledge and consideration of (1) the reservoir system, (2) the technologies available, and (3) the reservoir management business environment. Two Reservoir Management Demonstration projects (one in a small, newly discovered field and one in a large, mature waterflood) implemented by the Department of Energy through BDM-Oklahoma illustrate the diversity of situations suited for reservoir management efforts. Project teams made up of experienced engineers, geoscientists, and other professionals arrived at an overall reservoir management strategy for each field. In 1993, Belden & Blake Corporation discovered a regionally significant oil reservoir (East Randolph Field) in the Cambrian Rose Run formation in Portage County, Ohio. Project objectives are to improve field operational economics and optimize oil recovery. The team focused on characterizing the reservoir geology and analyzing primary production and reservoir data to develop simulation models. Historical performance was simulated and predictions were made to assess infill drilling, waterflooding, and gas repressurization. The Citronelle Field, discovered in 1955 in Mobile County, Alabama, has produced 160 million barrels from fluvial sandstones of the Cretaceous Rodessa formation. Project objectives are to address improving recovery through waterflood optimization and problems related to drilling, recompletions, production operations, and regulatory and environmental issues. Initial efforts focused on defining specific problems and on defining a geographic area within the field where solutions might best be pursued. Geologic and reservoir models were used to evaluate past performance and to investigate improved recovery operations.

  16. Advanced oil recovery technologies for improved recovery from slope basin clastic reservoirs, Nash Draw Brushy Canyon Pool, Eddy County, NM. Quarterly technical progress report, April 1, 1996--June 30, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, M.B.

    1996-07-26

    The overall objective of this project is to demonstrate that a development program based on advanced reservoir management methods can significantly improve oil recovery. The demonstration plan includes developing a control area using standard reservoir management techniques and comparing the performance of the control area with an area developed using advanced reservoir management methods. Specific goals to attain the objective are: (1) to demonstrate that a development drilling program and pressure maintenance program, based on advanced reservoir management methods, can significantly improve oil recovery compared with existing technology applications, and (2) to transfer the advanced methodologies to oil and gas producers in the Permian Basin and elsewhere in the U.S. oil and gas industry.

  17. Hydrography, HydroBndy: The data set is a line feature class representing the outlines of ponds and small reservoirs. It consists of more than 150 lines representing natural and engineered surface water bodies. Published in 2005, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Hydrography dataset current as of 2005. HydroBndy: The data set is a line feature class representing the outlines of ponds and small reservoirs. It consists of more...

  18. Advanced oil recovery technologies for improved recovery from slope basin clastic reservoirs, Nash Draw Brushy Canyon Pool, Eddy County, New Mexico. Annual report, September 25, 1995--September 24, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, M.B.

    1997-08-01

    The basic driver for this project is the low recovery observed in Delaware reservoirs, such as the Nash Draw Pool (NDP). This low recovery is caused by low reservoir energy, less than optimum permeabilities and porosities, and inadequate reservoir characterization and reservoir management strategies which are typical of projects operated by independent producers. Rapid oil decline rates and high gas/oil ratios are typically observed in the first year of primary production. Based on the production characteristics that have been observed in similar Delaware fields, pressure maintenance is a likely requirement at the Nash Pool. Three basic constraints to producing the Nash Draw Brushy Canyon Reservoir are: (1) limited areal and interwell geologic knowledge, (2) lack of an engineering tool to evaluate the various producing strategies, and (3) limited surface access prohibiting development with conventional drilling. The limited surface access is caused by the proximity of underground potash mining and surface playa lakes. The objectives of this project are: (1) to demonstrate that a development drilling program and pressure maintenance program, based on advanced reservoir management methods, can significantly improve oil recovery compared with existing technology applications and (2) to transfer these advanced methodologies to oil and gas producers, especially in the Permian Basin.

  19. Client-Driven Joint Cache Management and Rate Adaptation for Dynamic Adaptive Streaming over HTTP

    Directory of Open Access Journals (Sweden)

    Chenghao Liu

    2013-01-01

    Full Text Available Because proxy-driven proxy cache management and the client-driven streaming solution of Dynamic Adaptive Streaming over HTTP (DASH) are two independent processes, some difficulties and challenges arise in media data management at the proxy cache and rate adaptation at the DASH client. This paper presents a novel client-driven joint proxy cache management and DASH rate adaptation method, named CLICRA, which moves prefetching intelligence from the proxy cache to the client. Based on the philosophy of CLICRA, this paper proposes a rate adaptation algorithm, which selects bitrates for the next media segments to be requested by using the predicted buffered media time in the client. CLICRA is realized by conveying information on the segments that are likely to be fetched subsequently to the proxy cache so that it can use the information for prefetching. Simulation results show that the proposed method outperforms the conventional segment-fetch-time-based rate adaptation and the proxy-driven proxy cache management significantly, not only in streaming quality at the client but also in bandwidth and storage usage in proxy caches.
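
    Selecting the next segment's bitrate from the predicted buffered media time can be sketched like this. The thresholds, segment duration, and function names are assumptions for illustration, not CLICRA's exact algorithm:

```python
def select_bitrate(bitrates_bps, buffered_s, throughput_bps,
                   seg_duration_s=2.0, min_buffer_s=4.0):
    """Pick the highest bitrate whose predicted post-download buffer stays above
    a safety threshold; fall back to the lowest bitrate otherwise."""
    for rate in sorted(bitrates_bps, reverse=True):
        fetch_s = rate * seg_duration_s / throughput_bps   # segment download time
        predicted = buffered_s - fetch_s + seg_duration_s  # buffer after download
        if predicted >= min_buffer_s:
            return rate
    return min(bitrates_bps)

rates = [500_000, 1_000_000, 3_000_000]
high = select_bitrate(rates, buffered_s=6.0, throughput_bps=2_000_000)  # ample buffer
low = select_bitrate(rates, buffered_s=3.0, throughput_bps=2_000_000)   # tight buffer
```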

  20. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    Science.gov (United States)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation examines how earth caches influence the learning process in geoscience within non-formal education. The development of mobile technologies that use Global Positioning System (GPS) data to pinpoint geographical locations, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the GPS receiver in a smartphone, learners can go directly into nature, retrieve information on their smartphone, and learn something about the environment. Earth caches, which are organized and supervised geocaches with special information about physical-geography highlights, are a very good opportunity for this: interested people can inform themselves about geoscience topics through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, the focus was on Bavaria and a certain type of earth cache. Finally, the authors show the limits and potentials of earth caches and offer some remarks for the future.

  1. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize flash write operations placement, as large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait for grouping more pages before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference with the ideal page-mapped FTL is less than 15% in most cases and has a mean of 4% for the best CACH-FTL configurations, while using at least 78% less memory for table mapping storage on RAM.
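
    The threshold-based placement rule can be sketched as follows. The class name, the default threshold value, and the migration rule are illustrative assumptions, not CACH-FTL's internals:

```python
from collections import defaultdict

class CachFtlSketch:
    """Place large flushed page groups straight into the block-mapped region
    (BMR); buffer small groups page-mapped (PMR) until enough pages of the
    same logical block have accumulated, then migrate the block."""

    def __init__(self, threshold=16):
        self.threshold = threshold
        self.pmr = defaultdict(set)   # block id -> pages currently page-mapped
        self.bmr = set()              # block ids written block-mapped

    def flush(self, block, pages):
        """Handle a group of pages of one logical block flushed from the cache."""
        if len(pages) >= self.threshold:
            self.pmr.pop(block, None)
            self.bmr.add(block)                     # cheap whole-block write
            return "BMR"
        self.pmr[block] |= set(pages)               # buffer the small group
        if len(self.pmr[block]) >= self.threshold:  # enough pages grouped: migrate
            del self.pmr[block]
            self.bmr.add(block)
            return "PMR->BMR"
        return "PMR"
```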

  2. A morphometric assessment of the intended function of cached Clovis points.

    Directory of Open Access Journals (Sweden)

    Briggs Buchanan

    Full Text Available A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.

  3. A morphometric assessment of the intended function of cached Clovis points.

    Science.gov (United States)

    Buchanan, Briggs; Kilby, J David; Huckell, Bruce B; O'Brien, Michael J; Collard, Mark

    2012-01-01

    A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons.

  4. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    Science.gov (United States)

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  5. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As the segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace the unused segments under the interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
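
    A minimal sketch of a two-layer segment cache with admission control, assuming LRU in each tier and a simple request-count admission rule. The tier sizes, the fixed split, and the admission rule are assumptions; the paper's cache adapts the layer sizes dynamically:

```python
from collections import OrderedDict

class TwoTierCache:
    """Tier 1 holds to-be-played segments; tier 2 holds possibly replayed
    segments demoted from tier 1; rarely requested segments are not admitted."""

    def __init__(self, t1_size=4, t2_size=4, min_hits=2):
        self.t1 = OrderedDict()   # to-be-played segments (LRU order)
        self.t2 = OrderedDict()   # possibly-played segments (LRU order)
        self.freq = {}            # request counts for admission control
        self.t1_size, self.t2_size, self.min_hits = t1_size, t2_size, min_hits

    def request(self, seg):
        """Return True on a cache hit; admit/demote/evict on a miss."""
        self.freq[seg] = self.freq.get(seg, 0) + 1
        for tier in (self.t1, self.t2):
            if seg in tier:
                tier.move_to_end(seg)
                return True
        if self.freq[seg] >= self.min_hits:       # admission control
            self.t1[seg] = True
            if len(self.t1) > self.t1_size:       # demote oldest tier-1 segment
                old, _ = self.t1.popitem(last=False)
                self.t2[old] = True
                if len(self.t2) > self.t2_size:   # evict from tier 2
                    self.t2.popitem(last=False)
        return False
```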

  6. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    Science.gov (United States)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate, backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours, and service them to the edge at peak periods. To intelligently prefetch, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allow for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
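
    A toy version of the Q-learning caching idea, reduced to one cached file and a three-state Markov request chain. The transition matrix, learning parameters, and reward are illustrative assumptions; the paper's MDP, costs, and linear function approximation are far richer:

```python
import random

random.seed(0)

F = 3                          # number of files
P = [[0.8, 0.1, 0.1],          # assumed Markov dynamics of user requests:
     [0.1, 0.8, 0.1],          # the currently requested file tends to repeat
     [0.1, 0.1, 0.8]]
Q = [[0.0] * F for _ in range(F)]   # Q[state][file cached for the next request]

alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    # epsilon-greedy choice of which file to prefetch into the cache
    if random.random() < eps:
        action = random.randrange(F)
    else:
        action = max(range(F), key=lambda a: Q[state][a])
    nxt = random.choices(range(F), weights=P[state])[0]   # next user request
    reward = 1.0 if action == nxt else 0.0                # 1 on a cache hit
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

# Learned policy: which file to cache in each popularity state.
policy = [max(range(F), key=lambda a: Q[s][a]) for s in range(F)]
```

    With the strongly self-repeating dynamics above, the learned policy caches the currently requested file in every state.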

  7. Value-Based Caching in Information-Centric Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Fadi M. Al-Turjman

    2017-01-01

    Full Text Available We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic requests, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for most valuable and difficult to retrieve data in the WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
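
    A VoI-style value function over the four named parameters, used to pick an eviction victim, might look as follows. The weights, functional form, and example readings are illustrative assumptions, not the paper's formula:

```python
def value(age_s, popularity, interference_cost, active_time_s,
          w=(1.0, 1.0, 1.0, 1.0)):
    """Combine the four parameters into one value: newer, more popular, and
    harder-to-retrieve (costly to fetch or to re-sense) data scores higher."""
    freshness = 1.0 / (1.0 + age_s)            # periodically refreshed data decays
    return (w[0] * freshness
            + w[1] * popularity                # on-demand request popularity
            + w[2] * interference_cost         # costly-to-fetch data is kept
            + w[3] * active_time_s)            # costly-to-sense data likewise

def evict(cache):
    """Evict and return the key with the lowest value of information."""
    victim = min(cache, key=lambda k: value(*cache[k]))
    del cache[victim]
    return victim

cache = {                      # (age_s, popularity, interference, active_time_s)
    "ecg":  (2.0, 0.9, 0.5, 3.0),
    "temp": (60.0, 0.1, 0.1, 0.2),
    "spo2": (5.0, 0.6, 0.4, 1.0),
}
victim = evict(cache)          # the stale, unpopular, cheap reading goes first
```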

  8. Jacobo Machover, La face cachée du Che

    OpenAIRE

    Boisard, Stéphane

    2013-01-01

    Attacking myths is a Promethean task, and Jacobo Machover, in his book La face cachée du Che, learns this the hard way. While the author deserves credit for casting an unindulgent eye on the emblematic figure of Ernesto Guevara de la Serna, the failure of his undertaking must also be examined. In his defense, and as confirmed by the comments (whether laudatory or abusive) that the book has prompted, it is not easy to offer a critical reading ...

  9. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems (orthogonal line segment intersection reporting, batched range reporting, and related problems) more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  10. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    Science.gov (United States)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as the "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  11. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    ...servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays in the L2 cache. Experiments show that under high concurrency, our optimizations improve the throughput of TUX by up to 40% and the number of requests serviced at the time of failure by 21%.

  12. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  13. Caregiver–Recipient Closeness and Symptom Progression in Alzheimer Disease. The Cache County Dementia Progression Study

    Science.gov (United States)

    Piercy, Kathleen W.; Rabins, Peter V.; Green, Robert C.; Breitner, John C. S.; Østbye, Truls; Corcoran, Christopher; Welsh-Bohmer, Kathleen A.; Lyketsos, Constantine G.; Tschanz, JoAnn T.

    2009-01-01

    Applying Rusbult's investment model of dyadic relationships, we examined the effect of caregiver–care recipient relationship closeness (RC) on cognitive and functional decline in Alzheimer's disease. After diagnosis, 167 participants completed up to six visits, observed over an average of 20 months. Participants were 64% women, had a mean age of 86 years, and a mean dementia duration of 4 years. Caregiver-rated closeness was measured using a six-item scale. In mixed models adjusted for dementia severity, dyads with higher levels of closeness and dyads with spouse caregivers (p = .01) had slower cognitive decline. The effect of higher RC on functional decline was greater with spouse caregivers (p = .007). These findings of attenuated Alzheimer's dementia (AD) decline with closer relationships, particularly with spouse caregivers, are consistent with investment theory. Future interventions designed to enhance the caregiving dyadic relationship may help slow decline in AD. PMID:19564210

  14. Predictors of dementia caregiver depressive symptoms in a population: the Cache County dementia progression study.

    Science.gov (United States)

    Piercy, Kathleen W; Fauth, Elizabeth B; Norton, Maria C; Pfister, Roxane; Corcoran, Chris D; Rabins, Peter V; Lyketsos, Constantine; Tschanz, JoAnn T

    2013-11-01

    Previous research has consistently reported elevated rates of depressive symptoms in dementia caregivers, but mostly with convenience samples. This study examined rates and correlates of depression at the baseline visit of a population sample of dementia caregivers (N = 256). Using a modified version of Williams (Williams, I. C. [2005]. Emotional health of black and white dementia caregivers: A contextual examination. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 60, P287-P295) ecological contextual model, we examined 5 contexts that have contributed to dementia caregiver depression. A series of linear regressions were performed to determine correlates of depression. Rates of depressive symptoms were lower than those reported in most convenience studies. We found fewer depressive symptoms in caregivers with higher levels of education and larger social support networks, fewer health problems, greater likelihood of using problem-focused coping, and less likelihood of wishful thinking and with fewer behavioral disturbances in the persons with dementia. These results suggest that depression may be less prevalent in populations of dementia caregivers than in clinic-based samples, but that the correlates of depression are similar for both population and convenience samples. Interventions targeting individuals with small support networks, emotion-focused coping styles, poorer health, low quality of life, and those caring for persons with higher numbers of behavioral problems need development and testing.

  15. Application of integrated reservoir management and reservoir characterization to optimize infill drilling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    This project has used a multi-disciplinary approach employing geology, geophysics, and engineering to conduct advanced reservoir characterization and management activities to design and implement an optimized infill drilling program at the North Robertson (Clearfork) Unit in Gaines County, Texas. The activities during the first Budget Period consisted of developing an integrated reservoir description from geological, engineering, and geostatistical studies, and using this description for reservoir flow simulation. Specific reservoir management activities were identified and tested. The geologically targeted infill drilling program currently being implemented is a result of this work. A significant contribution of this project is to demonstrate the use of cost-effective reservoir characterization and management tools that will be helpful to both independent and major operators for the optimal development of heterogeneous, low permeability shallow-shelf carbonate (SSC) reservoirs. The techniques that are outlined for the formulation of an integrated reservoir description apply to all oil and gas reservoirs, but are specifically tailored for use in the heterogeneous, low permeability carbonate reservoirs of West Texas.

  16. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  17. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    Full Text Available In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using the Web APIs of different Internet services, such as Google Drive, Dropbox, and Facebook.

  18. Prospective thinking in a mustelid? Eira barbara (Carnivora) cache unripe fruits to consume them once ripened

    Science.gov (United States)

    Soley, Fernando G.; Alvarado-Díaz, Isaías

    2011-08-01

    The ability of nonhuman animals to project individual actions into the future is a hotly debated topic. We describe the caching behaviour of tayras ( Eira barbara) based on direct observations in the field, pictures from camera traps and radio telemetry, providing evidence that these mustelids pick and cache unripe fruit for future consumption. This is the first reported case of harvesting of unripe fruits by a nonhuman animal. Ripe fruits are readily taken by a variety of animals, and tayras might benefit by securing a food source before strong competition takes place. Unripe climacteric fruits need to be harvested when mature to ensure that they continue their ripening process, and tayras accurately choose mature stages of these fruits for caching. Tayras cache both native (sapote) and non-native (plantain) fruits that differ in morphology and developmental timeframes, showing sophisticated cognitive ability that might involve highly developed learning abilities and/or prospective thinking.

  19. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  20. A novel coordinated edge caching with request filtration in radio access network.

    Science.gov (United States)

    Li, Yang; Xu, Yuemei; Lin, Tao; Wang, Xiaohui; Ci, Song

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches so that contents are stored in a coordinated manner, in order to increase the overall mobile network capacity and support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a RAN. Request filtration makes the best use of the limited bandwidth and in turn ensures the good performance of the coordinated caching. Moreover, storage at the mobile devices is also used to further reduce backhaul traffic and improve the user experience. In addition, we derive the optimal cache division with the aim of reducing the average latency perceived by users. The simulation results show that the proposed scheme outperforms existing algorithms.
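The request-filtration idea in this abstract (forward only the first miss for a content over the backhaul, queue duplicate requests, and serve all waiting users together when the content arrives) can be sketched as follows; the class and its bookkeeping are illustrative assumptions, not the paper's design.

```python
class FilteringCache:
    """Toy base-station cache with request filtration: duplicate requests for
    content already being fetched are queued rather than re-sent over the
    backhaul, and all waiting users are served together (an asynchronous
    multicast) when the content arrives."""

    def __init__(self):
        self.cache = set()
        self.pending = {}          # content -> list of waiting users
        self.backhaul_fetches = 0  # backhaul transmissions actually issued

    def request(self, user, content):
        if content in self.cache:
            return f"hit:{content}->{user}"
        if content in self.pending:            # filtered: a fetch is in flight
            self.pending[content].append(user)
            return f"queued:{content}->{user}"
        self.pending[content] = [user]         # first miss: fetch exactly once
        self.backhaul_fetches += 1
        return f"fetching:{content}->{user}"

    def arrive(self, content):
        """Content arrives from the backhaul; multicast to all waiting users."""
        users = self.pending.pop(content, [])
        self.cache.add(content)
        return users

bs = FilteringCache()
bs.request("u1", "video42")
bs.request("u2", "video42")   # filtered, not re-fetched
bs.request("u3", "video42")
print(bs.arrive("video42"))   # ['u1', 'u2', 'u3'] served in one multicast
print(bs.backhaul_fetches)    # 1
```

Three concurrent requests cost a single backhaul transmission, which is the bandwidth saving the abstract attributes to filtration combined with asynchronous multicast.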

  1. Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammadreza Azimi

    2017-07-01

    Full Text Available The storage of frequently requested multimedia content at small-cell base stations (BSs can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

  2. dCache: Big Data storage for HEP communities and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Millar, A. P. [DESY; Behrmann, G. [Unlisted, DK; Bernardt, C. [DESY; Fuhrmann, P. [DESY; Litvintsev, D. [Fermilab; Mkrtchyan, T. [DESY; Petersen, A. [DESY; Rossi, A. [Fermilab; Schwank, K. [DESY

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies, offering new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as alternatives to SRM for managing data, and integration with alternative authentication mechanisms.

  3. A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network

    OpenAIRE

    Li, Yang; Xu, Yuemei; Lin, Tao; Wang, Xiaohui; Ci, Song

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches so that contents are stored in a coordinated manner, in order to increase the overall mobile network capacity and support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a R...

  4. A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2016-01-01

    Full Text Available While non-volatile memories (NVMs) provide high density and low leakage, they also have low write-endurance. This, along with the write-variation introduced by cache management policies, can lead to a very short cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called the HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with lower latency and energy. This also reduces the number of writes to the NVM cache, which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from the SPEC2006 suite. We observe that ENLIVE provides a larger improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime for two-, four- and eight-core systems, respectively. In addition, it works well for a range of system and algorithm parameters and incurs only a small overhead.
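The write-minimization mechanism this abstract describes (detect frequently written blocks and redirect them to a small SRAM HotStore so they stop wearing the NVM) can be sketched as below. The promotion threshold and counters are illustrative assumptions; the paper's actual microarchitectural schemes are more involved.

```python
class HotStoreCache:
    """Sketch of the ENLIVE idea: blocks written more than a threshold number
    of times are promoted into a small SRAM "HotStore", so their further
    writes no longer reach (and wear out) the NVM cache."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.write_count = {}   # block -> writes seen so far
        self.hot_store = set()  # blocks promoted to SRAM
        self.nvm_writes = 0     # writes actually hitting the NVM cache

    def write(self, block):
        if block in self.hot_store:
            return "sram"                       # absorbed by the HotStore
        self.write_count[block] = self.write_count.get(block, 0) + 1
        self.nvm_writes += 1
        if self.write_count[block] >= self.hot_threshold:
            self.hot_store.add(block)           # promote the hot block
        return "nvm"

c = HotStoreCache(hot_threshold=3)
for _ in range(10):
    c.write(0xA)     # hot block: first 3 writes hit the NVM, the rest go to SRAM
c.write(0xB)         # cold block stays in the NVM
print(c.nvm_writes)  # 4
```

Ten writes to the hot block cost the NVM only three, which is the lifetime-extending effect the abstract reports in terms of write reduction.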

  5. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most-visited locations and proactively pushes cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
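The second process above relies on a Markov chain over user locations to decide what to push into a user's cache. A minimal sketch of that kind of model is given below; the first-order transition counts and the top-k push rule are assumptions for illustration, not the paper's algorithm.

```python
from collections import Counter, defaultdict

def build_transition_model(trajectories):
    """First-order Markov model over visited locations: count how often each
    location follows another across the observed user trajectories."""
    counts = defaultdict(Counter)
    for path in trajectories:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return counts

def push_candidates(model, current, k=2):
    """Top-k most likely next locations: a proxy could proactively push the
    query answers for these into the user's cache."""
    return [loc for loc, _ in model[current].most_common(k)]

trips = [["home", "cafe", "office"],
         ["home", "cafe", "gym"],
         ["home", "cafe", "office"]]
model = build_transition_model(trips)
print(push_candidates(model, "cafe"))  # ['office', 'gym']
```

Because the answers arrive from a cache proxy before the user actually queries from the predicted location, the LBS server never sees a precise, timely position report, which is what makes cache pushing useful for privacy.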

  6. Milestone Report - Level-2 Milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache

    Energy Technology Data Exchange (ETDEWEB)

    Shoopman, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Description: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015 subsystems with 6PB of disk cache were deployed for production use in LLNL’s unclassified HPSS environment. Following that, on September 23, 2015 subsystems with 9 PB of disk cache were deployed for production use in LLNL’s classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL’s unclassified and classified HPSS environments, and only the newly deployed systems were in use.

  7. Resource assessment of low- and moderate-temperature geothermal waters in Calistoga, Napa County, California. Report of the second year, 1979 to 1980 of the US Department of Energy-California State-Coupled Program for reservoir assessment and confirmation

    Energy Technology Data Exchange (ETDEWEB)

    Youngs, L.G.; Bacon, C.F.; Chapman, R.H.; Chase, G.W.; Higgins, C.T.; Majmundar, H.H.; Taylor, G.C.

    1980-11-10

    Statewide assessment studies included updating and completing the USGS GEOTHERM File for California and compiling all data needed for a California Geothermal Resources Map. Site specific assessment studies included a program to assess the geothermal resource at Calistoga, Napa County, California. The Calistoga effort was comprised of a series of studies involving different disciplines, including geologic, hydrologic, geochemical and geophysical studies.

  8. Data Locality via Coordinated Caching for Distributed Processing

    Science.gov (United States)

    Fischer, M.; Kuehn, E.; Giffels, M.; Jung, C.

    2016-10-01

    To enable data locality, we have developed an approach of adding coordinated caches to existing compute clusters. Since the data stored locally is volatile and selected dynamically, only a fraction of local storage space is required. Our approach allows the degree of data locality to be freely selected. It may be used in conjunction with large network bandwidths, providing only highly used data to reduce peak loads. Alternatively, local storage may be scaled up to perform data analysis even with low network bandwidth. To prove the applicability of our approach, we have developed a prototype implementing all required functionality. It integrates seamlessly into batch systems, requiring practically no adjustments by users. We have now been actively using this prototype on a test cluster for HEP analyses. Specifically, it has been integral to our jet energy calibration analyses for CMS during Run 2. The system has proven to be easily usable, while providing substantial performance improvements. Having confirmed the applicability for our use case, we have investigated the design in a more general way. Simulations show that many infrastructure setups can benefit from our approach. For example, it may enable us to dynamically provide data locality in opportunistic cloud resources. The experience we have gained from our prototype enables us to realistically assess the feasibility for general production use.

  9. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    Science.gov (United States)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, the CPU and GPU are integrated on the same chip, which poses a new challenge for last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently and share the last-level cache (LLC). CPU and GPU have different memory access characteristics and therefore differ in their sensitivity to LLC capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. In contrast, GPU applications can tolerate increased memory access latency when there is sufficient thread-level parallelism. Exploiting this latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving more LLC space for CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache-sensitive and the GPU application is cache-insensitive, the overall performance of the system improves significantly.
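The bypass policy this abstract describes can be illustrated with a toy shared-LLC model: GPU requests skip the cache and go straight to memory, so a cache-sensitive CPU working set is never evicted by the GPU stream. The LRU cache and the counters below are illustrative assumptions, not the paper's simulator.

```python
class SharedLLC:
    """Toy model of a shared last-level cache where GPU requests bypass the
    LLC (tolerating the extra latency) and CPU requests use plain LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = []          # LRU order: most recently used last
        self.cpu_hits = self.cpu_misses = self.gpu_mem_accesses = 0

    def access(self, addr, source):
        if source == "gpu":      # bypass: go directly to memory
            self.gpu_mem_accesses += 1
            return "memory"
        if addr in self.lines:   # CPU request: normal LRU lookup
            self.lines.remove(addr)
            self.lines.append(addr)
            self.cpu_hits += 1
            return "hit"
        self.cpu_misses += 1
        if len(self.lines) >= self.capacity:
            self.lines.pop(0)    # evict the least recently used line
        self.lines.append(addr)
        return "miss"

llc = SharedLLC(capacity=2)
for addr in [1, 2, 1, 2]:        # small CPU working set fits in the LLC
    llc.access(addr, "cpu")
for addr in [100, 101, 102]:     # GPU stream never pollutes the LLC
    llc.access(addr, "gpu")
print(llc.cpu_hits, llc.gpu_mem_accesses)  # 2 3
```

Without the bypass, the three GPU accesses would have evicted both CPU lines from this tiny cache and turned the CPU hits into misses, which is exactly the degradation the abstract says cache-sensitive CPU applications suffer.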

  10. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory (SPM) allocation strategy to optimize the energy consumption of the memory sub-system. First, the whole program execution is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is employed, which avoids a time-consuming linearization process, to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages which would cause severe cache conflicts within a time slot to the SPM. To minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduces the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach obtains 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.

  11. Will video caching remain energy efficient in future core optical networks?

    Directory of Open Access Journals (Sweden)

    Niemah Izzeldin Osman

    2017-02-01

    Full Text Available Optical networks are expected to cater for the future Internet due to the high speed and capacity that they offer. Caching in the core network has proven to reduce power usage for various video services in current optical networks. This paper investigates whether video caching will remain power efficient in future optical networks. The study compares the power consumption of caching in a current IP over WDM core network to that in a future network, and considers a number of features to exemplify future networks. Future optical networks are considered where: (1) network devices consume less power, (2) network devices have sleep-mode capabilities, (3) IP over WDM implements lightpath bypass, and (4) the demand for video content significantly increases and high-definition video dominates. Results show that video caching in future optical networks saves up to 42% of power consumption even when the power consumption of transport falls. These results suggest that video caching is expected to remain a green option for video services in the future Internet.

  12. Geothermal development plan: Maricopa county

    Energy Technology Data Exchange (ETDEWEB)

    White, D.H.

    1981-01-01

    Maricopa county is the area of Arizona receiving top priority since it contains over half of the state's population. The county is located entirely within the Basin and Range physiographic region in which geothermal resources are known to occur. Several approaches were taken to match potential users to geothermal resources. One approach involved matching some of the largest facilities in the county to nearby geothermal resources. Other approaches involved identifying industrial processes whose heat requirements are less than the average assessed geothermal reservoir temperature of 110/sup 0/C (230/sup 0/F). Since many of the industries are located on or near geothermal resources, geothermal energy potentially could be adapted to many industrial processes.

  13. Reservoir fisheries of Asia

    International Nuclear Information System (INIS)

    Silva, S.S. De.

    1990-01-01

    At a workshop on reservoir fisheries research, papers were presented on the limnology of reservoirs, the changes that follow impoundment, fisheries management and modelling, and fish culture techniques. Separate abstracts have been prepared for three papers from this workshop

  14. Large reservoirs: Chapter 17

    Science.gov (United States)

    Miranda, Leandro E.; Bettoli, Phillip William

    2010-01-01

    Large impoundments, defined as those with surface area of 200 ha or greater, are relatively new aquatic ecosystems in the global landscape. They represent important economic and environmental resources that provide benefits such as flood control, hydropower generation, navigation, water supply, commercial and recreational fisheries, and various other recreational and esthetic values. Construction of large impoundments was initially driven by economic needs, and ecological consequences received little consideration. However, in recent decades environmental issues have come to the forefront. In the closing decades of the 20th century societal values began to shift, especially in the developed world. Society is no longer willing to accept environmental damage as an inevitable consequence of human development, and it is now recognized that continued environmental degradation is unsustainable. Consequently, construction of large reservoirs has virtually stopped in North America. Nevertheless, in other parts of the world construction of large reservoirs continues. The emergence of systematic reservoir management in the early 20th century was guided by concepts developed for natural lakes (Miranda 1996). However, we now recognize that reservoirs are different and that reservoirs are not independent aquatic systems inasmuch as they are connected to upstream rivers and streams, the downstream river, other reservoirs in the basin, and the watershed. Reservoir systems exhibit longitudinal patterns both within and among reservoirs. Reservoirs are typically arranged sequentially as elements of an interacting network, filter water collected throughout their watersheds, and form a mosaic of predictable patterns. Traditional approaches to fisheries management such as stocking, regulating harvest, and in-lake habitat management do not always produce desired effects in reservoirs. As a result, managers may expend resources with little benefit to either fish or fishing. 
Some locally

  15. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    Science.gov (United States)

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is written through the first-level cache to the second-level cache. After the write-through, the corresponding line is deleted from the first-level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second-level cache. The second-level cache keeps track of multiple versions of data where more than one speculative thread is running in parallel, while the first-level cache does not hold any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first-level cache.
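The evict-on-write behavior described in this patent abstract can be sketched with a two-level cache model: a speculative write passes through the L1 to the L2 and then drops the line from the L1, so later reads must come from the version-tracking L2. This is a simplified single-version sketch; the real L2 keeps multiple speculative versions.

```python
class SpeculativeL1:
    """Toy two-level cache where a speculative write is written through to
    the L2 and the written line is then evicted from the L1, keeping the L1
    "speculation blind"."""

    def __init__(self):
        self.l1 = {}   # addr -> value
        self.l2 = {}   # addr -> value (stands in for the versioned L2)

    def write(self, addr, value, speculative):
        self.l2[addr] = value          # write-through to the L2
        if speculative:
            self.l1.pop(addr, None)    # evict on write: L1 holds no versions
        else:
            self.l1[addr] = value

    def read(self, addr):
        if addr in self.l1:
            return ("L1", self.l1[addr])
        value = self.l2[addr]          # miss: fetch from the L2
        self.l1[addr] = value
        return ("L2", value)

cache = SpeculativeL1()
cache.write(0x10, 7, speculative=False)
print(cache.read(0x10))                  # ('L1', 7)
cache.write(0x10, 8, speculative=True)   # line is evicted from the L1
print(cache.read(0x10))                  # ('L2', 8), served by the L2
```

Forcing the read back to the L2 is what lets the L2 arbitrate between data versions while speculation is in flight, as the abstract explains.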

  16. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper ... result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  17. Pixels grouping and shadow cache for faster integral 3D ray tracing

    Science.gov (United States)

    Youssef, Osama; Aggoun, Amar; Wolf, Wayne H.; McCormick, Malcolm

    2002-05-01

    This paper presents, for the first time, a theory for obtaining the optimum pixel grouping to improve coherence and the shadow cache in integral 3D ray tracing, in order to reduce execution time. A theoretical study of the number of shadow-cache hits with respect to the properties of the lenses and the shadow's size and location is presented, with analysis of three different styles of pixel grouping, in order to obtain the optimum grouping. The first style traces rows of pixels in the horizontal direction, the second traces similar pixels in adjacent lenses in the horizontal direction, and the third traces columns of pixels in the vertical direction. The optimum grouping is a combination of all three, dependent upon the number of cache hits in each. Experimental results validate the theory, and tests on benchmark scenes show that up to a 37% improvement in execution time can be achieved by proper pixel grouping.
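The shadow-cache mechanism this abstract analyzes can be illustrated with a toy model: the cache remembers the last object that occluded a shadow ray, and a hit occurs when the next pixel's shadow ray is blocked by the same object, so the traversal order determines the hit count. The scene and the traversal orders below are made-up assumptions, not the paper's benchmarks.

```python
def shadow_cache_hits(order, occluder_of):
    """Count shadow-cache hits for a given pixel traversal order. The cache
    holds the last occluding object; a hit means the next pixel's shadow ray
    is blocked by the same cached object."""
    hits, cached = 0, None
    for pixel in order:
        occ = occluder_of[pixel]
        if occ is not None and occ == cached:
            hits += 1
        cached = occ
    return hits

# Toy 2x4 image: the left half is shadowed by object "A", the right half is lit.
occluder = {(r, c): ("A" if c < 2 else None) for r in range(2) for c in range(4)}
row_major = [(r, c) for r in range(2) for c in range(4)]  # rows, left to right
col_major = [(r, c) for c in range(4) for r in range(2)]  # columns, top down

print(shadow_cache_hits(row_major, occluder))  # 2
print(shadow_cache_hits(col_major, occluder))  # 3
```

In this toy scene the vertical (column) traversal keeps consecutive shadow rays inside the same shadowed region longer and thus scores more cache hits, mirroring the abstract's point that the best grouping depends on the hit count each traversal style achieves.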

  18. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    CERN Document Server

    Yang, W; The ATLAS collaboration; Mount, R

    2014-01-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses a long history of data-access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and in terms of its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.
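    A caching decision driven by a long history of access records, as described above, might in its simplest form look like the following sketch. The greedy frequency-ranked policy and all names are illustrative assumptions, not the SLAC system's actual algorithm.

```python
from collections import Counter

def select_files_to_cache(access_log, ssd_capacity, file_sizes):
    """Greedy sketch: cache the most frequently accessed files that fit.

    access_log: iterable of file names, one entry per historical read
    ssd_capacity: total SSD cache capacity in bytes
    file_sizes: dict mapping file name -> size in bytes
    """
    counts = Counter(access_log)
    cached, used = [], 0
    # Consider files in descending access frequency.
    for name, _ in counts.most_common():
        size = file_sizes[name]
        if used + size <= ssd_capacity:
            cached.append(name)
            used += size
    return cached

log = ["a", "b", "a", "c", "a", "b"]
sizes = {"a": 40, "b": 50, "c": 30}
print(select_files_to_cache(log, 100, sizes))  # → ['a', 'b']
```

A real deployment would also weight recency and input from the work-flow management system, but the capacity-constrained popularity ranking is the core of any such policy.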

  19. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    CERN Document Server

    Yang, W; The ATLAS collaboration; Mount, R

    2013-01-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses a long history of data-access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and in terms of its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.

  20. Simplifying and speeding the management of intra-node cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  1. Cache-Oblivious Search Trees via Binary Trees of Small Height

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Jacob, R.

    2002-01-01

    We propose a version of cache-oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight-balanced B-trees, and can be implemented as just a single array......, and range queries in worst case O(log_B n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n + O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache-oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory.
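    The van Emde Boas layout mentioned above can be sketched as follows: a perfect binary tree of height h is split into a top subtree of roughly half the height and bottom subtrees hanging off its leaves, each laid out contiguously and recursed into. This is a minimal illustration emitting BFS-numbered nodes (root = 1) in vEB memory order; the exact top/bottom height split (⌊h/2⌋ for the top here) is one common convention, and conventions vary.

```python
def veb_order(height):
    """Return the node labels (BFS indices, root = 1) of a perfect binary
    tree of the given height, in van Emde Boas memory order."""
    def layout(root, h):
        if h == 1:
            return [root]
        top_h = h // 2           # height of the top recursive subtree
        bot_h = h - top_h        # height of each bottom subtree
        order = layout(root, top_h)
        # Collect the leaves of the top subtree, left to right.
        leaves = [root]
        for _ in range(top_h - 1):
            leaves = [c for n in leaves for c in (2 * n, 2 * n + 1)]
        # Each leaf's two children root a bottom subtree, laid out next.
        for leaf in leaves:
            order += layout(2 * leaf, bot_h)
            order += layout(2 * leaf + 1, bot_h)
        return order
    return layout(1, height)

print(veb_order(3))  # → [1, 2, 4, 5, 3, 6, 7]
```

Placing each recursive subtree contiguously means a root-to-leaf search touches O(log_B n) memory blocks for any block size B, which is the cache-oblivious property the abstract relies on.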

  2. A minimum scale architecture for rover-based sample acquisition and caching

    Science.gov (United States)

    Backes, Paul; Younse, Paulo; Ganino, Anthony

    The Minimum Scale Sample Acquisition and Caching (MinSAC) architecture has been developed to enable rover-based sample acquisition and caching while minimizing the system mass. The MinSAC architecture is a version of the previously developed Integrated Mars Sample Acquisition and Handling (IMSAH) architecture. The MinSAC implementation utilizes the sampling manipulator both for sampling and sample tube transfer. This significantly reduces the number of actuators in the sample acquisition and caching subsystem. A core sample is acquired directly into its sample tube in the coring bit. The bit is transferred and released on the rover. A tube gripper on the robotic arm turret pulls the filled sample tube out of the back of the coring bit and the tube is sealed. The sample tube is then placed in the return sample canister. A new tube is placed in the bit for acquisition of another sample.

  3. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

    In this study, we developed hybrid control algorithms for smart base stations (SBSs) along with devised communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload computation from mobile user equipment and to cache data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy the different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  4. Memory for multiple cache locations and prey quantities in a food-hoarding songbird

    Directory of Open Access Journals (Sweden)

    Nicola eArmstrong

    2012-12-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in two, three, and four cache sites after retention intervals of 1, 10, and 60 seconds. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items across retention intervals of up to one minute without training.

  5. A Network-Aware Distributed Storage Cache for Data Intensive Environments

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, B.L.; Lee, J.R.; Johnston, W.E.; Crowley, B.; Holding, M.

    1999-12-23

    Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data at multiple sites around the world. The technologies, middleware services, and architectures used to build useful high-speed, wide-area distributed systems constitute the field of data-intensive computing. In this paper the authors describe an architecture for data-intensive applications in which a high-speed distributed data cache is used as a common element for all of the sources and sinks of data. This cache-based approach provides standard interfaces to a large, application-oriented, distributed, on-line, transient storage system. They describe their implementation of this cache, how they have made it network-aware, and how they perform dynamic load balancing based on current network conditions. They also show large increases in application throughput enabled by access to knowledge of the network conditions.
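    Network-aware load balancing of the kind described above might, at its simplest, rank candidate data sources by observed bandwidth discounted by current load. The score function and site names below are illustrative assumptions, not the paper's actual algorithm.

```python
def pick_source(replicas):
    """Pick the data source with the best throughput-to-load ratio.

    replicas: dict mapping site name -> (measured_bandwidth_MBps,
    active_transfers), as gathered by periodic network monitoring.
    """
    def score(site):
        bw, load = replicas[site]
        return bw / (load + 1)   # penalize sites already busy with transfers
    return max(replicas, key=score)

sites = {"lbl": (90.0, 2), "anl": (40.0, 0), "ornl": (120.0, 5)}
print(pick_source(sites))  # → anl (idle, so its effective bandwidth wins)
```

Refreshing the measurements periodically and re-ranking on each request is what makes the balancing dynamic with respect to current network conditions.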

  6. Fixed priority scheduling with pre-emption thresholds and cache-related pre-emption delays: integrated analysis and evaluation

    NARCIS (Netherlands)

    Bril, R.J.; Altmeyer, S.; van den Heuvel, M.M.H.P.; Davis, R.I.; Behnam, M.

    Commercial off-the-shelf programmable platforms for real-time systems typically contain a cache to bridge the gap between the processor speed and main memory speed. Because cache-related pre-emption delays (CRPD) can have a significant influence on the computation times of tasks, CRPD have been

  7. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    Science.gov (United States)

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  8. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  9. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  10. Using XRootD to provide caches for CernVM-FS

    CERN Document Server

    Domenighini, Matteo

    2017-01-01

    CernVM-FS recently added the possibility of using plugins for cache management. In order to investigate the capabilities and limits of this possibility, an XRootD plugin was written and benchmarked; as a byproduct, a POSIX plugin was also generated. The tests revealed that the plugin interface introduces no significant performance overhead; moreover, the XRootD plugin's performance was found to be worse than that of the built-in cache manager and the POSIX plugin. Further tests of the XRootD component revealed that its performance depends on the server's disk speed.

  11. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  12. Hydrologic data for the Cache Creek-Bear Thrust environmental impact statement near Jackson, Wyoming

    Science.gov (United States)

    Craig, G.S.; Ringen, B.H.; Cox, E.R.

    1981-01-01

    Information on the quantity and quality of surface and ground water in an area of concern for the Cache Creek-Bear Thrust Environmental Impact Statement in northwestern Wyoming is presented without interpretation. The environmental impact statement is being prepared jointly by the U.S. Geological Survey and the U.S. Forest Service and concerns proposed exploration and development of oil and gas on leased Federal land near Jackson, Wyoming. Information includes data from a gaging station on Cache Creek and from wells, springs, and miscellaneous sites on streams. Data include streamflow, chemical and suspended-sediment quality of streams, and the occurrence and chemical quality of ground water. (USGS)

  13. Education for sustainability and environmental education in National Geoparks. EarthCaching - a new method?

    Science.gov (United States)

    Zecha, Stefanie; Regelous, Anette

    2017-04-01

    National Geoparks are protected areas incorporating educational resources of great importance in promoting education for sustainable development, mobilizing knowledge inherent to the Earth sciences. Different methods can be used to implement education for sustainability. Here we present possibilities for National Geoparks to support sustainability, focusing on new media and EarthCaches, based on the data set of the "EarthCachers International EarthCaching" conference in Goslar in October 2015. Using an empirical study of our own design, we collected up-to-date information about the environmental consciousness of EarthCachers. The data set was analyzed using SPSS and statistical methods. We present the results and their consequences for National Geoparks.

  14. Stream, Lake, and Reservoir Management.

    Science.gov (United States)

    Dai, Jingjing; Mei, Ying; Chang, Chein-Chi

    2017-10-01

    This review on stream, lake, and reservoir management covers selected 2016 publications on the focus of the following sections: Stream, lake, and reservoir management • Water quality of stream, lake, and reservoirReservoir operations • Models of stream, lake, and reservoir • Remediation and restoration of stream, lake, and reservoir • Biota of stream, lake, and reservoir • Climate effect of stream, lake, and reservoir.

  15. Status of Wheeler Reservoir

    Energy Technology Data Exchange (ETDEWEB)

    1990-09-01

    This is one in a series of status reports prepared by the Tennessee Valley Authority (TVA) for those interested in the conditions of TVA reservoirs. This overview of Wheeler Reservoir summarizes reservoir purposes and operation, reservoir and watershed characteristics, reservoir uses and use impairments, and water quality and aquatic biological conditions. The information presented here is from the most recent reports, publications, and original data available. If no recent data were available, historical data were summarized. If data were completely lacking, environmental professionals with special knowledge of the resource were interviewed. 12 refs., 2 figs.

  16. Improved characterization of reservoir behavior by integration of reservoir performances data and rock type distributions

    Energy Technology Data Exchange (ETDEWEB)

    Davies, D.K.; Vessell, R.K. [David K. Davies & Associates, Kingwood, TX (United States); Doublet, L.E. [Texas A&M Univ., College Station, TX (United States)] [and others]

    1997-08-01

    An integrated geological/petrophysical and reservoir engineering study was performed for a large, mature waterflood project (>250 wells, ~80% water cut) at the North Robertson (Clear Fork) Unit, Gaines County, Texas. The primary goal of the study was to develop an integrated reservoir description for "targeted" (economic) 10-acre (4-hectare) infill drilling and future recovery operations in a low-permeability, carbonate (dolomite) reservoir. Integration of the results from geological/petrophysical studies and reservoir performance analyses provides a rapid and effective method for developing a comprehensive reservoir description. This reservoir description can be used for reservoir flow simulation, performance prediction, infill targeting, waterflood management, and for optimizing well developments (patterns, completions, and stimulations). The following analyses were performed as part of this study: (1) Geological/petrophysical analyses (core and well log data): "rock typing" based on qualitative and quantitative visualization of pore-scale features; reservoir layering based on "rock typing" and hydraulic flow units; and development of a "core-log" model to estimate permeability using porosity and other properties derived from well logs, where the core-log model is based on "rock types". (2) Engineering analyses (production and injection history, well tests): material balance decline type curve analyses to estimate total reservoir volume, formation flow characteristics (flow capacity, skin factor, and fracture half-length), and indications of well/boundary interference; and estimated ultimate recovery analyses to yield movable oil (or injectable water) volumes, as well as indications of well and boundary interference.

  17. Copyright aspects of caching: DIPPER (Digital Intellectual Property Practice Economic Report): legal report

    NARCIS (Netherlands)

    Hugenholtz, P.B.

    2000-01-01

    A study of the copyright aspects of (proxy and client) caching, written at the request of the European Commission within the framework of the Esprit programme. The emphasis is on current and future European and American law. Includes a section on the liability of Internet (access)

  18. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.

  19. OneService - Generic Cache Aggregator Framework for Service Depended Cloud Applications

    NARCIS (Netherlands)

    Tekinerdogan, B.; Oral, O.A.

    2017-01-01

    Current big data cloud systems often use different data migration strategies from providers to customers. This often results in increased bandwidth usage and thereby decreased performance. To enhance performance, caching mechanisms are often adopted. However, the implementations of these

  20. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and from industrial practitioners. In fact, the very notion of introducing randomness and probabilities into time-critical systems has caused strenuous debates, owing to the apparent clash between this idea and the strictly deterministic view traditionally held for those systems. A paper recently published in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should be provided also. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances in the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, making them, when used in conjunction with PTA, a serious competitor to conventional designs.

  1. On-chip COMA cache-coherence protocol for microgrids of microthreaded cores

    NARCIS (Netherlands)

    Zhang, L.; Jesshope, C.

    2008-01-01

    This paper describes an on-chip COMA cache coherency protocol to support the microthread model of concurrent program composition. The model gives a sound basis for building multi-core computers as it captures concurrency, abstracts communication and identifies resources, such as processor groups

  2. Greatly improved cache update times for conditions data with Frontier/Squid

    Science.gov (United States)

    Dykstra, Dave; Lueking, Lee

    2010-04-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
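    The If-Modified-Since mechanism described above can be sketched as follows. This is an illustrative reimplementation of the standard HTTP conditional-GET check, not Frontier's actual server code; the function name and the idea of a separately tracked modification time (e.g. via a PL/SQL trigger, as the abstract notes Oracle does not track it natively) are assumptions for the example.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime, format_datetime

def respond(last_modified, if_modified_since):
    """Sketch of the If-Modified-Since check a Frontier-style server performs.

    last_modified: tz-aware datetime of the table's last tracked change.
    if_modified_since: the client's If-Modified-Since header value, or None.
    Returns (status_code, response_headers_or_None).
    """
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_time:
            # Data unchanged since the client's copy: no body needed.
            return 304, None
    # Fresh response carries a Last-Modified stamp for future revalidation.
    return 200, {"Last-Modified": format_datetime(last_modified)}

last_change = datetime(2020, 5, 1, 12, 0, tzinfo=timezone.utc)
print(respond(last_change, "Fri, 01 May 2020 12:00:00 GMT")[0])  # → 304
print(respond(last_change, "Fri, 01 May 2020 11:00:00 GMT")[0])  # → 200
```

Because a 304 carries no body, Squid caches can revalidate conditions data cheaply and frequently, which is what brings the propagation time down without heavily loading the central servers.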

  3. Greatly improved cache update times for conditions data with Frontier/Squid

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave; Lueking, Lee; /Fermilab

    2009-05-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.

  4. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

    Embedded systems have stringent design constraints, so much prior research has focused on optimizing their energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal-management hardware. The embedded system's temperature not only affects the system's reliability, but can also affect its performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache-tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto-optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.
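    The Pareto-optimal configuration selection that tuning schemes like TaPT perform can be illustrated with a small sketch: keep only the (frequency, cache) configurations not dominated on all three axes. The configuration names and numbers below are invented for illustration.

```python
def pareto_front(configs):
    """Keep configurations not dominated in (exec_time, energy, temperature).

    configs: dict name -> (time, energy, temp); lower is better on all axes.
    """
    def dominated(a, b):
        # b dominates a: no worse on every axis, strictly better on at least one
        return (all(y <= x for x, y in zip(a, b))
                and any(y < x for x, y in zip(a, b)))
    return {n: v for n, v in configs.items()
            if not any(dominated(v, w) for m, w in configs.items() if m != n)}

cfgs = {
    "small-cache@800MHz": (12.0, 3.1, 55.0),  # slow but cool, low energy
    "big-cache@800MHz":   (10.0, 3.5, 58.0),
    "big-cache@1GHz":     (8.0, 4.0, 66.0),   # fast but hot
    "small-cache@1GHz":   (11.0, 4.2, 67.0),  # dominated by big-cache@1GHz
}
print(sorted(pareto_front(cfgs)))
```

A designer's temperature constraint or priority then simply picks one point from this front rather than recomputing the tradeoff.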

  5. I/O-Optimal Distribution Sweeping on Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodar; Zeh, Norbert

    2011-01-01

    The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framew...

  6. Sex, estradiol, and spatial memory in a food-caching corvid.

    Science.gov (United States)

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Performance Evaluation of Moving Small-Cell Network with Proactive Cache

    Directory of Open Access Journals (Sweden)

    Young Min Kwon

    2016-01-01

    Due to rapid growth in mobile traffic, mobile network operators (MNOs) are considering the deployment of moving small-cells (mSCs). An mSC is a user-centric network which provides voice and data services during mobility. mSCs can receive and forward data traffic via wireless backhaul and sidehaul links. In addition, due to the predictable nature of user demand, mSCs can proactively cache the predicted contents during off-peak-traffic periods. Due to these characteristics, MNOs consider mSCs a cost-efficient solution to not only enhance system capacity but also provide guaranteed quality of service (QoS) to moving user equipment (UE) in peak-traffic periods. In this paper, we conduct extensive system-level simulations to analyze the performance of mSCs with varying cache size and content popularity and their effect on wireless backhaul load. The performance evaluation confirms that the QoS of moving small-cell UE (mSUE) notably improves when mSCs are used together with proactive caching. We also show that effective use of the proactive cache significantly reduces the wireless backhaul load and increases the overall network capacity.

  8. A Class-Based Least-Recently-Used Caching Algorithm for WWW Proxies Proceedings

    NARCIS (Netherlands)

    Khayari el Abdouni, Rachid; Sadre, R.; Haverkort, Boudewijn R.H.M.; Kemper, P.; Sanders, W.H.

    2003-01-01

    In this paper we study and analyze the influence of caching strategies on the performance of WWW proxies. We propose a new strategy called class-based LRU that works recency-based as well as size-based, with the ultimate aim of obtaining a well-balanced mixture between large and small documents in the
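    The class-based LRU idea can be sketched as follows: documents are partitioned into size classes, each with its own LRU list and its own capacity share, so one large document cannot evict an entire population of small ones. The per-class capacity split and the class bounds are illustrative assumptions, not the paper's exact policy.

```python
from collections import OrderedDict

class ClassBasedLRU:
    """Proxy cache partitioned into size classes, LRU within each class."""

    def __init__(self, class_bounds, per_class_capacity):
        # class_bounds: ascending upper size bounds, e.g. [1_000, inf]
        self.bounds = class_bounds
        self.classes = [OrderedDict() for _ in class_bounds]
        self.capacity = per_class_capacity  # max documents held per class

    def _class_of(self, size):
        return next(i for i, b in enumerate(self.bounds) if size <= b)

    def access(self, url, size):
        """Record an access; return True on a cache hit."""
        c = self.classes[self._class_of(size)]
        if url in c:
            c.move_to_end(url)          # refresh recency within the class
            return True
        if len(c) >= self.capacity:
            c.popitem(last=False)       # evict the class's own LRU entry
        c[url] = size
        return False

cache = ClassBasedLRU([1000, float("inf")], per_class_capacity=2)
cache.access("/small-a", 100)
cache.access("/huge-video", 5_000_000)  # lands in the large class
print(cache.access("/small-a", 100))    # → True: untouched by the large doc
```

Evicting only within a class is what yields the balanced mixture of large and small documents the abstract aims for.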

  9. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    textabstractJackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence

  10. Model checking a cache coherence protocol of a Java DSM implementation

    NARCIS (Netherlands)

    Pang, J.; Fokkink, W.J.; Hofman, R.; Veldema, R.S.

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol. In this paper,

  11. A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network

    Directory of Open Access Journals (Sweden)

    Yang Li

    2013-01-01

    Content caching at the base station of the Radio Access Network (RAN) is a way to reduce backhaul transmission and improve the quality of experience. It is therefore crucial to manage such massive microcaches to store the contents in a coordinated manner, in order to increase the overall mobile network capacity and support a larger number of requests. We achieve this goal in this paper with a novel caching scheme, which reduces repeated traffic through request filtration and asynchronous multicast in a RAN. Request filtration makes the best use of the limited bandwidth and in turn ensures the good performance of the coordinated caching. Moreover, storage at the mobile devices is also used to further reduce the backhaul traffic and improve the users' experience. In addition, we derive the optimal cache division with the aim of reducing the average latency perceived by users. The simulation results show that the proposed scheme outperforms existing algorithms.

  12. Application of integrated reservoir management and reservoir characterization to optimize infill drilling. Annual report, June 13, 1994--June 12, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Pande, P.K.

    1996-11-01

    This project has used a multi-disciplinary approach employing geology, geophysics, and engineering to conduct advanced reservoir characterization and management activities to design and implement an optimized infill drilling program at the North Robertson (Clearfork) Unit in Gaines County, Texas. The activities during the first Budget Period have consisted of developing an integrated reservoir description from geological, engineering, and geostatistical studies, and using this description for reservoir flow simulation. Specific reservoir management activities are being identified and tested. The geologically targeted infill drilling program will be implemented using the results of this work. A significant contribution of this project is to demonstrate the use of cost-effective reservoir characterization and management tools that will be helpful to both independent and major operators for the optimal development of heterogeneous, low permeability shallow-shelf carbonate (SSC) reservoirs. The techniques that are outlined for the formulation of an integrated reservoir description apply to all oil and gas reservoirs, but are specifically tailored for use in the heterogeneous, low permeability carbonate reservoirs of West Texas.

  13. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Due to the limited size and excessive cost of cache compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. However, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and randomized policies may discard important pages just before they are used. Furthermore, a conventional algorithm cannot be well optimized, since intelligently evicting a page before replacement requires some decision-making. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifiers' prediction accuracy. The results show that using wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded NB, and particle swarm optimization (PSO), reduces the number of features and has a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43%, and 0.22% over using NB with all features, using BFS, and using IWSS embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91%, and 0.04% over using the J48 classifier with all features, using IWSS-embedded NB, and using BFS, respectively. Meanwhile, IWSS embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO do; it reduces the computation time of NB by 0.1383 and of J48 by 2.998.
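
Wrapper methods differ from filter methods in that each candidate feature subset is scored by actually running the classifier. A self-contained sketch of greedy forward wrapper selection, using a toy nearest-centroid classifier and synthetic data rather than the paper's NB/J48 models:

```python
# Wrapper-style forward feature selection: candidate subsets are scored by
# training/evaluating a classifier (here a toy leave-one-out nearest-centroid
# model). Data and classifier are illustrative, not the paper's setup.

def nearest_centroid_accuracy(rows, labels, features):
    correct = 0
    for i in range(len(rows)):
        centroids, counts = {}, {}
        for j, (row, lab) in enumerate(zip(rows, labels)):
            if j == i:
                continue                     # leave one out
            acc = centroids.setdefault(lab, [0.0] * len(features))
            for k, f in enumerate(features):
                acc[k] += row[f]
            counts[lab] = counts.get(lab, 0) + 1
        for lab in centroids:
            centroids[lab] = [v / counts[lab] for v in centroids[lab]]
        test = [rows[i][f] for f in features]
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(test, centroids[lab])))
        correct += pred == labels[i]
    return correct / len(rows)

def forward_select(rows, labels, n_features):
    selected, best_score = [], 0.0
    while True:
        gains = [(nearest_centroid_accuracy(rows, labels, selected + [f]), f)
                 for f in range(n_features) if f not in selected]
        if not gains:
            break
        score, f = max(gains)
        if score <= best_score:
            break                            # no candidate improves accuracy
        selected, best_score = selected + [f], score
    return selected, best_score

# Feature 0 separates the classes; feature 1 is pure noise.
rows = [(0.0, 5.0), (0.1, 1.0), (0.2, 9.0), (1.0, 5.5), (1.1, 0.5), (0.9, 9.5)]
labels = [0, 0, 0, 1, 1, 1]
chosen, acc = forward_select(rows, labels, n_features=2)
```

The selector keeps only the informative feature, illustrating how a wrapper both shrinks the feature set and maintains (or improves) classifier accuracy.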

  14. Quantifying animal movement for caching foragers: the path identification index (PII) and cougars, Puma concolor

    Science.gov (United States)

    Ironside, Kirsten E.; Mattson, David J.; Theimer, Tad; Jansen, Brian; Holton, Brandon; Arundel, Terry; Peters, Michael; Sexton, Joseph O.; Edwards, Thomas C.

    2017-01-01

    Relocation studies of animal movement have focused on directed versus area-restricted movement, which rely on correlations between step length and turn angle, along with a degree of stationarity through time, to define behavioral states. Although these approaches may work well for grazing foraging strategies in a patchy landscape, species that do not spend a significant amount of time searching out and gathering small dispersed food items, but instead feed for short periods on large, concentrated sources or cache food, produce movements that may be difficult to analyze using turning and velocity alone. We use GPS telemetry collected from a prey-caching predator, the cougar (Puma concolor), to test whether adding movement metrics that capture site recursion to the more traditional velocity and turning improves the ability to identify behaviors. We evaluated our movement index’s ability to identify behaviors using field investigations. We further tested for statistical stationarity across behaviors in the use of topographic view-sheds. We found little correlation between turn angle, velocity, tortuosity, and site fidelity and combined them into a movement index used to identify movement paths (temporally autocorrelated movements) related to fast directed movements (taxis), area-restricted movements (search), and prey caching (foraging). Changes in the frequency and duration of these movements were helpful for identifying seasonal activities such as migration and denning in females. Comparison of field investigations of cougar activities to behavioral classes defined using the movement index yielded an overall classification accuracy of 81%. Changes in behaviors resulted in changes in how cougars used topographic view-sheds, showing statistical non-stationarity over time. The movement index shows promise for identifying behaviors in species that frequently return to specific locations such as food caches, watering holes, or dens, and highlights the role
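
The per-step quantities such an index combines (a velocity proxy, turn angle, and site recursion) can be computed directly from consecutive GPS fixes. A sketch with planar coordinates; the scaling and weighting used in the published PII are not reproduced, and the helper below is purely illustrative:

```python
import math

# Per-step movement metrics from a GPS track: step length (a velocity proxy
# for fixed-interval fixes), signed turn angle, and a simple site-recursion
# count (earlier fixes within a radius of the current fix). How these are
# combined into the published PII is not reproduced here.

def movement_metrics(track, recursion_radius=50.0):
    metrics = []
    for i in range(1, len(track) - 1):
        (x0, y0), (x1, y1), (x2, y2) = track[i - 1], track[i], track[i + 1]
        step = math.hypot(x2 - x1, y2 - y1)
        heading_in = math.atan2(y1 - y0, x1 - x0)
        heading_out = math.atan2(y2 - y1, x2 - x1)
        # wrap the heading difference into (-pi, pi]
        turn = math.atan2(math.sin(heading_out - heading_in),
                          math.cos(heading_out - heading_in))
        revisits = sum(1 for (px, py) in track[:i - 1]
                       if math.hypot(px - x1, py - y1) <= recursion_radius)
        metrics.append({"step": step, "turn": turn, "revisits": revisits})
    return metrics

# Straight, fast transit followed by a loop back past an earlier fix
track = [(0, 0), (100, 0), (200, 0), (100, 10), (5, 5)]
m = movement_metrics(track)
```

Low turn with long steps suggests taxis, while a rising revisit count flags recursion to a cache, den, or watering hole.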

  15. Transport of reservoir fines

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander; Stenby, Erling Halfdan

    Modeling transport of reservoir fines is of great importance for evaluating the damage of production wells and injectivity decline. The conventional methodology accounts for neither the formation heterogeneity around the wells nor the reservoir fines’ heterogeneity. We have developed an integral...

  16. SILTATION IN RESERVOIRS

    African Journals Online (AJOL)

    Calls have been made to the government through various media to assist its populace in combating this nagging problem. It was concluded that maximum sediment accumulation occurs in a reservoir during the periods of maximum flow. Keywords: reservoir model, siltation, sediment, catchment, sediment transport.

  17. Dynamic reservoir well interaction

    NARCIS (Netherlands)

    Sturm, W.L.; Belfroid, S.P.C.; Wolfswinkel, O. van; Peters, M.C.A.M.; Verhelst, F.J.P.C.M.

    2004-01-01

    In order to develop smart well control systems for unstable oil wells, realistic modeling of the dynamics of the well is essential. Most dynamic well models use a semi-steady state inflow model to describe the inflow of oil and gas from the reservoir. On the other hand, reservoir models use steady

  18. Reservoir Engineering Management Program

    Energy Technology Data Exchange (ETDEWEB)

    Howard, J.H.; Schwarz, W.J.

    1977-12-14

    The Reservoir Engineering Management Program being conducted at Lawrence Berkeley Laboratory includes two major tasks: 1) the continuation of support to geothermal reservoir engineering related work, started under the NSF-RANN program and transferred to ERDA at the time of its formation; 2) the development and subsequent implementation of a broad plan for support of research in topics related to the exploitation of geothermal reservoirs. This plan is now known as the GREMP plan. Both the NSF-RANN legacies and GREMP are in direct support of the DOE/DGE mission in general and the goals of the Resource and Technology/Resource Exploitation and Assessment Branch in particular. These goals are to determine the magnitude and distribution of geothermal resources and reduce risk in their exploitation through improved understanding of generically different reservoir types. These goals are to be accomplished by: 1) the creation of a large data base about geothermal reservoirs, 2) improved tools and methods for gathering data on geothermal reservoirs, and 3) modeling of reservoirs and utilization options. The NSF legacies are more research and training oriented, and the GREMP is geared primarily to the practical development of the geothermal reservoirs. 2 tabs., 3 figs.

  19. Caching proxy server: understanding and technological assimilation

    Directory of Open Access Journals (Sweden)

    Carlos E. Gómez

    2012-01-01

    Internet access providers usually include the notion of Internet accelerators to reduce the average time a browser takes to obtain the requested files. For system administrators it is difficult to choose the configuration of the caching proxy server, since it is necessary to decide the values to be used for several variables. This article presents how the process of understanding and technological assimilation of the caching proxy service, a service with high organizational impact, was approached. The article is also a product of the research project "Análisis de configuraciones de servidores proxy caché", in which relevant aspects of the performance of Squid as a caching proxy server were studied.

  20. High-speed mapping of water isotopes and residence time in Cache Slough Complex, San Francisco Bay Delta, CA

    Data.gov (United States)

    Department of the Interior — Real-time, high frequency (1-second sample interval) GPS location, water quality, and water isotope (δ2H, δ18O) data was collected in the Cache Slough Complex (CSC),...

  1. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    Science.gov (United States)

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  2. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
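
The core idea, periodically relocating a write-intensive block to a less-worn physical way within its set, can be illustrated with a toy single-set simulation. The workload skew, interval, and swap rule below are assumptions for illustration, not the actual EqualChance policy:

```python
import random

def simulate(num_ways=8, writes=10_000, wear_level=True, interval=100, seed=1):
    """Simulate writes to one cache set: a skewed workload keeps hitting the
    same block; with wear-leveling, the block on the most-written physical
    way is periodically swapped with the block on the least-written way.
    Returns max-to-mean write ratio (intra-set write variation)."""
    rng = random.Random(seed)
    block_to_way = list(range(num_ways))        # logical block -> physical way
    way_writes = [0] * num_ways
    for t in range(writes):
        # skewed access: block 0 receives about half of all writes
        block = 0 if rng.random() < 0.5 else rng.randrange(num_ways)
        way_writes[block_to_way[block]] += 1
        if wear_level and t % interval == interval - 1:
            hot = max(range(num_ways), key=lambda b: way_writes[block_to_way[b]])
            cold = min(range(num_ways), key=lambda b: way_writes[block_to_way[b]])
            # swap the physical locations of the hottest and coldest blocks
            block_to_way[hot], block_to_way[cold] = block_to_way[cold], block_to_way[hot]
    return max(way_writes) / (sum(way_writes) / num_ways)

baseline = simulate(wear_level=False)
leveled = simulate(wear_level=True)
```

With the skewed workload the unleveled set concentrates roughly half of all writes in one physical way, while periodic relocation spreads them out, which is exactly the intra-set variation the technique targets.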

  3. Ecosystem services from keystone species: diversionary seeding and seed-caching desert rodents can enhance Indian ricegrass seedling establishment

    Science.gov (United States)

    Longland, William; Ostoja, Steven M.

    2013-01-01

    Seeds of Indian ricegrass (Achnatherum hymenoides), a native bunchgrass common to sandy soils on arid western rangelands, are naturally dispersed by seed-caching rodent species, particularly Dipodomys spp. (kangaroo rats). These animals cache large quantities of seeds when mature seeds are available on or beneath plants and recover most of their caches for consumption during the remainder of the year. Unrecovered seeds in caches account for the vast majority of Indian ricegrass seedling recruitment. We applied three different densities of white millet (Panicum miliaceum) seeds as “diversionary foods” to plots at three Great Basin study sites in an attempt to reduce rodents' over-winter cache recovery so that more Indian ricegrass seeds would remain in soil seedbanks and potentially establish new seedlings. One year after diversionary seed application, a moderate level of Indian ricegrass seedling recruitment occurred at two of our study sites in western Nevada, although there was no recruitment at the third site in eastern California. At both Nevada sites, the number of Indian ricegrass seedlings sampled along transects was significantly greater on all plots treated with diversionary seeds than on non-seeded control plots. However, the density of diversionary seeds applied to plots had a marginally non-significant effect on seedling recruitment, and it was not correlated with recruitment patterns among plots. Results suggest that application of a diversionary seed type that is preferred by seed-caching rodents provides a promising passive restoration strategy for target plant species that are dispersed by these rodents.

  4. Sediment management for reservoir

    International Nuclear Information System (INIS)

    Rahman, A.

    2005-01-01

    All natural lakes and reservoirs, whether on rivers, tributaries or off-channel storages, are doomed to be silted up. Pakistan has two major reservoirs, Tarbela and Mangla, and a shallow lake created by Chashma Barrage. The Tarbela and Mangla lakes have been losing their capacities ever since first impounding, Tarbela since 1974 and Mangla since 1967. Tarbela Reservoir receives an average annual flow of about 62 MAF and sediment deposits of 0.11 MAF, whereas Mangla gets about 23 MAF of average annual flows and is losing its storage at an average rate of 34,000 acre-feet annually. The loss of storage is a great concern; studies for Tarbela were carried out by TAMS and Wallingford to sustain its capacity, whereas no study has yet been done for Mangla except as part of the study for Raised Mangla, which was only desk work. The delta of Tarbela Reservoir has advanced to about 6.59 miles (Pivot Point) from the power intakes. In case of liquefaction of the delta by a tremor with peak ground acceleration as low as 0.12g, power tunnels 1, 2 and 3 would be blocked. The minimum pool level of the reservoir is being raised so as to check the advance of the delta. The Mangla delta will follow the trend of Tarbela. Tarbela has a vast amount of data, as the reservoir is surveyed every year, whereas the Mangla Reservoir survey was done at five-year intervals, now proposed to be reduced to three-year intervals. In addition, suspended sediment sampling of inflow streams is being done by the Surface Water Hydrology Project of WAPDA, along with some bed load sampling. The problem of Chashma Reservoir has also been highlighted, as it is indiscriminately filled up and drawn down several times a year without regard to its response to this treatment. Sediment management of these reservoirs is essential, and the paper discusses the pros and cons of various alternatives. (author)

  5. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them onto multiple servers, and to cache them as close as possible to their readers while preserving the security requirements of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.
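
The fragmentation-plus-replication idea can be sketched as follows: no single low-security server receives enough fragments to reconstruct the file, while replication keeps every fragment available. The fragment count, replica count, and rotating placement are illustrative assumptions, not the paper's allocation algorithm:

```python
# Toy fragmentation-and-replication placement: a sensitive file is split into
# fragments, each fragment is replicated on distinct servers, and the rotation
# ensures no single server accumulates all fragments of the file.

def fragment(data, n_fragments):
    size = -(-len(data) // n_fragments)          # ceil division
    return [data[i * size:(i + 1) * size] for i in range(n_fragments)]

def place(fragments, servers, replicas=2):
    placement = {s: [] for s in servers}
    for idx, frag in enumerate(fragments):
        # rotate the starting server per fragment so replicas spread out
        for r in range(replicas):
            server = servers[(idx + r) % len(servers)]
            placement[server].append((idx, frag))
    return placement

data = b"top-secret reservoir survey results"
frags = fragment(data, n_fragments=3)
placement = place(frags, servers=["s1", "s2", "s3", "s4"], replicas=2)

# reconstruction needs every fragment index, gathered across servers
recovered = b"".join(frag for idx, frag in sorted(
    {i: f for frs in placement.values() for i, f in frs}.items()))
```

A reader must contact several servers to reassemble the file, while any single compromised low-security server yields only a strict subset of the fragments.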

  6. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users; a central cache of the data is required to improve performance.
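
The performance argument can be made concrete with a toy model: a federated search pays per-source network latency on every query, while a harvested central cache pays it once at build time and answers queries locally. The sources, terms, and delays below are invented for illustration:

```python
import time

# Toy comparison of the two integration options: federated search fans out to
# every member database at query time; a central cache is harvested once and
# then answers queries from a local index. Latencies are simulated sleeps.

SOURCES = {
    "db_a": {"kudzu": "Pueraria montana", "zebra mussel": "Dreissena polymorpha"},
    "db_b": {"cane toad": "Rhinella marina"},
    "db_c": {"kudzu": "Pueraria montana"},
}
PER_SOURCE_DELAY = 0.02   # pretend network latency per remote database

def federated_search(term):
    hits = []
    for name, db in SOURCES.items():           # fan out to every source
        time.sleep(PER_SOURCE_DELAY)           # pay the latency on every query
        if term in db:
            hits.append((name, db[term]))
    return hits

def build_cache():
    cache = {}
    for name, db in SOURCES.items():           # harvested once, offline
        time.sleep(PER_SOURCE_DELAY)
        for term, sci in db.items():
            cache.setdefault(term, []).append((name, sci))
    return cache

cache = build_cache()

t0 = time.perf_counter()
fed = federated_search("kudzu")
fed_time = time.perf_counter() - t0

t0 = time.perf_counter()
hit = cache.get("kudzu", [])
cache_time = time.perf_counter() - t0
```

Both paths return the same records, but the federated query latency grows with the number (and slowness) of member databases, while the cached lookup stays constant; the trade-off is that the cache can serve stale data between harvests.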

  7. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching the whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.

  8. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services, and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  9. Orbitofrontal cortex supports behavior and learning using inferred but not cached values.

    Science.gov (United States)

    Jones, Joshua L; Esber, Guillem R; McDannald, Michael A; Gruber, Aaron J; Hernandez, Alex; Mirenzi, Aaron; Schoenbaum, Geoffrey

    2012-11-16

    Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling "economic" value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.
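
The cached-versus-inferred distinction is easy to state computationally: a cached value is stored from past experience, while an inferred value is recomputed at decision time from the causal structure of the task. A toy reward-devaluation sketch (the task names and numbers are illustrative, not the experimental design of the paper):

```python
# After a reward is devalued, a cached (model-free) value table still
# recommends the old action until it is re-experienced, whereas an inferred
# (model-based) value computed on the fly from the task structure adjusts
# immediately. Actions, outcomes, and values are illustrative only.

# Task structure: action -> outcome, outcome -> current reward value
TRANSITIONS = {"press_lever": "pellet", "pull_chain": "sucrose"}
reward_value = {"pellet": 1.0, "sucrose": 0.5}

# Cached values computed and stored during previous experience
cached_q = {a: reward_value[TRANSITIONS[a]] for a in TRANSITIONS}

def cached_choice():
    return max(cached_q, key=cached_q.get)

def inferred_choice():
    # value estimated at decision time from the causal model + current values
    return max(TRANSITIONS, key=lambda a: reward_value[TRANSITIONS[a]])

assert cached_choice() == inferred_choice() == "press_lever"

# Devalue the pellet (e.g. pairing with illness); no new experience yet
reward_value["pellet"] = 0.0

cached_after = cached_choice()      # still recommends the devalued action
inferred_after = inferred_choice()  # switches immediately
```

This divergence after devaluation is the behavioral signature the study exploits: behavior that tracks `inferred_after` requires access to the model, which the abstract reports depends on the orbitofrontal cortex.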

  10. Using High-Speed WANs and Network Data Caches to Enable Remote and Distributed Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Wes; Lau, Stephen; Tierney, Brian; Lee, Jason; Gunter, Dan

    2000-04-18

    Visapult is a prototype application and framework for remote visualization of large scientific datasets. We approach the technical challenges of tera-scale visualization with a unique architecture that employs high speed WANs and network data caches for data staging and transmission. This architecture allows for the use of available cache and compute resources at arbitrary locations on the network. High data throughput rates and network utilization are achieved by parallelizing I/O at each stage in the application, and by pipe-lining the visualization process. On the desktop, the graphics interactivity is effectively decoupled from the latency inherent in network applications. We present a detailed performance analysis of the application, and improvements resulting from field-test analysis conducted as part of the DOE Combustion Corridor project.

  11. Fundamentals of Cluster-Centric Content Placement in Cache-Enabled Device-to-Device Networks

    OpenAIRE

    Afshang, Mehrnaz; Dhillon, Harpreet S.; Chong, Peter Han Joo

    2015-01-01

    This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we defin...

  12. Accelerating Convolutional Neural Networks for Continuous Mobile Vision via Cache Reuse

    OpenAIRE

    Xu, Mengwei; Liu, Xuanzhe; Liu, Yunxin; Lin, Felix Xiaozhu

    2017-01-01

    Convolutional Neural Networks (CNNs) are the state-of-the-art algorithm in many mobile vision fields and are applied in many vision tasks such as face detection and augmented reality on mobile devices. Despite the high accuracy achieved by deep CNN models, commercial mobile devices are often short of the processing capacity and battery life needed to continuously carry out such CNN-driven vision applications. In this paper, we propose a transparent caching mechanism, named CNNCache, ...

  13. Greatly improved cache update times for conditions data with Frontier/Squid

    CERN Document Server

    Dykstra, Dave

    2009-01-01

    The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally trac...
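
The mechanism rests on standard HTTP conditional requests: the server stamps each response with Last-Modified, the cache revalidates with If-Modified-Since, and an unchanged payload costs only a small 304 reply instead of a full re-transfer. A simplified sketch of the server-side decision (not Frontier's actual code):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# Conditional-GET sketch: compare the client's If-Modified-Since timestamp
# against the payload's Last-Modified time; reply 304 with an empty body if
# the cached copy is still current, otherwise 200 with the full payload.

def respond(payload, last_modified, if_modified_since=None):
    headers = {"Last-Modified": format_datetime(last_modified, usegmt=True)}
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_time:
            return 304, headers, b""        # cache entry is still fresh
    return 200, headers, payload

modified = datetime(2009, 3, 1, 12, 0, 0, tzinfo=timezone.utc)

# First fetch: full response; the cache stores body + Last-Modified
status1, hdrs, body1 = respond(b"conditions-blob-v1", modified)

# Revalidation: replay the stored timestamp; data unchanged -> cheap 304
status2, _, body2 = respond(b"conditions-blob-v1", modified,
                            if_modified_since=hdrs["Last-Modified"])

# Data updated after the cached copy -> full 200 again
later = datetime(2009, 3, 2, 12, 0, 0, tzinfo=timezone.utc)
status3, _, body3 = respond(b"conditions-blob-v2", later,
                            if_modified_since=hdrs["Last-Modified"])
```

Because a 304 carries no body, Squid caches can revalidate frequently without heavily loading the central server, which is what lets the propagation delay drop to tens of minutes.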

  14. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  15. Fox squirrels match food assessment and cache effort to value and scarcity.

    Directory of Open Access Journals (Sweden)

    Mikel M Delgado

    Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items, and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food-storing decisions. Food-storing tree squirrels may be a useful and important model species for understanding the complex economic decisions made under natural conditions.

  16. The role of seed mass on the caching decision by agoutis, Dasyprocta leporina (Rodentia: Agoutidae)

    Directory of Open Access Journals (Sweden)

    Mauro Galetti

    2010-06-01

    It has been shown that the local extinction of large-bodied frugivores may cause cascading consequences for plant recruitment and overall plant diversity. However, the extent to which resilient mammals can compensate for the role of seed dispersal in defaunated sites is poorly understood. Caviomorph rodents, especially Dasyprocta spp., are usually resilient frugivores in hunted forests, and their seed-caching behavior may be important for many plant species which lack primary dispersers. We compared the effect of the variation in seed mass of six vertebrate-dispersed plant species on the caching decision by the red-rumped agouti, Dasyprocta leporina Linnaeus, 1758, on a land-bridge island of the Atlantic forest, Brazil. We found a strong positive effect of seed mass on seed fate and dispersal distance, but there was great variation between species. Agoutis never cached seeds smaller than 0.9 g, and larger seeds were dispersed for longer distances. Therefore, agoutis can be important seed dispersers of large-seeded species in defaunated forests.

  17. Reservoir characterization of Pennsylvanian sandstone reservoirs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Kelkar, M.

    1995-02-01

    This final report summarizes the progress during the three years of a project on Reservoir Characterization of Pennsylvanian Sandstone Reservoirs. The report is divided into three sections: (i) reservoir description; (ii) scale-up procedures; (iii) outcrop investigation. The first section describes the methods by which a reservoir can be described in three dimensions. The next step in reservoir description is to scale up reservoir properties for flow simulation. The second section addresses the issue of scale-up of reservoir properties once the spatial descriptions of properties are created. The last section describes the investigation of an outcrop.

  18. On-farm irrigation reservoirs for surface water storage in eastern Arkansas: Trends in construction in response to aquifer depletion

    Science.gov (United States)

    Yaeger, M. A.; Reba, M. L.; Massey, J. H.; Adviento-Borbe, A.

    2017-12-01

    On-farm surface water storage reservoirs have been constructed to address declines in the Mississippi River Valley Alluvial aquifer, the primary source of irrigation for most of the row crops grown in eastern Arkansas. These reservoirs and their associated infrastructure represent significant investments in financial and natural resources, and may cause producers to incur costs associated with foregone crop production and long-term maintenance. Thus, an analysis of reservoir construction trends in the Grand Prairie Critical Groundwater Area (GPCGA) and Cache River Critical Groundwater Area (CRCGA) was conducted to assist future water management decisions. Between 1996 and 2015, on average, 16 and 4 reservoirs were constructed per year, corresponding to cumulative new reservoir surface areas of 161 and 60 ha yr-1, for the GPCGA and the CRCGA, respectively. In terms of reservoir locations relative to aquifer status, after 1996, 84.5% of 309 total reservoirs constructed in the GPCGA and 91.0% of 78 in the CRCGA were located in areas with remaining saturated aquifer thicknesses of 50% or less. The majority of new reservoirs (74% in the GPCGA and 63% in the CRCGA) were constructed on previously productive cropland. The next most common land use, representing 11% and 15% of new reservoirs constructed in the GPCGA and CRCGA, respectively, was the combination of a field edge and a ditch, stream, or other low-lying area. Less than 10% of post-1996 reservoirs were constructed on predominantly low-lying land, and the use of such lands decreased in both critical groundwater areas during the past 20 years. These disparities in reservoir construction rates, locations, and prior land uses are likely due to groundwater declines being first observed in the GPCGA as well as the existence of two large-scale river diversion projects under construction in the GPCGA that feature on-farm storage as a means to offset groundwater use.

  19. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, bringing the two together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data-structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e., with no linear system to solve at each step). A data structure of type 'structure of arrays' is retained for global data storage, providing flexibility and efficiency for common operations on kinematic fields (displacement, velocity and acceleration). By contrast, for elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but with specific handling of the neighboring data necessary for the efficient treatment of ALE fluxes for cells on group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache misses, weighing the gains obtained within the elementary operations against the potential overhead generated by the data-structure switch. Obtained results are very
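The layout switch described above can be sketched in a few lines. The following is a generic illustration of the SoA/AoS idea, not code from EUROPLEXUS; the field names and sizes are invented.

```python
import numpy as np

n = 8  # number of cells; sizes and field names here are invented

# "Structure of arrays" (SoA): one contiguous array per field, the layout
# the thesis keeps for global kinematics storage.
soa_disp = np.zeros(n)
soa_vel = np.zeros(n)

# "Array of structures" (AoS): fields interleaved record by record, the
# temporary layout used for cache-friendly elementary loops.
aos = np.zeros(n, dtype=[("disp", "f8"), ("vel", "f8")])

# Switch layouts: gather the SoA fields into the temporary AoS block,
# operate on it (each cell's data is now contiguous in memory), then
# scatter the results back to the global SoA storage.
aos["disp"] = soa_disp
aos["vel"] = soa_vel
aos["vel"] += 1.0               # stand-in for an elementary computation
soa_vel = aos["vel"].copy()

print(soa_vel[:3])
```

The gather/scatter cost is the "potential overhead" the thesis weighs against the cache gains inside the elementary loops.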

  20. EarthCache as a Tool to Promote Earth-Science in Public School Classrooms

    Science.gov (United States)

    Gochis, E. E.; Rose, W. I.; Klawiter, M.; Vye, E. C.; Engelmann, C. A.

    2011-12-01

Geoscientists often find it difficult to bridge the gap in communication between university research and what is learned in the public schools. Today's schools operate in a high-stakes environment that allows only instruction based on State and National Earth Science curriculum standards. These standards are often unknown to academics or are written in a style that obfuscates the transfer of emerging scientific research to students in the classroom. Earth Science teachers are in an ideal position to make this link because they have a background in science as well as a solid understanding of the required curriculum standards for their grade and the pedagogical expertise to pass on new information to their students. As part of the Michigan Teacher Excellence Program (MiTEP), teachers from the Grand Rapids, Kalamazoo, and Jackson school districts participate in 2-week field courses with Michigan Tech University to learn from earth science experts about how the earth works. This course connects the Earth Science Literacy Principles' Big Ideas and common student misconceptions with standards-based education. During the 2011 field course, we developed and began to implement a three-phase EarthCache model that will provide a geospatial interactive medium for teachers to translate the material they learn in the field to the students in their standards-based classrooms. MiTEP participants use GPS and Google Earth to navigate to Michigan sites of geo-significance. At each location, academic experts aid participants in making scientific observations about the locations' geologic features and in using "reading the rocks" methodology to interpret the area's geologic history. The participants are then expected to develop their own EarthCache site to be used as a pedagogical tool bridging the gap between standards-based classroom learning, contemporary research, and unique outdoor field experiences. The final phase supports teachers in integrating inquiry based, higher-level learning student

  1. Allegheny County Municipal Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the municipal boundaries in Allegheny County. Data was created to portray the boundaries of the 130 Municipalities in Allegheny County the...

  2. Allegheny County Addressing Landmarks

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains address points which represent physical address locations assigned by the Allegheny County addressing authority. Data is updated by County...

  3. Allegheny County Council Districts

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset portrays the boundaries of the County Council Districts in Allegheny County. The dataset is based on municipal boundaries and City of Pittsburgh ward...

  4. Allegheny County Address Points

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains address points which represent physical address locations assigned by the Allegheny County addressing authority. Data is updated by County...

  5. Allegheny County Air Quality

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Air quality data from Allegheny County Health Department monitors throughout the county. Air quality monitored data must be verified by qualified individuals before...

  6. Bathymetry and capacity of Shawnee Reservoir, Oklahoma, 2016

    Science.gov (United States)

    Ashworth, Chad E.; Smith, S. Jerrod; Smith, Kevin A.

    2017-02-13

Shawnee Reservoir (locally known as Shawnee Twin Lakes) is a man-made reservoir on South Deer Creek with a drainage area of 32.7 square miles in Pottawatomie County, Oklahoma. The reservoir consists of two lakes connected by an equilibrium channel. The southern lake (Shawnee City Lake Number 1) was impounded in 1935, and the northern lake (Shawnee City Lake Number 2) was impounded in 1960. Shawnee Reservoir serves as a municipal water supply, and water is transferred about 9 miles by gravity to a water treatment plant in Shawnee, Oklahoma. Secondary uses of the reservoir are for recreation, fish and wildlife habitat, and flood control. Shawnee Reservoir has a normal-pool elevation of 1,069.0 feet (ft) above North American Vertical Datum of 1988 (NAVD 88). The auxiliary spillway, which defines the flood-pool elevation, is at an elevation of 1,075.0 ft. The U.S. Geological Survey (USGS), in cooperation with the City of Shawnee, has operated a real-time stage (water-surface elevation) gage (USGS station 07241600) at Shawnee Reservoir since 2006. For the period of record ending in 2016, this gage recorded a maximum stage of 1,078.1 ft on May 24, 2015, and a minimum stage of 1,059.1 ft on April 10–11, 2007. This gage did not report reservoir storage prior to this report (2016) because a sufficiently detailed and thoroughly documented bathymetric (reservoir-bottom elevation) survey and corresponding stage-storage relation had not been published. A 2011 bathymetric survey with contours delineated at 5-foot intervals was published in Oklahoma Water Resources Board (2016), but that publication did not include a stage-storage relation table. The USGS, in cooperation with the City of Shawnee, performed a bathymetric survey of Shawnee Reservoir in 2016 and released the bathymetric-survey data in 2017. The purposes of the bathymetric survey were to (1) develop a detailed bathymetric map of the reservoir and (2) determine the relations between stage and reservoir storage
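A stage-storage relation of the kind such a survey produces is, in use, a lookup table interpolated between surveyed stages. The sketch below uses invented storage values; only the normal-pool elevation (1,069.0 ft) is taken from the report.

```python
# Hypothetical stage-storage relation for illustration only: the stages
# (ft above NAVD 88) bracket the report's normal pool (1,069.0 ft), but
# the storage values (acre-ft) are invented, not the survey's results.
stage_ft = [1055.0, 1060.0, 1065.0, 1069.0, 1075.0]
storage_acre_ft = [0.0, 800.0, 2600.0, 4700.0, 8900.0]

def storage_at(stage):
    """Linearly interpolate storage from a stage-storage table."""
    if not stage_ft[0] <= stage <= stage_ft[-1]:
        raise ValueError("stage outside table range")
    for i in range(1, len(stage_ft)):
        if stage <= stage_ft[i]:
            frac = (stage - stage_ft[i - 1]) / (stage_ft[i] - stage_ft[i - 1])
            return storage_acre_ft[i - 1] + frac * (
                storage_acre_ft[i] - storage_acre_ft[i - 1]
            )

print(storage_at(1067.0))  # falls between the 1,065 and 1,069 ft entries
```

Pairing such a table with the real-time stage gage is what lets a gage report storage as well as stage.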

  7. Application of advanced reservoir characterization, simulation and production optimization strategies to maximize recovery in slope and basin clastic reservoirs, West Texas (Delaware Basin). Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Dutton, S.P.; Asquith, G.B.; Barton, M.D.; Cole, A.G.; Gogas, J.; Malik, M.A.; Clift, S.J.; Guzman, J.I.

    1997-11-01

The objective of this project is to demonstrate that detailed reservoir characterization of slope and basin clastic reservoirs in sandstones of the Delaware Mountain Group in the Delaware Basin of West Texas and New Mexico is a cost-effective way to recover a higher percentage of the original oil in place through strategic placement of infill wells and geologically based field development. This project involves reservoir characterization of two Late Permian slope and basin clastic reservoirs in the Delaware Basin, West Texas, followed by a field demonstration in one of the fields. The fields being investigated are Geraldine Ford and Ford West fields in Reeves and Culberson Counties, Texas. Project objectives are divided into two major phases, reservoir characterization and implementation. The objectives of the reservoir characterization phase of the project were to provide a detailed understanding of the architecture and heterogeneity of the two fields, the Ford Geraldine unit and Ford West field. Reservoir characterization utilized 3-D seismic data, high-resolution sequence stratigraphy, subsurface field studies, outcrop characterization, and other techniques. Once reservoir characterization was completed, a pilot area of approximately 1 mi² at the northern end of the Ford Geraldine unit was chosen for reservoir simulation. This report summarizes the results of the second year of reservoir characterization.

  8. Land Use And Land Cover Dynamics Under Climate Change In Urbanizing Intermountain West: A Case Study From Cache County, Utah

    OpenAIRE

    Li, Enjie

    2013-01-01

Climate change is tightly linked with urbanization. Urban development with increasing greenhouse gas emissions worsens climate change, while climate change in turn influences hydroclimate and ecosystem functions and indirectly affects urban systems. The Intermountain West is experiencing rapid urban growth, and climate change interacting with urbanization poses new challenges to the region. Urban planning needs to adapt to these new changes and constraints, and to develop new tools and p...

  9. Water resources of Van Buren County, Michigan

    Science.gov (United States)

    Giroux, P.R.; Hendrickson, G.E.; Stoimenoff, L.E.; Whetstone, G.W.

    1964-01-01

The water resources of Van Buren County include productive ground-water reservoirs, a network of perennial streams, about 60 major inland lakes, and Lake Michigan. Most water users obtain their supplies from wells. The ground-water reservoirs in the glacial drift can provide several times the amount of water now used, but large withdrawals of ground water may lower the levels of nearby lakes or diminish the flow of nearby streams. Permeable soils and drift account for the relatively high base flows of streams in the southeastern two-thirds of the county. Less permeable surficial materials in the northwest part of the county result in relatively low base flows there. The water from wells is generally hard and high in iron content but is otherwise suitable for most uses. Water from streams and lakes is similar to that from wells, except that iron content is not a problem and some of the inland lakes have very soft water. The availability of ground water, the base flow of streams, and the chemical character of water in the county are summarized in maps and tables accompanying this report.

  10. dCache: implementing a high-end NFSv4.1 service using a Java NIO framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

dCache is a high-performance scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache as a direct competitor to commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than commercial systems. To achieve this, dCache uses two high-end IO frameworks from well-known application servers: GlassFish and JBoss. This presentation describes how we implemented an RFC 1831 and RFC 2203 compliant ONC RPC (Sun RPC) service based on the Grizzly NIO framework, part of the GlassFish application server. This ONC RPC service is the key component of dCache's NFSv4.1 implementation, but is independent of dCache and available for other projects. We will also show some details of dCache's NFSv4.1 implementation, describe some of the Java NIO techniques used and, finally, present details of our performance e...

  11. Optimising reservoir operation

    DEFF Research Database (Denmark)

    Ngo, Long le

The application of optimization techniques to reservoir operation has become an essential element of water resources planning and management. Traditionally, reservoirs have been operated using heuristic procedures for water releases, supplemented to some extent by subjective decisions. The exploitation of... reservoirs involves a wide range of stakeholders with very different objectives (e.g., irrigation, hydropower, water supply, etc.), and optimization techniques are far better suited to arriving at balanced solutions to the often conflicting interests. The thesis proposes a number of measures by which traditional... operating strategies can be replaced by optimal strategies based on the latest developments in computer-based calculations. The main contribution of the thesis is the development of a computational system in which a simulation model is coupled to a model for optimizing selected decision variables that in particular...

  12. Geothermal reservoir engineering

    CERN Document Server

    Grant, Malcolm Alister

    2011-01-01

As nations struggle to diversify and secure their power portfolios, geothermal energy, the essentially limitless heat emanating from the earth itself, is being harnessed at an unprecedented rate. For the last 25 years, engineers around the world tasked with taming this raw power have used Geothermal Reservoir Engineering as both a training manual and a professional reference. This long-awaited second edition of Geothermal Reservoir Engineering is a practical guide to the issues and tasks geothermal engineers encounter in the course of their daily jobs. The bo

  13. Session: Reservoir Technology

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Joel L.; Bodvarsson, Gudmundur S.; Wannamaker, Philip E.; Horne, Roland N.; Shook, G. Michael

    1992-01-01

This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five papers: "Reservoir Technology" by Joel L. Renner; "LBL Research on the Geysers: Conceptual Models, Simulation and Monitoring Studies" by Gudmundur S. Bodvarsson; "Geothermal Geophysical Research in Electrical Methods at UURI" by Philip E. Wannamaker; "Optimizing Reinjection Strategy at Palinpinon, Philippines Based on Chloride Data" by Roland N. Horne; "TETRAD Reservoir Simulation" by G. Michael Shook.

  14. A Comparison between Fixed Priority and EDF Scheduling accounting for Cache Related Pre-emption Delays

    Directory of Open Access Journals (Sweden)

    Will Lunniss

    2014-04-01

Full Text Available In multitasking real-time systems, the choice of scheduling algorithm is an important factor in ensuring that response-time requirements are met while maximising limited system resources. Two popular scheduling algorithms are fixed priority (FP) and earliest deadline first (EDF). While they have been studied in great detail before, they have not been compared when taking into account cache-related pre-emption delays (CRPD). Memory and cache are split into a number of blocks containing instructions and data. During a pre-emption, cache blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task is resumed, if it then has to re-load the evicted blocks, CRPD are introduced, which then affect the schedulability of the task. In this paper we compare the FP and EDF scheduling algorithms in the presence of CRPD using state-of-the-art CRPD analysis. We find that when CRPD is accounted for, the performance gains offered by EDF over FP, while still notable, are diminished. Furthermore, we find that under scenarios that cause relatively high CRPD, task-layout optimisation techniques can be applied to allow FP to schedule task sets at a similar processor utilisation to EDF. This makes the choice of task layout in memory as important as the choice of scheduling algorithm. This is very relevant for industry, as it is much cheaper and simpler to adjust the task layout through the linker than to switch the scheduling algorithm.
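The FP side of such a comparison can be sketched with the standard response-time recurrence, charging a CRPD term for each pre-empting job. This is a deliberately crude stand-in for the state-of-the-art ECB/UCB-based analyses the paper uses; the task set and the constant per-pre-emption penalty `gamma` are invented.

```python
import math

# Tasks as (C, T) pairs with deadline = period; index order is priority
# order (task 0 highest). A fixed CRPD penalty `gamma` is charged per
# higher-priority job -- a crude stand-in for ECB/UCB-based CRPD analysis.
tasks = [(1.0, 5.0), (2.0, 12.0), (3.0, 30.0)]

def fp_response_time(i, gamma=0.0, limit=1000.0):
    """Iterate the FP response-time recurrence for task i to a fixed point."""
    C, T = tasks[i]
    R = C
    while R <= limit:
        interference = sum(
            math.ceil(R / tasks[j][1]) * (tasks[j][0] + gamma)
            for j in range(i)
        )
        R_next = C + interference
        if R_next == R:
            return R  # fixed point: worst-case response time
        R = R_next
    return None  # no fixed point below the limit: deemed unschedulable

for i in range(len(tasks)):
    print(i, fp_response_time(i, gamma=0.5))
```

Raising `gamma` inflates every response time, which is exactly how CRPD erodes the schedulability margin the paper measures.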

  15. Incorporating cache management behavior into seed dispersal: the effect of pericarp removal on acorn germination.

    Directory of Open Access Journals (Sweden)

    Xianfeng Yi

Full Text Available Selecting seeds for long-term storage is a key factor for food-hoarding animals. Siberian chipmunks (Tamias sibiricus) remove the pericarp and scatter hoard sound acorns of Quercus mongolica over those that are insect-infested to maximize returns from caches. We have no knowledge of whether these chipmunks remove the pericarp from acorns of other species of oaks and whether this behavior benefits seedling establishment. In this study, we tested whether Siberian chipmunks engage in this behavior with acorns of three other Chinese oak species, Q. variabilis, Q. aliena and Q. serrata var. brevipetiolata, and how the dispersal and germination of these acorns are affected. Our results show that when chipmunks were provided with sound and infested acorns of Quercus variabilis, Q. aliena and Q. serrata var. brevipetiolata, the two types were equally harvested and dispersed. This lack of preference suggests that Siberian chipmunks are incapable of distinguishing between sound and insect-infested acorns. However, Siberian chipmunks removed the pericarp from acorns of these three oak species prior to dispersing and caching them. Consequently, significantly more sound acorns were scatter hoarded and more infested acorns were immediately consumed. Additionally, indoor germination experiments showed that pericarp removal by chipmunks promoted acorn germination, while artificial removal showed no significant effect. Our results show that pericarp removal allows Siberian chipmunks to effectively discriminate against insect-infested acorns and may represent an adaptive behavior for cache management. Because of the germination patterns of pericarp-removed acorns, we argue that the foraging behavior of Siberian chipmunks could have potential impacts on the dispersal and germination of acorns from various oak species.

  16. Analytical derivation of traffic patterns in cache-coherent shared-memory systems

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Sparsø, Jens

    2011-01-01

    This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy...... varies with the application’s data sharing pattern, and is around 65% in the average case and 1% in the best case when considering the traffic pattern as a whole. For individual connections, our method produces tight worst-case bandwidths....

  17. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  18. Mercury and Methylmercury concentrations and loads in Cache Creek Basin, California, January 2000 through May 2001

    Science.gov (United States)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darrell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and mass loads of total mercury and methylmercury in streams draining abandoned mercury mines and near geothermal discharge in Cache Creek Basin, California, were measured during a 17-month period from January 2000 through May 2001. Rainfall and runoff averages during the study period were lower than long-term averages. Mass loads of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, were generally the highest during or after winter rainfall events. During the study period, mass loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas because of a lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a source of mercury and methylmercury to downstream receiving bodies of water such as the Delta of the San Joaquin and Sacramento Rivers. Much of the mercury in these sediments was deposited over the last 150 years by erosion and stream discharge from abandoned mines or by continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas. These constituents included aqueous concentrations of boron, chloride, lithium, and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges were enriched with more oxygen-18 relative to oxygen-16 than meteoric waters, whereas the enrichment by stable isotopes of water from much of the runoff from abandoned mines was similar to that of meteoric water. Geochemical signatures from stable isotopes and trace-element concentrations may be useful as tracers of total mercury or methylmercury from specific locations; however, mercury and methylmercury are not conservatively transported. A distinct mixing trend of
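Mass loads of the kind reported in this study are computed by combining a constituent concentration with the streamflow at the time of sampling. A minimal sketch of the unit conversion, with illustrative numbers rather than values from the Cache Creek study:

```python
# Instantaneous constituent load from concentration and streamflow.
# The example concentration and discharge are invented for illustration.
def load_g_per_day(conc_ng_per_L, flow_cfs):
    """Mass load (g/day) from concentration (ng/L) and discharge (ft3/s)."""
    L_PER_FT3 = 28.3168          # liters per cubic foot
    SECONDS_PER_DAY = 86400
    flow_L_per_day = flow_cfs * L_PER_FT3 * SECONDS_PER_DAY
    return conc_ng_per_L * flow_L_per_day * 1e-9   # ng -> g

print(round(load_g_per_day(10.0, 500.0), 2))  # 10 ng/L at 500 ft3/s
```

Because load scales with discharge, even modest concentrations during winter runoff events can dominate the annual mass budget, consistent with the study's observation that the highest loads accompanied rainfall events.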

  19. unconventional natural gas reservoirs

    International Nuclear Information System (INIS)

    Correa G, Tomas F; Osorio, Nelson; Restrepo R, Dora P

    2009-01-01

This work is an exploration of different unconventional gas reservoirs worldwide: coal bed methane, tight gas, shale gas, and gas hydrates, describing aspects such as definitions, reserves, production methods, environmental issues, and economics. The overview also mentions preliminary studies of these sources in Colombia.

  20. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.
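The quoted 99% vector/parallel fraction connects to those speedups through Amdahl's law. A quick check (the 0.99 fraction is from the abstract; everything else is the standard formula):

```python
# Amdahl's law: with fraction p of the work accelerated by factor s,
# overall speedup is 1 / ((1 - p) + p / s), capped at 1 / (1 - p).
# The p = 0.99 figure comes from the abstract; s values are illustrative.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

print(round(amdahl(0.99, 1e9)))  # asymptotic cap for p = 0.99: 100
```

The reported speedups of 65 and 81 therefore sit plausibly below the theoretical ceiling of 100 implied by a 99% parallel fraction.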

  1. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2002-03-31

The West Carney Field in Lincoln County, Oklahoma is one of the few newly discovered oil fields in Oklahoma. Although profitable, the field exhibits several unusual characteristics. These include decreasing water-oil ratios, decreasing gas-oil ratios, decreasing bottomhole pressures during shut-ins in some wells, and transient behavior of water production in many wells. This report explains the unusual characteristics of West Carney Field based on detailed geological and engineering analyses. We propose a geological history that explains the presence of mobile water and oil in the reservoir. The combination of matrix and fractures in the reservoir explains the reservoir's flow behavior. We confirm our hypothesis by matching observed performance with a simulated model and develop procedures for correlating core data to log data so that the analysis can be extended to other, similar fields where the core coverage may be limited.

  2. APPLICATION OF INTEGRATED RESERVOIR MANAGEMENT AND RESERVOIR CHARACTERIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Jack Bergeron; Tom Blasingame; Louis Doublet; Mohan Kelkar; George Freeman; Jeff Callard; David Moore; David Davies; Richard Vessell; Brian Pregger; Bill Dixon; Bryce Bezant

    2000-03-01

Reservoir performance and characterization are vital parameters during the development phase of a project. Infill drilling of wells on a uniform spacing, without regard to characterization, does not optimize development because it fails to account for the complex nature of reservoir heterogeneities present in many low permeability reservoirs, especially carbonate reservoirs. These reservoirs are typically characterized by: (1) large, discontinuous pay intervals; (2) vertical and lateral changes in reservoir properties; (3) low reservoir energy; (4) high residual oil saturation; and (5) low recovery efficiency. The operational problems encountered in these types of reservoirs include: (1) poor or inadequate completions and stimulations; (2) early water breakthrough; (3) poor reservoir sweep efficiency in contacting oil throughout the reservoir as well as in the nearby well regions; (4) channeling of injected fluids due to preferential fracturing caused by excessive injection rates; and (5) limited data availability and poor data quality. Infill drilling operations need only target areas of the reservoir which will be economically successful. If the most productive areas of a reservoir can be accurately identified by combining the results of geological, petrophysical, reservoir performance, and pressure transient analyses, then this "integrated" approach can be used to optimize reservoir performance during secondary and tertiary recovery operations without resorting to "blanket" infill drilling methods. New and emerging technologies such as geostatistical modeling, rock typing, and rigorous decline type curve analysis can be used to quantify reservoir quality and the degree of interwell communication. These results can then be used to develop a 3-D simulation model for prediction of infill locations. The application of reservoir surveillance techniques to identify additional reservoir "pay" zones

  3. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in cloud-based video surveillance systems, replicas occupy additional storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant-replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant-replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
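The location-correlation idea can be sketched as an LRU cache that pre-fetches clips from neighboring cameras on each read. The adjacency map and cache policy below are invented for illustration; the paper's actual replica and cache mechanisms are more elaborate.

```python
from collections import OrderedDict

# Invented adjacency map: cameras physically near each other, whose
# footage is likely to be requested together during playback.
NEIGHBORS = {"cam1": ["cam2"], "cam2": ["cam1", "cam3"], "cam3": ["cam2"]}

class PrefetchCache:
    """LRU cache that pre-fetches clips from location-correlated cameras."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def _put(self, key):
        self.store[key] = f"clip:{key}"     # stand-in for fetched video data
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def read(self, camera):
        hit = camera in self.store
        self._put(camera)
        for n in NEIGHBORS.get(camera, []):  # prefetch correlated cameras
            if n not in self.store:
                self._put(n)
        return hit

cache = PrefetchCache(capacity=4)
print(cache.read("cam1"))  # False: cold miss
print(cache.read("cam2"))  # True: prefetched via cam1's neighbors
```

The second read hits because the first read pulled in its neighbor, which is the response-time improvement the cache strategy targets.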

  4. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates the transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (e.g., CSE, loop-invariant code motion). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. In stack-based environments, however, the need to push and pop transient values leaves room for further performance improvement. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient-variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back end of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  5. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  6. Flood Frequency Analysis of Future Climate Projections in the Cache Creek Watershed

    Science.gov (United States)

    Fischer, I.; Trihn, T.; Ishida, K.; Jang, S.; Kavvas, E.; Kavvas, M. L.

    2014-12-01

Effects of climate change on hydrologic flow regimes, particularly extreme events, necessitate modeling of future flows to best inform water resources management. Future flow projections may be modeled through the joint use of carbon emission scenarios, general circulation models, and watershed models. This research effort ran 13 simulations for carbon emission scenarios (taken from the A1, A2 and B1 families) over the 21st century (2001-2100) for the Cache Creek watershed in Northern California. Atmospheric data from general circulation models, CCSM3 and ECHAM5, were dynamically downscaled to a 9 km resolution using MM5, a regional mesoscale model, before being input into the physically based watershed environmental hydrology (WEHY) model. Ensemble mean and standard deviation of simulated flows describe the expected hydrologic system response. Frequency histograms and cumulative distribution functions characterize the range of hydrologic responses that may occur. The modeled flow results comprise a dataset suitable for time series and frequency analysis, allowing for more robust system characterization, including indices such as the 100-year flood return period. These results are significant for water quality management, as the Cache Creek watershed is severely impacted by mercury pollution from historic mining activities. Extreme flow events control mercury fate and transport, affecting the downstream water bodies of the Sacramento River and Sacramento-San Joaquin Delta, which provide drinking water to over 25 million people.
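Indices such as the 100-year return period come from frequency analysis of annual maximum flows. A minimal empirical version using the Weibull plotting position, with synthetic flows (the study's series comes from the WEHY simulations):

```python
# Empirical return periods from annual maximum flows via the Weibull
# plotting position T = (n + 1) / m, where m is the rank of the flood
# (1 = largest). The flows below are synthetic, not study results.
annual_max_cfs = [4200, 9800, 3100, 15000, 7600, 5400, 12200, 2800, 6700, 8900]

def return_periods(series):
    """Pair each flow with its empirical return period (years)."""
    n = len(series)
    ranked = sorted(series, reverse=True)   # rank 1 = largest flood
    return [(flow, (n + 1) / rank) for rank, flow in enumerate(ranked, start=1)]

for flow, T in return_periods(annual_max_cfs)[:3]:
    print(f"{flow} cfs ~ {T:.1f}-yr event")
```

With only a short record, empirical estimates cannot reach the 100-year level directly; in practice a distribution (e.g., log-Pearson III) is fitted to extrapolate, which is why a century of simulated flows is valuable.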

  7. Work reservoirs in thermodynamics

    Science.gov (United States)

    Anacleto, Joaquim

    2010-05-01

    We stress the usefulness of the work reservoir in the formalism of thermodynamics, in particular in the context of the first law. To elucidate its usefulness, the formalism is then applied to the Joule expansion and other peculiar and instructive experimental situations, clarifying the concepts of configuration and dissipative work. The ideas and discussions presented in this study are primarily intended for undergraduate students, but they might also be useful to graduate students, researchers and teachers.
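    As a worked illustration of the first-law bookkeeping the paper revisits (a standard textbook result, not quoted from the paper; here W denotes work done on the gas):

```latex
% First law, with W the work done ON the gas:
\[ \Delta U = Q + W \]
% Joule (free) expansion into vacuum inside rigid adiabatic walls:
% no heat exchange (Q = 0) and no configuration work (W = 0), hence
\[ \Delta U = 0 \qquad\Rightarrow\qquad T_f = T_i
   \quad \text{for an ideal gas, where } U = U(T). \]
```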

  8. Field guide to Muddy Formation outcrops, Crook County, Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Rawn-Schatzinger, V.

    1993-11-01

    The objectives of this research program are to (1) determine the reservoir characteristics and production problems of shoreline barrier reservoirs; and (2) develop methods and methodologies to effectively characterize shoreline barrier reservoirs to predict flow patterns of injected and produced fluids. Two reservoirs were selected for detailed reservoir characterization studies -- Bell Creek field, Carter County, Montana, which produces from the Lower Cretaceous (Albian-Cenomanian) Muddy Formation, and Patrick Draw field, Sweetwater County, Wyoming, which produces from the Upper Cretaceous (Campanian) Almond Formation of the Mesaverde Group. An important component of the research project was to use information from outcrop exposures of the producing formations to study the spatial variations of reservoir properties and the degree to which outcrop information can be used in the construction of reservoir models. This report contains the data and analyses collected from outcrop exposures of the Muddy Formation, located in Crook County, Wyoming, 40 miles south of Bell Creek oil field. The outcrop data set contains permeability, porosity, petrographic, grain size, and geologic data from 1-inch-diameter core plugs drilled from the outcrop face, as well as geological descriptions and sedimentological interpretations of the outcrop exposures. The outcrop data set provides information about facies characteristics and geometries and the spatial distribution of permeability and porosity on interwell scales. Appendices within this report include micropaleontological analyses of selected outcrop samples, an annotated bibliography of papers on the Muddy Formation in the Powder River Basin, and over 950 permeability and porosity values measured from 1-inch-diameter core plugs drilled from the outcrop. All data contained in this report are available in electronic format upon request. The core plugs drilled from the outcrop are available for measurement.

  9. Advantageous Reservoir Characterization Technology in Extra Low Permeability Oil Reservoirs

    Directory of Open Access Journals (Sweden)

    Yutian Luo

    2017-01-01

    Full Text Available This paper took extra low permeability reservoirs in Dagang Liujianfang Oilfield as an example and analyzed different types of microscopic pore structures by SEM, casting thin sections, fluorescence microscopy, and other methods. With the adoption of rate-controlled mercury penetration, NMR, and other advanced techniques, and based on evaluation parameters, namely throat radius, volume percentage of mobile fluid, start-up pressure gradient, and clay content, the classification and assessment method for extra low permeability reservoirs was improved and the parameter boundaries of the advantageous reservoirs were established. The physical properties of reservoirs at different depths differ: clay mineral content varies over a range of 7.0%, throat radius over 1.81 μm, start-up pressure gradient over 0.23 MPa/m, and movable fluid percentage over 17.4%. Class II reservoirs account for 12.16%, class III reservoirs for 78.29%, and class IV reservoirs for 9.56%. According to the comparison of different development methods, class II reservoirs are most suitable for waterflooding development, and class IV reservoirs are most suitable for gas injection development. Combining gas injection in the upper section of the reservoir with water injection in the lower section is expected to achieve the best results.
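    The multi-parameter classification described above can be sketched as a simple threshold-scoring rule. The threshold values below are invented placeholders, not the boundaries established in the paper, which reports its own parameter limits for each class.

```python
# Purely illustrative classifier in the spirit of the paper's four-class
# scheme. All threshold values are hypothetical, NOT from the study.

def classify_reservoir(throat_radius_um, movable_fluid_pct,
                       startup_grad_mpa_per_m, clay_pct):
    score = 0
    score += throat_radius_um >= 1.0          # wider throats favor flow
    score += movable_fluid_pct >= 50.0        # more mobile fluid
    score += startup_grad_mpa_per_m <= 0.1    # lower start-up pressure
    score += clay_pct <= 5.0                  # less clay damage
    return {4: "I", 3: "II", 2: "III"}.get(score, "IV")

print(classify_reservoir(1.5, 60.0, 0.05, 3.0))   # best on all criteria
print(classify_reservoir(0.2, 30.0, 0.3, 9.0))    # worst on all criteria
```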

  10. Integrating Cache-Related Pre-emption Delays into Analysis of Fixed Priority Scheduling with Pre-emption Thresholds

    NARCIS (Netherlands)

    Bril, R.J.; Altmeyer, S.; van den Heuvel, M.H.P.; Davis, R.I.; Behnam, M.

    2014-01-01

    Cache-related pre-emption delays (CRPD) have been integrated into the schedulability analysis of sporadic tasks with constrained deadlines for fixed-priority pre-emptive scheduling (FPPS). This paper generalizes that work by integrating CRPD into the schedulability analysis of tasks with arbitrary

  11. Assessment of watershed vulnerability to climate change for the Uinta-Wasatch-Cache and Ashley National Forests, Utah

    Science.gov (United States)

    Janine Rice; Tim Bardsley; Pete Gomben; Dustin Bambrough; Stacey Weems; Sarah Leahy; Christopher Plunkett; Charles Condrat; Linda A. Joyce

    2017-01-01

    Watersheds on the Uinta-Wasatch-Cache and Ashley National Forests provide many ecosystem services, and climate change poses a risk to these services. We developed a watershed vulnerability assessment to provide scientific information for land managers facing the challenge of managing these watersheds. Literature-based information and expert elicitation is used to...

  12. Tannin concentration enhances seed caching by scatter-hoarding rodents: An experiment using artificial ‘seeds’

    Science.gov (United States)

    Wang, Bo; Chen, Jin

    2008-11-01

    Tannins are very common among plant seeds but their effects on the fate of seeds, for example, via mediation of the feeding preferences of scatter-hoarding rodents, are poorly understood. In this study, we created a series of artificial 'seeds' that differed only in tannin concentration and the type of tannin, and placed them in a pine forest in the Shangri-La Alpine Botanical Garden, Yunnan Province of China. Two rodent species (Apodemus latronum and A. chevrieri) showed significant preferences for 'seeds' with different tannin concentrations. A significantly higher proportion of seeds with low tannin concentration were consumed in situ compared with seeds with a higher tannin concentration. Meanwhile, the tannin concentration was significantly positively correlated with the proportion of seeds cached. The different types of tannin (hydrolysable tannin vs condensed tannin) did not differ significantly in their effect on the proportion of seeds eaten in situ vs seeds cached. Tannin concentrations had no significant effect on the distance that cached seeds were carried, which suggests that rodents may respond to different seed traits in deciding whether or not to cache seeds and how far they will transport them.

  13. Allegheny County TIF Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Tax Increment Financing (TIF) outline parcels for Allegheny County, PA. TIF closing books contain all necessary documentation related to a TIF in order to close on...

  14. Allegheny County Greenways

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Greenways data was compiled by the Allegheny Land Trust as a planning effort in the development of Allegheny Places, the Allegheny County Comprehensive Plan. The...

  15. Allegheny County Parcel Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains parcel boundaries attributed with county block and lot number. Use the Property Information Extractor for more control downloading a filtered...

  16. Allegheny County Parks Outlines

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Shows the size and shape of the nine Allegheny County parks. If viewing this description on the Western Pennsylvania Regional Data Center’s open data portal...

  17. Allegheny County Dam Locations

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset shows the point locations of dams in Allegheny County. If viewing this description on the Western Pennsylvania Regional Data Center’s open data portal...

  18. Allegheny County Plumbers

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — All master plumbers must be registered with the Allegheny County Health Department. Only Registered Master Plumbers who possess a current plumbing license or...

  19. Allegheny County Hospitals

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The data on health care facilities includes the name and location of all the hospitals and primary care facilities in Allegheny County. The current listing of...

  20. Allegheny County Boundary

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains the Allegheny County boundary. If viewing this description on the Western Pennsylvania Regional Data Center’s open data portal...

  1. Allegheny County Employee Salaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Employee salaries are a regular Right to Know request the County receives. Here is the disclaimer language that is included with the dataset from the Open Records...

  2. Allegheny County Crash Data

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Contains locations and information about every crash incident reported to the police in Allegheny County from 2004 to 2016. Fields include injury severity,...

  3. Allegheny County Tobacco Vendors

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The tobacco vendor information provides the location of all tobacco vendors in Allegheny County in 2015. Data was compiled from administrative records managed by...

  4. Allegheny County Traffic Counts

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Traffic sensors at over 1,200 locations in Allegheny County collect vehicle counts for the Pennsylvania Department of Transportation. Data included in the Health...

  5. Beaver County Crash Data

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Contains locations and information about every crash incident reported to the police in Beaver County from 2011 to 2015. Fields include injury severity, fatalities,...

  6. Taos County Roads

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — Vector line shapefile under the stewardship of the Taos County Planning Department depicting roads in Taos County, New Mexico. Originally under the Emergency...

  7. County Population Vulnerability

    Data.gov (United States)

    City and County of Durham, North Carolina — This layer summarizes the social vulnerability index for populations within each county in the United States at scales 1:3m and below. It answers the question...

  8. Allegheny County Depression Medication

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — These Census Tract-level datasets described here provide de-identified diagnosis data for customers of three managed care organizations in Allegheny County (Gateway...

  9. Allegheny County Anxiety Medication

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — These Census Tract-level datasets described here provide de-identified diagnosis data for customers of three managed care organizations in Allegheny County (Gateway...

  10. Butler County Crash Data

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Contains locations and information about every crash incident reported to the police in Butler County from 2011 to 2015. Fields include injury severity, fatalities,...

  11. Allegheny County Property Viewer

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Webmap of Allegheny municipalities and parcel data. Zoom for a clickable parcel map with owner name, property photograph, and link to the County Real Estate website...

  12. Allegheny County Property Assessments

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Real Property parcel characteristics for Allegheny County, PA. Includes information pertaining to land, values, sales, abatements, and building characteristics (if...

  13. Allegheny County Street Centerlines

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains the locations of the street centerlines for vehicular and foot traffic in Allegheny County. Street Centerlines are classified as Primary Road,...

  14. Allegheny County Major Rivers

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains locations of major rivers that flow through Allegheny County. These shapes have been taken from the Hydrology dataset. The Ohio River,...

  15. Allegheny County Asbestos Permits

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Current asbestos permit data issued by the County for commercial building demolitions and renovations as required by the EPA. This file is updated daily and can be...

  16. Allegheny County Obesity Rates

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Obesity rates for each Census Tract in Allegheny County were produced for the study “Developing small-area predictions for smoking and obesity prevalence in the...

  17. Allegheny County Smoking Rates

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Smoking rates for each Census Tract in Allegheny County were produced for the study “Developing small-area predictions for smoking and obesity prevalence in the...

  18. Interdisciplinary study of reservoir compartments and heterogeneity. Final report, October 1, 1993--December 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Van Kirk, C.

    1998-01-01

    A case study approach using Terry Sandstone production from the Hambert-Aristocrat Field, Weld County, Colorado was used to document the process of integration. One specific project goal is to demonstrate how a multidisciplinary approach can be used to detect reservoir compartmentalization and improve reserve estimates. The final project goal is to derive a general strategy for integration for independent operators. Teamwork is the norm for the petroleum industry where teams of geologists, geophysicists, and petroleum engineers work together to improve profits through a better understanding of reservoir size, compartmentalization, and orientation as well as reservoir flow characteristics. In this manner, integration of data narrows the uncertainty in reserve estimates and enhances reservoir management decisions. The process of integration has proven to be iterative. Integration has helped identify reservoir compartmentalization and reduce the uncertainty in the reserve estimates. This research report documents specific examples of integration and the economic benefits of integration.

  19. 75 FR 15458 - Request for Small Reclamation Projects Act Loan To Construct Narrows Dam in Sanpete County, UT

    Science.gov (United States)

    2010-03-29

    ... Projects Act Loan To Construct Narrows Dam in Sanpete County, UT AGENCY: Bureau of Reclamation, Interior... construction by SWCD of the proposed Narrows Dam and reservoir, a non-Federal project to be located on... construction of the 17,000 acre-foot Narrows Dam and reservoir on Gooseberry Creek, pipelines to deliver the...

  20. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    Science.gov (United States)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we will first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We will then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster - while caching is not a native feature of CephFS (it exists only in the Ceph Object store), we will show how one can implement a caching mechanism by exploiting an implementation at a lower level. As our illustration, we will present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms for applicable uses such as distributed storage systems seeking an overall IO performance gain.
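    The tiered-caching idea behind Flashcache, dm-cache, and bcache can be sketched with a toy simulation: a small, fast tier (standing in for an SSD) absorbs hot blocks in front of a large, slow backing store (the HDD pool). Block numbers and the workload below are hypothetical; the real tools operate at the block layer, not in Python.

```python
# Toy two-tier cache: an LRU "SSD" tier in front of an "HDD" backing store.
from collections import OrderedDict

class TieredCache:
    def __init__(self, ssd_blocks):
        self.ssd = OrderedDict()         # block id -> True, in LRU order
        self.capacity = ssd_blocks
        self.hits = 0                    # served from the fast tier
        self.misses = 0                  # served from the slow tier

    def read(self, block):
        if block in self.ssd:
            self.ssd.move_to_end(block)  # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1             # fetched from HDD, then promoted
            self.ssd[block] = True
            if len(self.ssd) > self.capacity:
                self.ssd.popitem(last=False)  # evict least-recently used

cache = TieredCache(ssd_blocks=4)
workload = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1]    # skewed toward a few hot blocks
for b in workload:
    cache.read(b)
print(cache.hits, cache.misses)              # → 5 5
```

    A skewed access pattern is exactly the case where a small fast tier pays off; a uniformly random workload larger than the SSD would see little benefit.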

  1. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    Science.gov (United States)

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay-Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay-Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For more in-depth scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available online.

  2. An ecological response model for the Cache la Poudre River through Fort Collins

    Science.gov (United States)

    Shanahan, Jennifer; Baker, Daniel; Bledsoe, Brian P.; Poff, LeRoy; Merritt, David M.; Bestgen, Kevin R.; Auble, Gregor T.; Kondratieff, Boris C.; Stokes, John; Lorie, Mark; Sanderson, John

    2014-01-01

    The Poudre River Ecological Response Model (ERM) is a collaborative effort initiated by the City of Fort Collins and a team of nine river scientists to provide the City with a tool to improve its understanding of the past, present, and likely future conditions of the Cache la Poudre River ecosystem. The overall ecosystem condition is described through the measurement of key ecological indicators such as the shape and character of the stream channel and banks, streamside plant communities and floodplain wetlands, aquatic vegetation and insects, and fishes, both coolwater trout and warmwater native species. The 13-mile-long study area of the Poudre River flows through Fort Collins, Colorado, and is located in an ecological transition zone between the upstream, cold-water, steep-gradient system in the Front Range of the Southern Rocky Mountains and the downstream, warm-water, low-gradient reach in the Colorado high plains.

  3. Improved cache performance in Monte Carlo transport calculations using energy banding

    Science.gov (United States)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
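    The core idea can be sketched simply: instead of looking up cross sections for particles in arbitrary energy order (poor locality in a huge table), particles are grouped into contiguous energy bands and each band is processed together, so only that band's slice of the table stays hot in cache. The band edges and particle energies below are hypothetical; this is a sketch of the grouping step, not the paper's implementation.

```python
# Illustrative sketch of energy banding for cache locality in MC transport.
def band_particles(energies, band_edges):
    """Group particle indices into energy bands defined by sorted edges."""
    bands = [[] for _ in range(len(band_edges) - 1)]
    for i, e in enumerate(energies):
        for b in range(len(band_edges) - 1):
            if band_edges[b] <= e < band_edges[b + 1]:
                bands[b].append(i)
                break
    return bands

energies = [0.3, 5.0, 0.7, 12.0, 2.5, 0.05]   # MeV, made-up values
edges = [0.0, 1.0, 10.0, 20.0]                # three hypothetical bands
bands = band_particles(energies, edges)
# Processing band-by-band touches only one slice of the cross-section
# table per pass, instead of striding across the whole table.
for b, members in enumerate(bands):
    print(f"band {b}: particles {members}")
```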

  4. The Identification and Treatment of a Unique Cache of Organic Artefacts from Menorca's Bronze Age

    Directory of Open Access Journals (Sweden)

    Howard Wellman

    1996-05-01

    Full Text Available A unique cache of organic artefacts was excavated in March 1995 from Cova d'es Carritx, Menorca, a sealed cave system that was used as a mortuary in the late second or early first millennium BC. This deposit included a set of unique conical tubes made of bovine horn sheath, stuffed with hair or other fibres, and capped with wooden disks. Other materials were found in association with the tubes, including a copper-tin alloy rod. The decision to display some of the tubes required a degree of consolidative strengthening that would conflict with the conservation aim of preserving the artefacts essentially unchanged for future study. The two most complete artefacts were treated by localised consolidation (with Paraloid B-72), while the other two were left untreated. The two consolidated tubes were provided with display-ready mounts, while the others were packaged to minimise the effects of handling and long-term storage.

  5. Caching behaviour by red squirrels may contribute to food conditioning of grizzly bears

    Directory of Open Access Journals (Sweden)

    Julia Elizabeth Put

    2017-08-01

    Full Text Available We describe an interspecific relationship wherein grizzly bears (Ursus arctos horribilis) appear to seek out and consume agricultural seeds concentrated in the middens of red squirrels (Tamiasciurus hudsonicus), which had collected and cached spilled grain from a railway. We studied this interaction by estimating squirrel density, midden density and contents, and bear activity along paired transects that were near (within 50 m) or far (200 m) from the railway. Relative to far ones, near transects had 2.4 times more squirrel sightings, but similar numbers of squirrel middens. Among 15 middens in which agricultural products were found, 14 were near the rail and 4 subsequently exhibited evidence of bear digging. Remote cameras confirmed the presence of squirrels on the rail and bears excavating middens. We speculate that obtaining grain from squirrel middens encourages bears to seek grain on the railway, potentially contributing to their rising risk of collisions with trains.

  6. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
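    The scalability mechanism described above can be sketched as a read-mostly query cache: identical queries from many readers are answered from a cached result with a time-to-live, so the central database sees each distinct query roughly once per TTL window. This is a toy illustration of the caching idea, not Frontier's actual HTTP/squid-based architecture; the backend function and TTL are hypothetical.

```python
# Toy read-mostly query cache with a TTL, in the spirit of a distributed
# database caching layer sitting between many readers and one database.
import time

class QueryCache:
    def __init__(self, backend, ttl_seconds=300.0):
        self.backend = backend          # function: sql -> result
        self.ttl = ttl_seconds
        self.store = {}                 # sql -> (result, timestamp)
        self.backend_calls = 0

    def query(self, sql):
        now = time.time()
        hit = self.store.get(sql)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]               # served from cache
        self.backend_calls += 1         # only cold/expired queries hit the DB
        result = self.backend(sql)
        self.store[sql] = (result, now)
        return result

cache = QueryCache(backend=lambda sql: f"rows for: {sql}")
for _ in range(1000):                   # a thousand identical readers
    cache.query("SELECT * FROM conditions")
print(cache.backend_calls)              # prints 1
```

    The TTL is the trade-off knob: longer TTLs mean fewer database hits but staler data, which suits slowly changing data such as detector conditions.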

  7. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  8. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  9. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
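    The paper's modified model is not reproduced here; as background, the classic stretched-exponential (Kohlrausch) relaxation function it builds on is f(t) = exp(-(t/τ)^β), which reduces to an ordinary exponential at β = 1 and decays more slowly at long times for β < 1. Parameter values below are illustrative only.

```python
# Classic stretched exponential, a standard background form; the paper's
# modified variant and its fitted parameters are not reproduced here.
import math

def stretched_exp(t, tau=1.0, beta=0.5):
    return math.exp(-(t / tau) ** beta)

# For beta < 1 the tail is heavier than a pure exponential's:
for t in (0.1, 1.0, 10.0):
    print(t, stretched_exp(t, beta=0.5), math.exp(-t))
```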

  10. A Cache-Oblivious Implicit Dictionary with the Working Set Property

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Kejlberg-Rasmussen, Casper; Truelsen, Jakob

    2010-01-01

    In this paper we present an implicit dictionary with the working set property, i.e. a dictionary supporting insert(e), delete(x) and predecessor(x) in O(log n) time and search(x) in O(log ℓ) time, where n is the number of elements stored in the dictionary and ℓ is the number of distinct elements searched for since the element with key x was last searched for. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the operations insert(e), delete(x) and ...
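    The quantity ℓ that governs the search bound can be computed for any query sequence. The sketch below uses one common convention (distinct keys searched strictly between the two searches of x, or all distinct keys so far if x is new); the query sequence is arbitrary example data, not from the paper.

```python
# Working-set numbers for a query sequence: for each search of key x,
# ell(x) counts distinct keys searched since the previous search of x.
def working_set_numbers(queries):
    last_seen = {}
    out = []
    for i, x in enumerate(queries):
        if x in last_seen:
            ell = len(set(queries[last_seen[x] + 1:i]))
        else:
            ell = len(set(queries[:i]))   # x is new: all distinct keys so far
        out.append(ell)
        last_seen[x] = i
    return out

print(working_set_numbers(["a", "b", "c", "a", "b", "a"]))  # [0, 1, 2, 2, 2, 1]
```

    A working-set dictionary makes repeated searches of recently used keys cheap: the final search of "a" above costs O(log 1) rather than O(log n).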

  11. Geothermal reservoir management

    Energy Technology Data Exchange (ETDEWEB)

    Scherer, C.R.; Golabi, K.

    1978-02-01

    The optimal management of a hot water geothermal reservoir was considered. The physical system investigated includes a three-dimensional aquifer from which hot water is pumped and circulated through a heat exchanger. Heat removed from the geothermal fluid is transferred to a building complex or other facility for space heating. After passing through the heat exchanger, the (now cooled) geothermal fluid is reinjected into the aquifer. This cools the reservoir at a rate predicted by an expression relating pumping rate, time, and production hole temperature. The economic model proposed in the study maximizes the discounted value of energy transferred across the heat exchanger minus the discounted cost of wells, equipment, and pumping energy. The real value of energy is assumed to increase at r percent per year. A major decision variable is the production or pumping rate (which is constant over the project life). Other decision variables in this optimization are production timing, reinjection temperature, and the economic life of the reservoir at the selected pumping rate. Results show that waiting time to production and production life increase as r increases and decrease as the discount rate increases. Production rate decreases as r increases and increases as the discount rate increases. The optimal injection temperature is very close to the temperature of the steam produced on the other side of the heat exchanger, and is virtually independent of r and the discount rate. Sensitivity of the decision variables to geohydrological parameters was also investigated. Initial aquifer temperature and permeability have a major influence on these variables, although aquifer porosity is of less importance. A penalty was considered for production delay after the lease is granted.
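    The shape of the objective described above, energy value escalating at r percent per year while future cash flows are discounted, can be sketched as a simple net-present-value calculation. All figures (base revenue, cost, rates, horizon) are hypothetical and the study's full optimization over pumping rate, timing, and reinjection temperature is not reproduced.

```python
# Hedged NPV sketch: revenue grows at rate r, everything is discounted at d.
def project_npv(base_revenue, annual_cost, r, d, years):
    npv = 0.0
    for t in range(1, years + 1):
        revenue = base_revenue * (1 + r) ** t   # energy value escalating at r
        npv += (revenue - annual_cost) / (1 + d) ** t
    return npv

# A higher discount rate shrinks the value of far-future energy sales,
# which is why the optimal waiting time and project life shorten with d:
print(round(project_npv(100.0, 40.0, r=0.03, d=0.08, years=20), 2))
```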

  12. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    International Nuclear Information System (INIS)

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium, and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of ¹⁸O compared with meteoric waters, whereas much of the runoff from abandoned mines showed a stable isotopic pattern more consistent with local meteoric water.
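    A load, the quantity this study reports, is obtained from a measured concentration and the streamflow at the time of sampling. The sketch below shows the standard unit conversion; the concentration and flow values are invented, not Cache Creek data.

```python
# Hypothetical illustration of converting concentration + flow into a load.
def daily_load_grams(conc_ng_per_L, flow_m3_per_s):
    """Load (g/day) = C (ng/L) * Q (m3/s) * 86400 s/day * 1000 L/m3 * 1e-9 g/ng."""
    return conc_ng_per_L * flow_m3_per_s * 86400 * 1000 * 1e-9

print(daily_load_grams(10.0, 5.0))   # g/day for made-up C and Q
```

    Because load scales with flow, a few high-flow storm days can dominate the annual mercury export, which is why the winter rainfall events mattered most here.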

  13. Design issues and caching strategies for CD-ROM-based multimedia storage

    Science.gov (United States)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
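
    The C-SCAN policy the authors adapt can be shown in a few lines. This is a generic sketch of circular-SCAN request ordering, not the paper's CD-ROM-specific variant; the block numbers are arbitrary illustrations.

```python
# Minimal C-SCAN (circular SCAN): serve pending requests in increasing block
# order from the current head position, then wrap around to the lowest
# pending block and continue in the same direction.

def c_scan_order(head, requests):
    """Return requests in C-SCAN service order starting at `head`."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind   # wrap to the start after the last block

print(c_scan_order(50, [10, 95, 52, 30, 70]))  # → [52, 70, 95, 10, 30]
```

    One-directional service is what makes C-SCAN attractive for constant-linear-velocity optical drives, where reversing direction is comparatively expensive.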

  14. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
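
    The dual-timestamp eviction idea can be sketched as follows. The class and policy below are an invented illustration of combining expiration-first eviction with an arrival-time fallback; they are not the framework's actual API, and the "semantic importance" ranking is omitted.

```python
# Hedged sketch of the dual-timestamp eviction idea (class and method names
# are invented, not the framework's API): evict already-expired entries first,
# falling back to oldest arrival when nothing has expired yet.

class ExpiryAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                  # key -> (arrival, expiration, value)

    def put(self, key, value, arrival, expiration, now):
        if len(self.entries) >= self.capacity:
            self._evict(now)
        self.entries[key] = (arrival, expiration, value)

    def _evict(self, now):
        expired = [k for k, (_, exp, _) in self.entries.items() if exp <= now]
        if expired:
            # evict the longest-expired entry first
            victim = min(expired, key=lambda k: self.entries[k][1])
        else:
            # sliding-window style fallback: evict the oldest arrival
            victim = min(self.entries, key=lambda k: self.entries[k][0])
        del self.entries[victim]

cache = ExpiryAwareCache(2)
cache.put("a", 1, arrival=0, expiration=5, now=0)
cache.put("b", 2, arrival=1, expiration=3, now=1)
cache.put("c", 3, arrival=4, expiration=9, now=4)  # "b" expired at t=3: evicted
print(sorted(cache.entries))  # → ['a', 'c']
```

    A pure arrival-time window would instead have evicted "a" here, illustrating the paper's point that arrival order alone is inadequate when expiration periods and window size disagree.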

  15. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California.

    Science.gov (United States)

    Domagalski, Joseph L; Alpers, Charles N; Slotton, Darell G; Suchanek, Thomas H; Ayers, Shaun M

    2004-07-05

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff, from abandoned mines, or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of (18)O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water.

  16. Advances in photonic reservoir computing

    Science.gov (United States)

    Van der Sande, Guy; Brunner, Daniel; Soriano, Miguel C.

    2017-05-01

    We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir's complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
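
    A software analogue of the scheme reviewed here is the echo state network: a fixed random recurrent system whose transient response is read out by a trained linear layer. The numpy sketch below is an assumed generic example (optical implementations replace the matrix dynamics with a physical substrate), not code from the review.

```python
# Minimal echo state network: the "reservoir" (W, W_in) is random and fixed,
# only the linear readout W_out is trained, here by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                                    # reservoir size (arbitrary)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))      # input weights, fixed
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # recurrent weights, fixed
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape [T, 1]."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)        # untrained transient dynamics
        states.append(x.copy())
    return np.array(states)

# Train only the readout to predict the next sample of a sine wave
u = np.sin(np.linspace(0, 20 * np.pi, 1000)).reshape(-1, 1)
X, y = run_reservoir(u[:-1]), u[1:, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
```

    The point the review emphasizes holds here too: all the training effort sits in the cheap linear solve, which is why a fixed physical medium can serve as the reservoir.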

  17. Encapsulated microsensors for reservoir interrogation

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Eddie Elmer; Aines, Roger D.; Spadaccini, Christopher M.

    2016-03-08

    In one general embodiment, a system includes at least one microsensor configured to detect one or more conditions of a fluidic medium of a reservoir; and a receptacle, wherein the receptacle encapsulates the at least one microsensor. In another general embodiment, a method includes injecting the encapsulated at least one microsensor as recited above into a fluidic medium of a reservoir; and detecting one or more conditions of the fluidic medium of the reservoir.

  18. Tracking Seed Fates of Tropical Tree Species: Evidence for Seed Caching in a Tropical Forest in North-East India

    Science.gov (United States)

    Sidhu, Swati; Datta, Aparajita

    2015-01-01

    Rodents affect the post-dispersal fate of seeds by acting either as on-site seed predators or as secondary dispersers when they scatter-hoard seeds. The tropical forests of north-east India harbour a high diversity of little-studied terrestrial murid and hystricid rodents. We examined the role played by these rodents in determining the seed fates of tropical evergreen tree species in a forest site in north-east India. We selected ten tree species (3 mammal-dispersed and 7 bird-dispersed) that varied in seed size and followed the fates of 10,777 tagged seeds. We used camera traps to determine the identity of rodent visitors, visitation rates and their seed-handling behavior. Seeds of all tree species were handled by at least one rodent taxon. Overall rates of seed removal (44.5%) were much higher than direct on-site seed predation (9.9%), but seed-handling behavior differed between the terrestrial rodent groups: two species of murid rodents removed and cached seeds, and two species of porcupines were on-site seed predators. In addition, a true cricket, Brachytrupes sp., cached seeds of three species underground. We found 309 caches formed by the rodents and the cricket; most were single-seeded (79%) and seeds were moved up to 19 m. Over 40% of seeds were re-cached from primary cache locations, while about 12% germinated in the primary caches. Seed removal rates varied widely amongst tree species, from 3% in Beilschmiedia assamica to 97% in Actinodaphne obovata. Seed predation was observed in nine species. Chisocheton cumingianus (57%) and Prunus ceylanica (25%) had moderate levels of seed predation while the remaining species had less than 10% seed predation. We hypothesized that seed traits that provide information on resource quantity would influence rodent choice of a seed, while traits that determine resource accessibility would influence whether seeds are removed or eaten. Removal rates significantly decreased (p seed size. Removal rates were significantly

  19. Reservoir Simulations of Low-Temperature Geothermal Reservoirs

    Science.gov (United States)

    Bedre, Madhur Ganesh

    The eastern United States generally has lower temperature gradients than the western United States. However, West Virginia, in particular, has higher temperature gradients compared to other eastern states. A recent study at Southern Methodist University by Blackwell et al. has shown the presence of a hot spot in the eastern part of West Virginia with temperatures reaching 150°C at a depth of between 4.5 and 5 km. This thesis examines similar reservoirs at a depth of around 5 km resembling the geology of West Virginia, USA. The temperature gradients used are in accordance with the SMU study. In order to assess the effects of geothermal reservoir conditions on the lifetime of a low-temperature geothermal system, a sensitivity analysis was performed on the following seven natural and human-controlled parameters within a geothermal reservoir: reservoir temperature, injection fluid temperature, injection flow rate, porosity, rock thermal conductivity, water loss (%) and well spacing. This sensitivity analysis was completed using the ‘one factor at a time’ (OFAT) and ‘Plackett-Burman design’ methods. The data used for this study were obtained by carrying out reservoir simulations using the TOUGH2 simulator. The second part of this work creates a database of thermal potential and time-dependent reservoir conditions for low-temperature geothermal reservoirs by studying a number of possible scenarios. Variations in the parameters identified in the sensitivity analysis are used to expand the scope of the database. Main results include the thermal potential of the reservoir, the pressure and temperature profiles of the reservoir over its operational life (30 years for this study), the plant capacity and required pumping power. The results of this database will help the supply curve calculations for low-temperature geothermal reservoirs in the United States, which is the long-term goal of the work being done by the geothermal research group under Dr. Anderson at
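
    The OFAT screening described above can be sketched generically: perturb one factor at a time about a baseline and record the relative change in a response. The response function below is a toy stand-in, not TOUGH2, and the baseline values are invented.

```python
# One-factor-at-a-time (OFAT) sensitivity screening over a toy response.

def thermal_output(params):
    """Toy stand-in for a reservoir-simulation response (MW); not TOUGH2."""
    return (0.05 * params["reservoir_temp_C"]
            - 0.02 * params["injection_temp_C"]
            + 0.3 * params["flow_rate_kg_s"]
            + 10.0 * params["porosity"])

baseline = {"reservoir_temp_C": 150.0, "injection_temp_C": 70.0,
            "flow_rate_kg_s": 40.0, "porosity": 0.1}

def ofat_sensitivity(response, baseline, delta=0.1):
    """Relative response change for a +10% change in each factor alone."""
    base = response(baseline)
    effects = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * (1 + delta)
        effects[name] = (response(perturbed) - base) / base
    return effects

effects = ofat_sensitivity(thermal_output, baseline)
```

    OFAT is cheap (one run per factor) but cannot detect interactions between factors, which is why the thesis pairs it with a Plackett-Burman design.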

  20. A review of reservoir desiltation

    DEFF Research Database (Denmark)

    Brandt, Anders

    2000-01-01

    physical geography, hydrology, desiltation efficiency, reservoir flushing, density-current venting, sediment sluicing, erosion pattern, downstream effects, flow characteristics, sedimentation

  1. Reservoir sedimentation; a literature survey

    NARCIS (Netherlands)

    Sloff, C.J.

    1991-01-01

    A survey of literature is made on reservoir sedimentation, one of the most threatening processes for world-wide reservoir performance. The sedimentation processes, their impacts, and their controlling factors are assessed from a hydraulic engineering point of view with special emphasis on

  2. FRACTURED PETROLEUM RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Abbas Firoozabadi

    1999-06-11

    The four chapters described in this report cover a variety of subjects that not only give insight into the understanding of multiphase flow in fractured porous media, but also provide a major contribution toward the understanding of flow processes with in-situ phase formation. In the following, a summary of all the chapters is provided. Chapter I addresses issues related to water injection in water-wet fractured porous media. There are two parts in this chapter. Part I covers an extensive set of measurements for water injection in water-wet fractured porous media. Both single matrix block and multiple matrix block tests are covered. There are two major findings from these experiments: (1) co-current imbibition can be more efficient than counter-current imbibition due to lower residual oil saturation and higher oil mobility, and (2) tight fractured porous media can be more efficient than permeable porous media when subjected to water injection. These findings bear directly on the type of tests one can perform in the laboratory and on deciding the fate of water injection in fractured reservoirs. Part II of Chapter I presents modeling of water injection in water-wet fractured media by modifying the Buckley-Leverett theory. A major element of the new model is the multiplication of the transfer flux by the fracture saturation raised to a power of 1/2. This simple model can account for both co-current and counter-current imbibition and is computationally very efficient. It can be orders of magnitude faster than a conventional dual-porosity model. Part II also presents the results of water injection tests in very tight rocks of some 0.01 md permeability. Oil recovery from water imbibition tests in such a tight rock can be as high as 25 percent. Chapter II discusses solution gas-drive for cold production from heavy-oil reservoirs. The impetus for this work is the study of new gas-phase formation from an in-situ process, which can be significantly

  3. Reservoir engineering and hydrogeology

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    Summaries are included which show advances in the following areas: fractured porous media, flow in single fractures or networks of fractures, hydrothermal flow, hydromechanical effects, hydrochemical processes, unsaturated-saturated systems, and multiphase multicomponent flows. The main thrust of these efforts is to understand the movement of mass and energy through rocks. This has involved treating fracture rock masses in which the flow phenomena within both the fractures and the matrix must be investigated. Studies also address the complex coupling between aspects of thermal, hydraulic, and mechanical processes associated with a nuclear waste repository in a fractured rock medium. In all these projects, both numerical modeling and simulation, as well as field studies, were employed. In the theoretical area, a basic understanding of multiphase flow, nonisothermal unsaturated behavior, and new numerical methods have been developed. The field work has involved reservoir testing, data analysis, and case histories at a number of geothermal projects

  4. Chalk as a reservoir

    DEFF Research Database (Denmark)

    Fabricius, Ida Lykke

    basin, so stylolite formation in the chalk is controlled by effective burial stress. The stylolites are zones of calcite dissolution and probably are the source of calcite for porefilling cementation which is typical in water zone chalk and also affect the reservoirs to different extent. The relatively...... 50% calcite, leaving the remaining internal surface to the fine grained silica and clay. The high specific surface of these components causes clay- and silica rich intervals to have high irreducible water saturation. Although chalks typically are found to be water wet, chalk with mixed wettability...... stabilizes chemically by recrystallization. This process requires energy and is promoted by temperature. This recrystallization in principle does not influence porosity, but only specific surface, which decreases during recrystallization, causing permeability to increase. The central North Sea is a warm...

  5. Pacifiers: a microbial reservoir.

    Science.gov (United States)

    Comina, Elodie; Marion, Karine; Renaud, François N R; Dore, Jeanne; Bergeron, Emmanuelle; Freney, Jean

    2006-12-01

    The permanent contact between the nipple part of pacifiers and the oral microflora offers ideal conditions for the development of biofilms. This study assessed the microbial contamination on the surface of 25 used pacifier nipples provided by day-care centers. Nine were made of silicone and 16 were made of latex. The biofilm was quantified using direct staining and microscopic observations followed by scraping and microorganism counting. The presence of a biofilm was confirmed on 80% of the pacifier nipples studied. This biofilm was mature for 36% of them. Latex pacifier nipples were more contaminated than silicone ones. The two main genera isolated were Staphylococcus and Candida. Our results confirm that nipples can be seen as potential reservoirs of infections. However, pacifiers do have some advantages; in particular, the potential protection they afford against sudden infant death syndrome. Strict rules of hygiene and an efficient antibiofilm cleaning protocol should be established to answer the worries of parents concerning the safety of pacifiers.

  6. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    Energy Technology Data Exchange (ETDEWEB)

    Refunjol, B.T. [Lagoven, S.A., Pdvsa (Venezuela); Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But, before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information available for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, proves to be important because it is insightful and uses injection/production data that are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
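
    The Spearman rank analysis at the core of the technique can be shown in miniature: rank each well's rate history and correlate the ranks; a strongly positive coefficient between an injector and a producer suggests preferential flow between them. The rate series below are invented for illustration.

```python
# Spearman rank correlation between an injector's and a producer's rate
# histories (hypothetical data). No-ties case for simplicity.

def ranks(xs):
    """Rank of each value (1 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Classic formula: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

injection = [100, 120, 90, 150, 130]    # injector rates, bbl/d (hypothetical)
production = [55, 62, 50, 75, 68]       # nearby producer response
print(spearman(injection, production))  # → 1.0 (perfectly concordant ranks)
```

    Rank correlation is preferred here over Pearson correlation because it needs no assumption of a linear injector-producer response, only a monotonic one.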

  7. Advances in photonic reservoir computing

    Directory of Open Access Journals (Sweden)

    Van der Sande Guy

    2017-05-01

    Full Text Available We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.

  8. Data from selected Almond Formation outcrops -- Sweetwater County, Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, S.R.; Rawn-Schatzinger, V.

    1993-12-01

    The objectives of this research program are to: (1) determine the reservoir characteristics and production problems of shoreline barrier reservoirs; and (2) develop methods and methodologies to effectively characterize shoreline barrier reservoirs to predict flow patterns of injected and produced fluids. Two reservoirs were selected for detailed reservoir characterization studies -- Bell Creek field, Carter County, Montana, which produces from the Lower Cretaceous (Albian-Cenomanian) Muddy Formation, and Patrick Draw field, Sweetwater County, Wyoming, which produces from the Upper Cretaceous (Campanian) Almond Formation of the Mesaverde Group. An important component of the research project was to use information from outcrop exposures of the producing formations to study the spatial variations of reservoir properties and the degree to which outcrop information can be used in the construction of reservoir models. A report similar to this one presents the Muddy Formation outcrop data and analyses performed in the course of this study (Rawn-Schatzinger, 1993). Two outcrop localities, RG and RH, previously described by Roehler (1988), provided good exposures of the Upper Almond shoreline barrier facies and were studied during 1990--1991. Core from core well No. 2, drilled approximately 0.3 miles downdip of outcrop RG, was obtained for study. The results of the core study will be reported in a separate volume. Outcrops RH and RG, located about 2 miles apart, were selected for detailed description and drilling of core plugs. One 257-ft-thick section was measured at outcrop RG, and three sections approximately 145 ft thick, located 490 and 655 feet apart, were measured at outcrop RH. Cross-sections of these described profiles were constructed to determine lateral facies continuity and changes. This report contains the data and analyses from the studied outcrops.

  9. Seed drops and caches by the harvester ant Messor barbarus: do they contribute to seed dispersal in Mediterranean grasslands?

    Science.gov (United States)

    Detrain, C.; Tasse, Olivier

    To determine whether the harvester ant Messor barbarus acts as a seed disperser in Mediterranean grasslands, the accuracy level of seed processing was assessed in the field by quantifying seed drops by loaded foragers. In the vicinity of exploited seed patches, 3 times as many diaspores were found as in controls due to seed losses by foragers. Over trails, up to 30% of harvested seeds were dropped, singly, by workers, but all were recovered by nestmates within 24 h. Seeds were also dropped within temporary caches, with very few viable diaspores being left per cache when ants no longer used the trail. Globally, ant-dispersed diaspores accounted for only 0.1% of seeds harvested by M. barbarus. We discuss the possible significance for grassland vegetation of harvester-ant-mediated seed dispersal.

  10. Gravity observations for hydrocarbon reservoir monitoring

    NARCIS (Netherlands)

    Glegola, M.A.

    2013-01-01

    In this thesis the added value of gravity observations for hydrocarbon reservoir monitoring and characterization is investigated. Reservoir processes and reservoir types most suitable for gravimetric monitoring are identified. Major noise sources affecting time-lapse gravimetry are analyzed. The

  11. VT Boundaries - county polygons

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The BNDHASH dataset depicts Vermont villages, towns, counties, Regional Planning Commissions (RPC), and LEPC (Local Emergency Planning Committee)...

  12. Assessment of conservation practices in the Fort Cobb Reservoir watershed, southwestern Oklahoma

    Science.gov (United States)

    Becker, Carol J.

    2011-01-01

    The Fort Cobb Reservoir watershed encompasses about 813 square kilometers of rural farm land in Caddo, Custer, and Washita Counties in southwestern Oklahoma. The Fort Cobb Reservoir and six stream segments were identified on the Oklahoma 1998 303(d) list as not supporting designated beneficial uses because of impairment by nutrients, suspended solids, sedimentation, pesticides, and unknown toxicity. As a result, State and Federal agencies, in collaboration with conservation districts and landowners, started conservation efforts in 2001 to decrease erosion and transport of sediments and nutrients to the reservoir and improve water quality in tributaries. The U.S. Department of Agriculture selected the Fort Cobb Reservoir watershed in 2003 as 1 of 14 benchmark watersheds under the Conservation Effectiveness Assessment Project with the objective of quantifying the environmental benefits derived from agricultural conservation programs in reducing inflows of sediments and phosphorus to the reservoir. In November 2004, the Biologic, Geographic, Geologic, and Water Disciplines of the U.S. Geological Survey, in collaboration with the Agricultural Research Service, Grazinglands Research Laboratory in El Reno, Oklahoma, began an interdisciplinary investigation to produce an integrated publication to complement this program. This publication is a compilation of 10 report chapters describing land uses, soils, geology, climate, and water quality in streams and the reservoir through results of field and remote sensing investigations from 2004 to 2007. The investigations indicated that targeting best-management practices to small intermittent streams draining to the reservoir and to the Cobb Creek subwatershed may effectively augment efforts to improve eutrophic to hypereutrophic conditions that continue to affect the reservoir. The three major streams flowing into the reservoir contribute nutrients causing eutrophication, but minor streams draining cultivated fields near the

  13. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents an end-to-end interference minimizing uniquely designed high performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers to deliver a seamless high performance I/O stack. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  14. Analysis of real-time reservoir monitoring : reservoirs, strategies, & modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Mani, Seethambal S.; van Bloemen Waanders, Bart Gustaaf; Cooper, Scott Patrick; Jakaboski, Blake Elaine; Normann, Randy Allen; Jennings, Jim (University of Texas at Austin, Austin, TX); Gilbert, Bob (University of Texas at Austin, Austin, TX); Lake, Larry W. (University of Texas at Austin, Austin, TX); Weiss, Chester Joseph; Lorenz, John Clay; Elbring, Gregory Jay; Wheeler, Mary Fanett (University of Texas at Austin, Austin, TX); Thomas, Sunil G. (University of Texas at Austin, Austin, TX); Rightley, Michael J.; Rodriguez, Adolfo (University of Texas at Austin, Austin, TX); Klie, Hector (University of Texas at Austin, Austin, TX); Banchs, Rafael (University of Texas at Austin, Austin, TX); Nunez, Emilio J. (University of Texas at Austin, Austin, TX); Jablonowski, Chris (University of Texas at Austin, Austin, TX)

    2006-11-01

    The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data are fed real-time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value of information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new-generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI demonstrated the increased subsurface resolution through additional sensor data. Our findings show that the VOI studies are a practical means of ascertaining the value associated with a technology, in this case application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate history matching is improved as more information is added to the objective function, clearly indicating that sensor information can help in reducing the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report. The next-generation sensors aspect of the project evaluated sensors and packaging
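
    The VOI decision-tree calculation can be illustrated with a two-state, two-action toy problem: the value of (here, perfect) information is the expected value of deciding after the uncertainty is resolved minus the expected value of the best decision under the prior. All probabilities and payoffs below are invented.

```python
# Toy value-of-perfect-information calculation for a reservoir decision.

p_high = 0.4                       # prior P(reservoir is high-permeability)
payoff = {                         # NPV ($M) of each action in each state
    ("waterflood", "high"): 120, ("waterflood", "low"): -30,
    ("do_nothing", "high"): 0,   ("do_nothing", "low"): 0,
}

def expected(action, p):
    return p * payoff[(action, "high")] + (1 - p) * payoff[(action, "low")]

# Without sensors: commit to one action under the prior
v_prior = max(expected(a, p_high) for a in ("waterflood", "do_nothing"))

# With perfect sensor information: pick the best action in each revealed state
actions = ("waterflood", "do_nothing")
v_perfect = (p_high * max(payoff[(a, "high")] for a in actions)
             + (1 - p_high) * max(payoff[(a, "low")] for a in actions))

voi = v_perfect - v_prior          # upper bound on what the sensors are worth
print(round(voi, 2))  # → 18.0  ($M)
```

    Real sensor data is imperfect, so the project's decision trees would use conditional (Bayesian) probabilities instead of the perfect-information shortcut; the perfect-information figure bounds what any sensor program can be worth.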

  15. Well testing in gas hydrate reservoirs

    OpenAIRE

    Kome, Melvin Njumbe

    2015-01-01

    Reservoir testing and analysis are fundamental tools in understanding reservoir hydraulics and hence forecasting reservoir responses. The quality of the analysis is very dependent on the conceptual model used in investigating the responses under different flowing conditions. The use of reservoir testing in the characterization and derivation of reservoir parameters is widely established, especially in conventional oil and gas reservoirs. However, with depleting conventional reserves, the ...

  16. Potential Mechanisms Driving Population Variation in Spatial Memory and the Hippocampus in Food-caching Chickadees.

    Science.gov (United States)

    Croston, Rebecca; Branch, Carrie L; Kozlovsky, Dovid Y; Roth, Timothy C; LaDage, Lara D; Freas, Cody A; Pravosudov, Vladimir V

    2015-09-01

    Harsh environments and severe winters have been hypothesized to favor improvement of the cognitive abilities necessary for successful foraging. Geographic variation in winter climate, then, is likely associated with differences in selection pressures on cognitive ability, which could lead to evolutionary changes in cognition and its neural mechanisms, assuming that variation in these traits is heritable. Here, we focus on two species of food-caching chickadees (genus Poecile), which rely on stored food for survival over winter and require the use of spatial memory to recover their stores. These species also exhibit extensive climate-related population level variation in spatial memory and the hippocampus, including volume, the total number and size of neurons, and adults' rates of neurogenesis. Such variation could be driven by several mechanisms within the context of natural selection, including independent, population-specific selection (local adaptation), environment experience-based plasticity, developmental differences, and/or epigenetic differences. Extensive data on cognition, brain morphology, and behavior in multiple populations of these two species of chickadees along longitudinal, latitudinal, and elevational gradients in winter climate are most consistent with the hypothesis that natural selection drives the evolution of local adaptations associated with spatial memory differences among populations. Conversely, there is little support for the hypotheses that environment-induced plasticity or developmental differences are the main causes of population differences across climatic gradients. Available data on epigenetic modifications of memory ability are also inconsistent with the observed patterns of population variation, with birds living in more stressful and harsher environments having better spatial memory associated with a larger hippocampus and a larger number of hippocampal neurons. Overall, the existing data are most consistent with the

  17. Sediment Characteristics of Tennessee Streams and Reservoirs

    National Research Council Canada - National Science Library

    Trimble, Stanley W; Carey, William P

    1984-01-01

    Suspended-sediment and reservoir sedimentation data have been analyzed to determine sediment yields and transport characteristics of Tennessee streams. Data from 31 reservoirs plus suspended-sediment...

  18. Changes to the Bakomi Reservoir

    Directory of Open Access Journals (Sweden)

    Kubinský Daniel

    2014-08-01

    Full Text Available This article is focused on the analysis and evaluation of changes of the bottom of the Bakomi reservoir, the total volume of the reservoir, ecosystems, as well as changes in the riparian zone of the Bakomi reservoir (situated in central Slovakia). Changes of the water component of the reservoir were subject to deposition by erosion-sedimentation processes, and were identified on the basis of a comparison of the present relief of the bottom of the reservoir obtained from field measurements (in 2011) with the relief of the bottom obtained from the 1971 historical maps, i.e. over a period of 40 years. Changes of landscape structures of the riparian zone have been mapped for the time period of 1949–2013; these changes have been identified through the analysis of orthophotomaps and a field survey. There has been a significant rise of disturbed shores with low herb grassland. Over a period of 40 years, 667 m³ of sediments have been deposited. The results showed that there were no significant changes in the local ecosystems of the Bakomi reservoir in comparison to the other reservoirs in the vicinity of Banská Štiavnica.

  19. TRITIUM RESERVOIR STRUCTURAL PERFORMANCE PREDICTION

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.S.; Morgan, M.J.

    2005-11-10

    The burst test is used to assess the material performance of tritium reservoirs in the surveillance program, in which reservoirs have been in service for extended periods of time. A materials system model and finite element procedure were developed under a Savannah River Site Plant-Directed Research and Development (PDRD) program to predict the structural response under a full range of loading and aged material conditions of the reservoir. The results show that the predicted burst pressure and volume ductility are in good agreement with the actual burst test results for the unexposed units. The material tensile properties used in the calculations were obtained from a curved tensile specimen harvested from a companion reservoir by Electric Discharge Machining (EDM). In the absence of exposed and aged material tensile data, literature data were used to demonstrate the methodology in terms of the helium-3 concentration in the metal and the depth of penetration in the reservoir sidewall. It can be shown that the volume ductility decreases significantly with the presence of tritium and its decay product, helium-3, in the metal, as was observed in the laboratory-controlled burst tests. The model and analytical procedure provide a predictive tool for reservoir structural integrity under aging conditions. It is recommended that benchmark tests and analysis for aged materials be performed. The methodology can be augmented to predict performance for reservoirs with flaws.

  20. Allegheny County Blazed Trails Locations

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Shows the location of blazed trails in all Allegheny County parks. This is the same data used in the Allegheny County Parks Trails Mobile App, available for Apple...

  1. Allegheny County Supermarkets & Convenience Stores

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Location information for all Supermarkets and Convenience Stores in Allegheny County was produced using the Allegheny County Fee and Permit Data for 2016.

  2. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2012-12-01

    Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties of each gridblock were defined for two different types of gridblocks, called matrix and fracture gridblocks. These two gridblock types differed in their porosity and permeability values, which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement over the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point as predicted by other investigators. The effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.
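The implicit finite-difference treatment mentioned in the abstract can be illustrated with a minimal 1D sketch (not the authors' 2D model): single-phase pressure diffusion through alternating matrix and fracture gridblocks, stepped with backward Euler and solved by the Thomas tridiagonal algorithm. The grid size, property values, and boundary conditions below are illustrative assumptions only.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(p, k, phi, dt):
    """One backward-Euler step of 1D pressure diffusion.
    Cell 0 holds a fixed well pressure; the right edge is no-flow."""
    n = len(p)
    # Harmonic-mean transmissibility between neighboring cells
    T = [2.0 * k[i] * k[i + 1] / (k[i] + k[i + 1]) for i in range(n - 1)]
    a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    b[0], d[0] = 1.0, p[0]                       # Dirichlet: well pressure
    for i in range(1, n):
        Tl = T[i - 1]
        Tr = T[i] if i < n - 1 else 0.0          # no-flow at right edge
        a[i], c[i] = -Tl, -Tr
        b[i] = phi / dt + Tl + Tr
        d[i] = phi / dt * p[i]
    return thomas(a, b, c, d)

# Fracture cells (every 4th) are far more permeable than matrix cells
n = 20
k = [100.0 if i % 4 == 0 else 1.0 for i in range(n)]
p = [1000.0] + [3000.0] * (n - 1)                # well drawdown at cell 0
for _ in range(10):
    p = implicit_step(p, k, phi=0.2, dt=0.5)
```

Because the implicit scheme yields a diagonally dominant M-matrix, the step is unconditionally stable and the pressure stays bounded between the well and initial values.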

  3. THE SURDUC RESERVOIR (ROMANIA

    Directory of Open Access Journals (Sweden)

    Niculae Iulian TEODORESCU

    2008-06-01

    Full Text Available The Surduc reservoir was designed to store water for periods of scarcity and thus to supply, in particular, the city of Timisoara downstream of it. The reservoir is located on the Gladna, the main tributary of the Bega river, in the upper part of its course. The dam behind which the reservoir was created is a frontal rockfill dam with a reinforced concrete face on the upstream side, protected by grass on the downstream side. The dam is 130 m long at its crest, with a constructed height of 34 m. It is also equipped with a high-water spillway and two bottom outlets formed of two conduits, at the end of which the micro hydropower plant is located. The second part of the paper deals with the hydrometric analysis of the Surduc reservoir and its impact upon flow, especially maximum runoff. This influence is exemplified by the high flood of 29 July 1980, the most significant flood recorded in the basin, with an occurrence probability of 0.002%.

  4. Allegheny County Block Areas

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset overlays a grid on the County to assist in locating a parcel. The grid squares are 3,500 by 4,500 feet. The data was derived from original...

  5. LANDSLIDES IN SUCEAVA COUNTY

    Directory of Open Access Journals (Sweden)

    Dan Zarojanu

    2017-07-01

    Full Text Available In Suceava County, landslides are a real and permanent problem. This paper presents observations of landslides over the last 30 years in Suceava County, especially their morphology, their causes, and the measures taken to stop them. It also presents several details regarding the landslides in the towns of Suceava and Frasin and the village of Brodina.

  6. Allegheny County Watershed Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the 52 isolated sub-Watersheds of Allegheny County that drain to single point on the main stem rivers. Created by 3 Rivers 2nd Nature based...

  7. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies. PMID:29104219

  8. AirCache: A Crowd-Based Solution for Geoanchored Floating Data

    Directory of Open Access Journals (Sweden)

    Armir Bujari

    2016-01-01

    Full Text Available The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is produced (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). From this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.

  9. Diets of three species of anurans from the cache creek watershed, California, USA

    Science.gov (United States)

    Hothem, R.L.; Meckstroth, A.M.; Wegner, K.E.; Jennings, M.R.; Crayon, J.J.

    2009-01-01

    We evaluated the diets of three sympatric anuran species, the native Northern Pacific Treefrog, Pseudacris regilla, and Foothill Yellow-Legged Frog, Rana boylii, and the introduced American Bullfrog, Lithobates catesbeianus, based on stomach contents of frogs collected at 36 sites in 1997 and 1998. This investigation was part of a study of mercury bioaccumulation in the biota of the Cache Creek Watershed in north-central California, an area affected by mercury contamination from natural sources and abandoned mercury mines. We collected R. boylii at 22 sites, L. catesbeianus at 21 sites, and P. regilla at 13 sites. We collected both L. catesbeianus and R. boylii at nine sites and all three species at five sites. Pseudacris regilla had the least aquatic diet (100% of the samples had terrestrial prey vs. 5% with aquatic prey), followed by R. boylii (98% terrestrial, 28% aquatic), and L. catesbeianus, which had similar percentages of terrestrial (81%) and aquatic prey (74%). Observed predation by L. catesbeianus on R. boylii may indicate that interaction between these two species is significant. Based on their widespread abundance and their preference for aquatic foods, we suggest that, where present, L. catesbeianus should be the species of choice for all lethal biomonitoring of mercury in amphibians. Copyright © 2009 Society for the Study of Amphibians and Reptiles.

  10. Leveraging KVM Events to Detect Cache-Based Side Channel Attacks in a Virtualization Environment

    Directory of Open Access Journals (Sweden)

    Ady Wahyudi Paundu

    2018-01-01

    Full Text Available Cache-based side channel attack (CSCa) techniques in virtualization systems are becoming more advanced, while defense methods against them are still perceived as impractical. The most recent CSCa variant, called Flush + Flush, has shown that the current detection methods can be easily bypassed. Within this work, we introduce a novel monitoring approach to detect CSCa operations inside a virtualization environment. We utilize Kernel Virtual Machine (KVM) event data in the kernel and process this data using a machine learning technique to identify any CSCa operation in the guest Virtual Machine (VM). We evaluate our approach using Receiver Operating Characteristic (ROC) diagrams of multiple attack and benign operation scenarios. Our method successfully separates the CSCa datasets from the non-CSCa datasets, on both trained and non-trained data scenarios. The successful classification also includes the Flush + Flush attack scenario. We are also able to explain the classification results by extracting the set of most important features that separate both classes using their Fisher scores, and show that our monitoring approach can work to detect CSCa in general. Finally, we evaluate the overhead impact of our CSCa monitoring method and show that it has a negligible computation overhead on the host and the guest VM.
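The Fisher-score feature ranking the authors use to explain their classifier can be sketched generically. The per-VM event counts below are invented for illustration; the paper's actual KVM event features are not reproduced here.

```python
def fisher_score(feature_values, labels):
    """Fisher score of one feature for a binary task:
    (difference of class means)^2 / (sum of within-class variances)."""
    pos = [v for v, y in zip(feature_values, labels) if y == 1]
    neg = [v for v, y in zip(feature_values, labels) if y == 0]

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    denom = var(pos) + var(neg)
    return float("inf") if denom == 0 else (mean(pos) - mean(neg)) ** 2 / denom

# Hypothetical per-interval event counts: label 1 = attack VM, 0 = benign VM
labels = [1, 1, 1, 0, 0, 0]
flush_rate = [90, 95, 99, 5, 8, 4]   # informative feature: separates classes
noise = [3, 7, 5, 4, 6, 5]           # uninformative feature
scores = [fisher_score(flush_rate, labels), fisher_score(noise, labels)]
```

Features with high Fisher scores are the ones that best separate attack traces from benign ones, which is how the paper explains its classification results.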

  11. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    Science.gov (United States)

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by the fog computing had demonstrated that Internet of Things (IoT) urgently needs more evolutional reforms. Limited by the inflexible design philosophy; the traditional structure of a network is hard to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such combination are reviewed in depth. The details of building SCC, including basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.

  12. Políticas de reemplazo en la caché de web

    Directory of Open Access Journals (Sweden)

    Carlos Quesada Sánchez

    2006-05-01

    Full Text Available The web is currently the most widely used communication mechanism, thanks to its flexibility and the almost endless supply of tools for browsing it. As a result, around a million pages are added to it every day. It is thus the largest library, with textual and multimedia resources, ever seen, albeit a library distributed across all the servers that hold that information. As a reference source, it is important that data retrieval be efficient. Web caching addresses this: some web data are stored temporarily on local servers, so that they need not be requested from the remote server every time a user asks for them. However, the amount of memory available on local servers to store that information is limited: one must decide which web objects to store and which not. This gives rise to several replacement policies, which are explored in this article. Using an experiment with real web requests, we compare the performance of these techniques.
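One of the classic replacement policies that studies of this kind compare is LRU, sketched here as a minimal in-memory cache. This is an illustrative sketch only; real web caches also weigh object size, fetch cost, and freshness.

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used replacement: evict the object untouched longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None                      # cache miss
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url]

    def put(self, url, page):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = page
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.put("/a", "page A")
cache.put("/b", "page B")
cache.get("/a")                              # /a is now most recent
cache.put("/c", "page C")                    # evicts /b, not /a
```

Alternative policies (FIFO, LFU, size-aware schemes such as GreedyDual-Size) change only the eviction rule, which is exactly what such comparisons measure.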

  13. A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries

    Science.gov (United States)

    Santos, Ricardo Jorge; Bernardino, Jorge

    On-line analytical processing against data warehouse databases is a common form of getting decision making information for almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most-transacted items, etc. This means that many similar OLAP instructions are periodically repeated, and simultaneously, by several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data which was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because only the most recent data needs to be processed. Our tool also minimizes the execution time and resource consumption for similar queries simultaneously executed by different users, putting the most recent ones on hold until the first finishes and then returning the results to all of them. The stored query results are held until they are considered outdated, then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and use a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
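The core idea, caching a query's result together with a high-water mark so that re-execution only scans newly inserted rows, can be sketched as follows. The single-table sum aggregate and all names here are assumptions for illustration, not the paper's implementation.

```python
class QueryCache:
    """Cache aggregate results plus the last row id they covered."""
    def __init__(self):
        self.entries = {}   # query key -> (partial aggregate, last row id)

    def total_sales(self, rows):
        """rows: append-only list of (row_id, amount), ids increasing."""
        cached_total, last_id = self.entries.get("total_sales", (0, -1))
        # Scan only rows inserted since the query last ran
        new_rows = [(rid, amt) for rid, amt in rows if rid > last_id]
        total = cached_total + sum(amt for _, amt in new_rows)
        if rows:
            self.entries["total_sales"] = (total, rows[-1][0])
        return total, len(new_rows)          # result + rows actually scanned

qc = QueryCache()
warehouse = [(0, 10), (1, 20), (2, 30)]
qc.total_sales(warehouse)                    # first run scans all 3 rows
warehouse.append((3, 5))                     # new fact arrives
```

A repeated query now merges the cached partial result with the single new row instead of rescanning the whole fact table, which is where the reported speedup comes from.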

  14. Traversal Caches: A Framework for FPGA Acceleration of Pointer Data Structures

    Directory of Open Access Journals (Sweden)

    James Coole

    2010-01-01

    Full Text Available Field-programmable gate arrays (FPGAs) and other reconfigurable computing (RC) devices have been widely shown to have numerous advantages, including order-of-magnitude performance and power improvements compared to microprocessors for some applications. Unfortunately, FPGA usage has largely been limited to applications exhibiting sequential memory access patterns, thereby prohibiting acceleration of important applications with irregular patterns (e.g., pointer-based data structures). In this paper, we present a design pattern for RC application development that serializes irregular data structure traversals online into a traversal cache, which allows the corresponding data to be efficiently streamed to the FPGA. The paper presents a generalized framework that benefits applications with repeated traversals, which we show can achieve between 7x and 29x speedup over pointer-based software. For applications without strictly repeated traversals, we present application-specialized extensions that benefit applications with highly similar traversals by exploiting similarity to improve memory bandwidth and execute multiple traversals in parallel. We show that these extensions can achieve a speedup between 11x and 70x on a Virtex4 LX100 for Barnes-Hut n-body simulation.
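The traversal-cache idea, chasing the pointers once and replaying a flat serialization on repeated traversals, can be sketched in software. The FPGA streaming itself is out of scope here, and the class and counter names are illustrative assumptions.

```python
class Node:
    """A node in a pointer-based structure (here, a singly linked list)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class TraversalCache:
    """Serialize a pointer-chasing traversal once; reuse the flat copy."""
    def __init__(self):
        self.flat = {}        # id(head) -> serialized values
        self.derefs = 0       # pointer dereferences performed

    def traverse(self, head):
        key = id(head)
        if key not in self.flat:              # first visit: chase pointers
            values, node = [], head
            while node is not None:
                self.derefs += 1
                values.append(node.value)
                node = node.next
            self.flat[key] = values
        return self.flat[key]                 # repeated visits: stream cache

head = Node(1, Node(2, Node(3)))
tc = TraversalCache()
first = tc.traverse(head)
second = tc.traverse(head)                    # no extra dereferences
```

On the FPGA, the cached flat array is what gets streamed sequentially to the device, converting an irregular access pattern into the sequential one the hardware handles well.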

  15. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    Directory of Open Access Journals (Sweden)

    Fei Song

    2017-11-01

    Full Text Available The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional structure of a network is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.

  16. Improved Oil Recovery in Fluvial Dominated Deltaic Reservoirs of Kansas - Near-Term

    International Nuclear Information System (INIS)

    Green, Don W.; McCune, A.D.; Michnick, M.; Reynolds, R.; Walton, A.; Watney, L.; Willhite, G. Paul

    1999-01-01

    The objective of this project is to address waterflood problems of the type found in Morrow sandstone reservoirs in southwestern Kansas and in Cherokee Group reservoirs in southeastern Kansas. Two demonstration sites operated by different independent oil operators are involved in this project. The Stewart Field is located in Finney County, Kansas and is operated by PetroSantander, Inc. The Nelson Lease is located in Allen County, Kansas, in the N.E. Savonburg Field and is operated by James E. Russell Petroleum, Inc. General topics to be addressed are (1) reservoir management and performance evaluation, (2) waterflood optimization, and (3) the demonstration of recovery processes involving off-the-shelf technologies which can be used to enhance waterflood recovery, increase reserves, and reduce the abandonment rate of these reservoir types. In the Stewart Project, the reservoir management portion of the project conducted during Budget Period 1 involved performance evaluation. This included (1) reservoir characterization and the development of a reservoir database, (2) volumetric analysis to evaluate production performance, (3) reservoir modeling, (4) laboratory work, (5) identification of operational problems, (6) identification of unrecovered mobile oil and estimation of recovery factors, and (7) identification of the most efficient and economical recovery process. To accomplish these objectives, the initial budget period was subdivided into three major tasks: (1) geological and engineering analysis, (2) laboratory testing, and (3) unitization. Due to the presence of different operators within the field, it was necessary to unitize the field in order to demonstrate a field-wide improved recovery process. This work was completed and the project moved into Budget Period 2.

  17. Understanding the True Stimulated Reservoir Volume in Shale Reservoirs

    KAUST Repository

    Hussain, Maaruf

    2017-06-06

    Successful exploitation of shale reservoirs largely depends on the effectiveness of the hydraulic fracturing stimulation program. Favorable results have been attributed to the intersection and reactivation of pre-existing fractures by hydraulically-induced fractures that connect the wellbore to a larger fracture surface area within the reservoir rock volume. Thus, accurate estimation of the stimulated reservoir volume (SRV) becomes critical for reservoir performance simulation and production analysis. Micro-seismic events (MS) have been commonly used as a proxy to map out the SRV geometry, which could be erroneous because not all MS events are related to hydraulic fracture propagation. The case studies discussed here utilized a fully 3-D simulation approach to estimate the SRV. The simulation approach presented in this paper takes into account the real-time changes in the reservoir's geomechanics as a function of fluid pressures. It consists of four separate coupled modules: geomechanics, hydrodynamics, a geomechanical joint model for interfacial resolution, and adaptive re-meshing. Reservoir stress conditions, rock mechanical properties, and injected fluid pressure dictate how fracture elements open or slide. A critical stress intensity factor was used as a fracture criterion governing the generation of new fractures or the propagation of existing fractures and their directions. Our simulations were run on a Cray XC-40 HPC system. The study outcomes proved the approach of using MS data as a proxy for the SRV to be significantly flawed. Many of the observed stimulated natural fractures are stress-related, and very few of those closer to the injection point are connected. The situation is worsened in a highly laminated shale reservoir, as hydraulic fracture propagation is significantly hampered. High contrast in the in-situ stresses promotes strike-slip failure, thereby shortening the extent of the SRV. However, far-field natural fractures that were not connected to

  18. Reservoir-induced landslides and risk control in Three Gorges Project on Yangtze River, China

    Directory of Open Access Journals (Sweden)

    Yueping Yin

    2016-10-01

    Full Text Available The Three Gorges region in China was basically a geohazard-prone area prior to construction of the Three Gorges Reservoir (TGR). After construction of the TGR, the water level was raised from 70 m to 175 m above sea level (ASL), and annual reservoir regulation has caused a 30-m water level difference since impoundment of the TGR began in September 2008. This paper first presents the spatiotemporal distribution of landslides in six periods of 175 m ASL trial impoundments from 2008 to 2014. The results show that the number of landslides sharply decreased from 273 at the initial stage to less than ten at the second stage of impoundment. On this basis, the reservoir-induced landslides in the TGR region can be roughly classified into five failure patterns, i.e. accumulation landslide, dip-slope landslide, reversed bedding landslide, rockfall, and karst breccia landslide. Accumulation landslides and dip-slope landslides account for more than 90%. Taking the Shuping accumulation landslide (a sliding mass volume of 20.7 × 10⁶ m³) in Zigui County and the Outang dip-slope landslide (a sliding mass volume of about 90 × 10⁶ m³) in Fengjie County as two typical cases, the mechanisms of reactivation of the two landslides are analyzed. The monitoring data and factor of safety (FOS) calculations show that the accumulation landslide is dominated by water level variation in the reservoir, as most of the mass body is under 175 m ASL, and the dip-slope landslide is controlled by the coupled effect of reservoir water level variation and precipitation, as an extensive recharge area of rainfall from the rear and the front mass is below 175 m ASL. The characteristics of landslide-induced impulsive wave hazards before and after reservoir impoundment are studied, and the probability of occurrence of a landslide-induced impulsive wave hazard has increased in the reservoir region. Simulation results of the Ganjingzi landslide in Wushan County indicate the

  19. Quantification of Interbasin Transfers into the Addicks Reservoir during Hurricane Harvey

    Science.gov (United States)

    Sebastian, A.; Juan, A.; Gori, A.; Maulsby, F.; Bedient, P. B.

    2017-12-01

    Between August 25 and 30, Hurricane Harvey dropped unprecedented rainfall over southeast Texas, causing widespread flooding in the City of Houston. Water levels in the Addicks and Barker reservoirs, built in the 1940s to protect downtown Houston, exceeded previous records by approximately 2 meters. Concerns regarding the structural integrity of the dams and damage to neighborhoods within the reservoir pool resulted in controlled releases into Buffalo Bayou, flooding an estimated 4,000 additional structures downstream of the dams. In 2016, during the Tax Day flood, it became apparent that overflows from Cypress Creek in northern Harris County substantially contribute to water levels in Addicks. Prior to that event, little was known about the hydrodynamics of this overflow area or about the additional stress placed on the Addicks and Barker reservoirs by the volume of overflow. However, this information is critical for determining flood risk in the Addicks Watershed, and ultimately Buffalo Bayou. In this study, we utilize the recently developed HEC-RAS 2D software to model the interbasin transfer that occurs between Cypress Creek Watershed and Addicks Reservoir and to quantify the volume and rate at which water from Cypress enters the reservoir during extreme events. Ultimately, the results of this study will help inform the official hydrologic models used by HCFCD to determine reservoir operation during future storm events and better inform residents living in or above the reservoir pool about their potential flood risk.

  20. Chickamauga reservoir embayment study - 1990

    Energy Technology Data Exchange (ETDEWEB)

    Meinert, D.L.; Butkus, S.R.; McDonough, T.A.

    1992-12-01

    The objectives of this report are three-fold: (1) assess physical, chemical, and biological conditions in the major embayments of Chickamauga Reservoir; (2) compare water quality and biological conditions of embayments with main river locations; and (3) identify any water quality concerns in the study embayments that may warrant further investigation and/or management actions. Embayments are important areas of reservoirs to be considered when assessments are made to support water quality management plans. In general, embayments, because of their smaller size (water surface areas usually less than 1000 acres), shallower morphometry (average depth usually less than 10 feet), and longer detention times (frequently a month or more), exhibit more extreme responses to pollutant loadings and changes in land use than the main river region of the reservoir. Consequently, embayments are often at greater risk of water quality impairments (e.g. nutrient enrichment, filling and siltation, excessive growths of aquatic plants, algal blooms, low dissolved oxygen concentrations, bacteriological contamination, etc.). Much of the secondary beneficial use of reservoirs occurs in embayments (viz. marinas, recreation areas, parks and beaches, residential development, etc.). Typically embayments comprise less than 20 percent of the surface area of a reservoir, but they often receive 50 percent or more of the water-oriented recreational use of the reservoir. This intensive recreational use creates a potential for adverse use impacts if poor water quality and aquatic conditions exist in an embayment.

  1. Reservoir characterization of Pennsylvanian Sandstone Reservoirs. Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Kelkar, M.

    1992-09-01

    This annual report describes the progress during the second year of a project on Reservoir Characterization of Pennsylvanian Sandstone Reservoirs. The report is divided into three sections: (i) reservoir description and scale-up procedures; (ii) outcrop investigation; (iii) in-fill drilling potential. The first section describes the methods by which a reservoir can be characterized, described in three dimensions, and scaled up with respect to its properties, as appropriate for simulation purposes. The second section describes the progress on the investigation of an outcrop, an analog of the Bartlesville Sandstone. We have drilled ten wells behind the outcrop and collected extensive log and core data. The cores have been slabbed and photographed, and several plugs have been taken. In addition, a minipermeameter was used to measure permeabilities on the core surface at six-inch intervals. The plugs have been analyzed for permeability and porosity values. The variations in property values will be tied to the geological descriptions as well as the subsurface data collected from the Glen Pool field. The third section discusses the application of geostatistical techniques to infer in-fill well locations. The geostatistical technique used is simulated annealing because of its flexibility. Production data are among the most important reservoir data; their use allows us to define reservoir continuities, which may, in turn, determine the in-fill well locations. The proposed technique allows us to incorporate some of the production data as constraints in the reservoir descriptions. The technique has been validated by comparing the results with numerical simulations.
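The simulated-annealing search mentioned for infill-well siting can be sketched on a toy "remaining oil" grid. The grid values, distance penalty, and cooling schedule below are invented for illustration; the actual study additionally constrains the search with production data.

```python
import math
import random

def anneal_infill(oil, existing, steps=2000, t0=5.0, seed=42):
    """Pick an infill-well cell maximizing remaining oil, penalized
    for sitting too close to an existing well (Metropolis acceptance)."""
    rng = random.Random(seed)
    rows, cols = len(oil), len(oil[0])

    def score(cell):
        r, c = cell
        dist = min(abs(r - er) + abs(c - ec) for er, ec in existing)
        return oil[r][c] - (50.0 if dist < 2 else 0.0)

    cur = (rng.randrange(rows), rng.randrange(cols))
    best = cur
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9       # linear cooling
        cand = (rng.randrange(rows), rng.randrange(cols))
        delta = score(cand) - score(cur)
        # Accept improvements always; worse moves with probability exp(delta/t)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur = cand
        if score(cur) > score(best):
            best = cur
    return best, score(best)

oil = [[10, 20, 30],
       [15, 80, 25],
       [ 5, 40, 12]]
best, value = anneal_infill(oil, existing=[(0, 0)])
```

The flexibility the report highlights comes from the objective function: production-data constraints can be folded into `score` as extra penalty terms without changing the search itself.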

  2. Petroleum reservoir data for testing simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, J.M.; Harrison, W.

    1980-09-01

    This report consists of reservoir pressure and production data for 25 petroleum reservoirs. Included are 5 data sets for single-phase (liquid) reservoirs, 1 data set for a single-phase (liquid) reservoir with pressure maintenance, 13 data sets for two-phase (liquid/gas) reservoirs and 6 for two-phase reservoirs with pressure maintenance. Also given are ancillary data for each reservoir that could be of value in the development and validation of simulation models. A bibliography is included that lists the publications from which the data were obtained.

  3. Gravity observations for hydrocarbon reservoir monitoring

    OpenAIRE

    Glegola, M.A.

    2013-01-01

    In this thesis the added value of gravity observations for hydrocarbon reservoir monitoring and characterization is investigated. Reservoir processes and reservoir types most suitable for gravimetric monitoring are identified. Major noise sources affecting time-lapse gravimetry are analyzed. The added value of gravity data for reservoir monitoring and characterization is analyzed within the closed-loop reservoir management concept. Synthetic 2D and 3D numerical experiments are performed where var...

  4. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    Science.gov (United States)

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively to support domain applications in which multi-attribute sensory data are queried from the network continuously and periodically. Often, sensory data do not vary significantly over short time durations. In this setting, sensory data gathered at a certain time slot can be used to answer concurrent queries and may be reused to answer forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that the sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data less likely to be of interest to forthcoming queries are cached in the head nodes of divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing these two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
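
    The popularity bookkeeping described above can be sketched as follows; the class name, window length, and attribute names are illustrative assumptions, not the paper's notation. Popularity is the query count per attribute over a sliding window of recent time slots, and the most popular attributes are placed at the sink while the rest go to grid-cell head nodes.

    ```python
    from collections import Counter, deque

    class PopularityCache:
        """Sketch of two-tier popularity-based placement (names are illustrative).
        Popularity of an attribute = number of queries requesting it over the
        last `window` time slots; the `top_k` most popular attributes are cached
        at the sink, the remainder at grid-cell head nodes."""

        def __init__(self, window=3, top_k=2):
            self.top_k = top_k
            self.slots = deque(maxlen=window)  # one query Counter per time slot

        def record_slot(self, queried_attributes):
            self.slots.append(Counter(queried_attributes))

        def popularity(self):
            total = Counter()
            for slot in self.slots:
                total += slot
            return total

        def placement(self):
            ranked = [a for a, _ in self.popularity().most_common()]
            return {"sink": ranked[:self.top_k], "grid_heads": ranked[self.top_k:]}

    cache = PopularityCache(window=3, top_k=2)
    cache.record_slot(["temperature", "humidity", "temperature"])
    cache.record_slot(["temperature", "light"])
    cache.record_slot(["humidity", "temperature"])
    print(cache.placement()["sink"])  # ['temperature', 'humidity']
    ```

    The `deque(maxlen=...)` automatically drops the oldest slot, so popularity always reflects only recent query behavior.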

  5. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess the accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaicked into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees, and shrubs excluding Tamarix) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture measure was used as an additional feature, along with color, for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km² (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global positioning system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.
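
    The abstract does not name the six texture measures evaluated, but a common choice in this literature is statistics over a gray-level co-occurrence matrix (GLCM). The sketch below computes GLCM contrast for one image window; the number of gray levels, the pixel offset, and the window size are all assumptions for illustration.

    ```python
    import numpy as np

    def glcm_contrast(window, levels=8, dx=1, dy=0):
        """GLCM contrast for one window (illustrative texture measure only).
        Pixels are quantized to `levels` gray levels; co-occurring pairs at
        offset (dy, dx) are counted, normalized, and weighted by (i - j)^2."""
        w = window.astype(float)
        q = (w / w.max() * (levels - 1)).astype(int) if w.max() > 0 else w.astype(int)
        glcm = np.zeros((levels, levels))
        rows, cols = q.shape
        for i in range(rows - dy):
            for j in range(cols - dx):
                glcm[q[i, j], q[i + dy, j + dx]] += 1  # count co-occurring pairs
        glcm /= glcm.sum() or 1.0                      # normalize to probabilities
        ii, jj = np.indices(glcm.shape)
        return float(((ii - jj) ** 2 * glcm).sum())

    # A uniform patch has zero contrast; a checkerboard has high contrast.
    uniform = np.full((5, 5), 7)
    checker = (np.indices((5, 5)).sum(axis=0) % 2) * 7
    ```

    In a classification pipeline, such a texture value would be computed per moving window and stacked as an extra band alongside the color channels.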

  6. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society, and notably science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)
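
    Cooperative matching over a DHT relies on every agent independently mapping a key (a task or a cached data item) to the same owner node, with no central matcher. A minimal consistent-hash ring illustrates the idea; the node names and the choice of hash are illustrative, and this is not the Task Queue's actual design.

    ```python
    import hashlib
    from bisect import bisect_right

    class HashRing:
        """Minimal consistent-hash ring of the kind a DHT uses so that every
        agent can compute, locally, which node owns a given key."""

        def __init__(self, nodes):
            self.ring = sorted((self._h(n), n) for n in nodes)

        @staticmethod
        def _h(key):
            return int(hashlib.sha256(key.encode()).hexdigest(), 16)

        def owner(self, key):
            hashes = [h for h, _ in self.ring]
            # First node point clockwise from the key's hash, wrapping around.
            i = bisect_right(hashes, self._h(key)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["agent-1", "agent-2", "agent-3"])
    task_owner = ring.owner("task-42")  # every agent computes the same answer
    ```

    The useful property is locality of disruption: when a node leaves, only the keys it owned are remapped, which is what makes decentralized caching and matching scale.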

  7. Allegheny County Older Housing

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Older housing can impact the quality of the occupant's health in a number of ways, including lead exposure, housing quality, and factors that may exacerbate...

  8. Allegheny County Housing Tenure

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Home ownership provides a number of financial, social, and health benefits to American families. Especially in areas with housing price appreciation, home ownership...

  9. Allegheny County Cemetery Outlines

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Outlines of public and private cemeteries greater than one acre in size. Areas were delineated following a generalized line along the outside edge of the area....

  10. Allegheny County Dog Licenses

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — A list of dog license dates, dog breeds, and dog name by zip code. Currently this dataset does not include City of Pittsburgh dogs.

  11. Allegheny County Sheriff Sales

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — List of properties up for auction at a Sheriff Sale. Datasets labeled "Current" contain this month's postings, while those labeled "Archive" contain a running list...

  12. Allegheny County Vacant Properties

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Mail carriers routinely collect data on addresses no longer receiving mail due to vacancy. This vacancy data is reported quarterly at census tract geographies in the...

  13. HYDRAULICS, GREER COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Approximate hydraulic analysis was performed on streams in Greer County, Oklahoma. The approximate analysis was performed in accordance with the FEMA G&S. Hydraulic...

  14. Durham County Demographic Profile

    Data.gov (United States)

    City and County of Durham, North Carolina — (a) Includes persons reporting only one race.(b) Hispanics may be of any race, so also are included in applicable race categories. D: Suppressed to avoid disclosure...

  15. Allegheny County Hydrology Areas

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Hydrology Feature Dataset contains photogrammetrically compiled water drainage features and structures including rivers, streams, drainage canals, locks, dams,...

  16. Allegheny County Hydrology Lines

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Hydrology Feature Dataset contains photogrammetrically compiled water drainage features and structures including rivers, streams, drainage canals, locks, dams,...

  17. Allegheny County Walk Scores

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Walk Score measures the walkability of any address using a patented system developed by the Walk Score company. For each 2010 Census Tract centroid, Walk Score...

  18. Spatial-temporal analysis of Cache Valley virus (Bunyaviridae: Orthobunyavirus) infection in anopheline and culicine mosquitoes (Diptera: Culicidae) in the northeastern United States, 1997-2012.

    Science.gov (United States)

    Andreadis, Theodore G; Armstrong, Philip M; Anderson, John F; Main, Andrew J

    2014-10-01

    Cache Valley virus (CVV) is a mosquito-borne bunyavirus (family Bunyaviridae, genus Orthobunyavirus) that is enzootic throughout much of North and Central America. White-tailed deer (Odocoileus virginianus) have been incriminated as important reservoir and amplification hosts. CVV has been found in a diverse array of mosquito species, but the principal vectors are unknown. A 16-year study was undertaken to identify the primary mosquito vectors in Connecticut, quantify seasonal prevalence rates of infection, and define the spatial geographic distribution of CVV in the state as a function of land use and white-tailed deer populations, which have increased substantially over this period. CVV was isolated from 16 mosquito species in seven genera, almost all of which were multivoltine and mammalophilic. Anopheles (An.) punctipennis was incriminated as the most consistent and likely vector in this region on the basis of yearly isolation frequencies and the spatial geographic distribution of infected mosquitoes. Other species exhibiting frequent temporal and moderate spatial geographic patterns of virus isolation within the state included Ochlerotatus (Oc.) trivittatus, Oc. canadensis, Aedes (Ae.) vexans, and Ae. cinereus. New isolation records for CVV were established for An. walkeri, Culiseta melanura, and Oc. cantator. Other species from which CVV was isolated included An. quadrimaculatus, Coquillettidia perturbans, Culex salinarius, Oc. japonicus, Oc. sollicitans, Oc. taeniorhynchus, Oc. triseriatus, and Psorophora ferox. Mosquitoes infected with CVV were equally distributed throughout urban, suburban, and rural locales, and infection rates were not directly associated with the localized abundance of white-tailed deer, possibly due to their saturation throughout the region. Virus activity in mosquitoes was episodic with no consistent pattern from year to year, and fluctuations in yearly seasonal infection rates did not appear to be directly impacted by overall

  19. Reservoir-induced seismicity at Castanhao reservoir, NE Brazil

    Science.gov (United States)

    Nunes, B.; do Nascimento, A.; Ferreira, J.; Bezerra, F.

    2012-04-01

    Our case study, the Castanhão reservoir, is located in NE Brazil on crystalline rock of the Borborema Province. The Borborema Province is a major Proterozoic-Archean terrain formed as a consequence of the convergence and collision of the São Luis-West Africa craton and the São Francisco-Congo-Kasai cratons. The reservoir is impounded by a 60 m high earth-filled dam and can store up to 4.5 billion m3 of water. Construction began in 1990 and finished in October 2003. The first identified reservoir-induced events occurred in 2003, when the water level was still low. The water reached the spillway for the first time in January 2004 and, after that, an increase in seismicity occurred. The present study shows the results of a campaign conducted from November 19, 2009 to December 31, 2010 at the Castanhão reservoir. We deployed a network of six three-component digital seismographic stations around one of the areas of the reservoir. We analyzed a total of 77 events that were recorded by at least four stations. To determine hypocenters and origin times, we used the HYPO71 program (Lee & Lahr, 1975), assuming a half-space model with the following parameters: VP = 5.95 km/s and VP/VS = 1.73. We also relocated these events using the HYPODD program (Waldhauser & Ellsworth, 2000). The input data were catalogue data, with all absolute times. The results of the spatio-temporal analysis suggest that different clusters at different areas and depths are triggered at different times due to a mixture of: (i) pore pressure increase due to diffusion, and (ii) increase of pore pressure due to the reservoir load.

  20. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    Science.gov (United States)

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

    Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize them, account for their impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ±12.84; 70% female) recruited from an HF clinic completed the CACHS in 2014, and results were evaluated using classical test theory and item response theory. Items would be deleted for low (<.05) or high (>.95) endorsement, low (<.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels (<.5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance, eight items; monitoring, seven items; and management, five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05) and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.
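
    For reference, the internal-consistency statistic reported above (Cronbach's alpha) is computed from the item variances and the variance of the total score. The tiny score matrix below is invented purely for illustration; it is not CACHS data.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        x = np.asarray(scores, dtype=float)
        k = x.shape[1]
        item_variance_sum = x.var(axis=0, ddof=1).sum()  # sum of item variances
        total_variance = x.sum(axis=1).var(ddof=1)       # variance of total score
        return k / (k - 1) * (1 - item_variance_sum / total_variance)

    # Perfectly parallel items (every item gives the same score) yield alpha = 1.0
    perfect = [[1, 1], [2, 2], [3, 3]]
    ```

    Sample variances (`ddof=1`) are used throughout, matching the usual convention for this statistic.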

  1. Simulation of flow and habitat conditions under ice, Cache la Poudre River - January 2006

    Science.gov (United States)

    Waddle, Terry

    2007-01-01

    The U.S. Forest Service authorizes the occupancy and use of Forest Service lands by various projects, including water storage facilities, under the Federal Land Policy and Management Act. Federal Land Policy and Management Act permits can be renewed at the end of their term. The U.S. Forest Service analyzes the environmental effects for the initial issuance or renewal of a permit and the terms and conditions (for example, mitigations plans) contained in the permit for the facilities. The U.S. Forest Service is preparing an environmental impact statement (EIS) to determine the conditions for the occupancy and use for Long Draw Reservoir on National Forest System administered lands. The scope of the EIS includes evaluating current operations and effects to fish habitat of an ongoing winter release of 0.283 m3/s (10 ft3/s) from headwater reservoirs as part of a previously issued permit. The field conditions observed during this study included this release.

  2. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2003-10-01

    This report presents the work done so far on the Hunton Formation in the West Carney Field, Lincoln County, Oklahoma. West Carney Field produces oil and gas from the Hunton Formation. The field was developed starting in 1995. Some of the unique characteristics of the field include a decreasing water-oil ratio over time, a decreasing gas-oil ratio at the beginning of production, an inability to calculate oil reserves in the field based on log data, and sustained oil rates over long periods of time. To understand these unique characteristics, an integrated evaluation was undertaken. Production data from the field were meticulously collected, and over forty wells were cored and logged to better understand the petrophysical and engineering characteristics. Based on the work done in this budget period so far, the preliminary conclusions can be listed as follows: (1) Based on PVT analysis, the field most likely contains volatile oil with a bubble point close to the initial reservoir pressure of 1,900 psia. (2) The initial oil in place that is in contact with existing wells can be determined by a newly developed material balance technique. The oil in place that is in communication is significantly less than that determined by volumetric analysis, indicating the heterogeneous nature of the reservoir. The oil in place determined by material balance is greater than that determined by decline curve analysis. This difference may point to additional locations for infill wells. (3) The core and log evaluation indicates that the intermediate pores (porosity between 2 and 6%) are very important in determining the production potential of the reservoir. These intermediate-size pores contain high oil saturation. (4) The limestone part of the reservoir, although low in porosity (mostly less than 6%), is much more prolific in terms of oil production than the dolomite portion of the reservoir. The reason for this difference is the higher oil saturation in the low-porosity region.
As the average porosity

  3. Application of Integrated Reservoir Management and Reservoir Characterization to Optimize Infill Drilling

    Energy Technology Data Exchange (ETDEWEB)

    None

    1998-01-01

    Infill drilling of wells on a uniform spacing without regard to reservoir performance and characterization does not optimize reservoir development because it fails to account for the complex nature of reservoir heterogeneities present in many low permeability reservoirs, and carbonate reservoirs in particular. New and emerging technologies, such as geostatistical modeling, rigorous decline curve analysis, reservoir rock typing, and special core analysis, can be used to develop a 3-D simulation model for prediction of infill locations.

  4. Cloud computing and Reservoir project

    International Nuclear Information System (INIS)

    Beco, S.; Maraschini, A.; Pacini, F.; Biran, O.

    2009-01-01

    The support for complex services delivery is becoming a key point in current internet technology. Current trends in internet applications are characterized by on-demand delivery of ever-growing amounts of content. The future internet of services will have to deliver content-intensive applications to users with quality-of-service and security guarantees. This paper describes the Reservoir project and the challenge of reliable and effective delivery of services as utilities in a commercial scenario. It starts by analyzing the needs of a future infrastructure provider and introducing the key concept of a service-oriented architecture that combines virtualisation-aware grid with grid-aware virtualisation, while being driven by business service management. The article then focuses on the benefits and the innovations derived from the Reservoir approach. Finally, a high-level view of the Reservoir general architecture is illustrated.

  5. Reservoir effects in radiocarbon dating

    International Nuclear Information System (INIS)

    Head, M.J.

    1997-01-01

    Full text: The radiocarbon dating technique depends essentially on the assumption that atmospheric carbon dioxide containing the cosmogenic radioisotope 14C enters into a state of equilibrium with all living material (plants and animals) as part of the terrestrial carbon cycle. Terrestrial reservoir effects occur when the atmospheric 14C signal is diluted by local effects where systems depleted in 14C mix with systems that are in equilibrium with the atmosphere. Naturally, this can occur with plant material growing close to an active volcano adding very old CO2 to the atmosphere (the original 14C has completely decayed). It can also occur in highly industrialised areas where fossil fuel derived CO2 dilutes the atmospheric signal. A terrestrial reservoir effect can occur in the case of fresh water shells living in rivers or lakes where there is an input of ground water from springs or a raising of the water table. Soluble bicarbonate derived from the dissolution of very old limestone produces a 14C dilution effect. Land snail shells and stream carbonate depositions (tufas and travertines) can be affected by a similar mechanism. Alternatively, in specific cases, these reservoir effects may not occur. This means that general interpretations assuming quantitative values for these terrestrial effects are not possible. Each microenvironment associated with samples being analysed needs to be evaluated independently. Similarly, the marine environment produces reservoir effects. With respect to marine shells and corals, the water depth at which carbonate growth occurs can significantly affect quantitative 14C dilution, especially in areas where very old water is uplifted, mixing with top layers of water that undergo significant exchange with atmospheric CO2. Hence, generalisations with respect to the marine reservoir effect also pose problems. These can be exacerbated by the mixing of sea water with either terrestrial water in estuaries, or ground water where

  6. Sedimentological and Geomorphological Effects of Reservoir Flushing: The Cachi Reservoir, Costa Rica, 1996

    DEFF Research Database (Denmark)

    Brandt, Anders; Swenning, Joar

    1999-01-01

    Physical geography, hydrology, geomorphology, sediment transport, erosion, sedimentation, dams, reservoirs

  7. Application of Integrated Reservoir Management and Reservoir Characterization to Optimize Infill Drilling

    Energy Technology Data Exchange (ETDEWEB)

    P. K. Pande

    1998-10-29

    Initial drilling of wells on a uniform spacing, without regard to reservoir performance and characterization, must become a process of the past. Such efforts do not optimize reservoir development as they fail to account for the complex nature of reservoir heterogeneities present in many low permeability reservoirs, and carbonate reservoirs in particular. These reservoirs are typically characterized by: (1) large, discontinuous pay intervals; (2) vertical and lateral changes in reservoir properties; (3) low reservoir energy; (4) high residual oil saturation; and (5) low recovery efficiency.

  8. SIRIU RESERVOIR, BUZAU RIVER (ROMANIA

    Directory of Open Access Journals (Sweden)

    Daniel Constantin DIACONU

    2008-06-01

    Full Text Available Siriu reservoir owes its creation to the dam built on the Buzau River in the town that bears the same name. The reservoir serves hydropower generation, attenuates maximum flows, and provides water to the localities below. Partial exploitation of the lake began in 1984; since that time, the initial bed of the river has accumulated large quantities of alluvia, reducing the retention capacity of the lake, which originally had a volume of 125 million m3. The changes produced have been determined by repeated topographic surveys of the lake bottom.

  9. Trap-efficiency study, Highland Creek flood-retarding reservoir near Kelseyville, California, water years 1966-77

    Science.gov (United States)

    Trujillo, L.F.

    1982-01-01

    This investigation is part of a nationwide study of trap efficiency of detention reservoirs. In this report, trap efficiency was computed from reservoir inflow and outflow sediment data and from reservoir survey and outflow data. Highland Creek Reservoir is a flood-retarding reservoir located in Lake County, near Kelseyville, California. This reservoir has a maximum storage capacity of 3,199 acre-feet and permanent pool storage of 921 acre-feet. Mean annual rainfall for the 14.1 square-mile drainage area above Highland Creek Dam was 29 inches during the December 1965 to September 1977 study period. Resultant mean annual runoff was 17,100 acre-feet. Total reservoir inflow for the 11.8-year study period was 202,000 acre-feet, transporting an estimated 126,000 tons (10,700 tons per year) of suspended sediment. Total reservoir outflow for the same period was 188,700 acre-feet, including 15,230 tons (1,290 tons per year) of sediment. Estimated trap efficiency for the study period was 88 percent, based on estimated sediment inflow and measured sediment outflow.
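
    The reported 88 percent trap efficiency follows directly from the sediment budget above, as a quick check shows:

    ```python
    def trap_efficiency_percent(sediment_inflow_tons, sediment_outflow_tons):
        """Trap efficiency: percentage of inflowing sediment retained by the reservoir."""
        return 100.0 * (sediment_inflow_tons - sediment_outflow_tons) / sediment_inflow_tons

    # Estimated inflow and measured outflow for the 1965-1977 study period
    print(round(trap_efficiency_percent(126_000, 15_230)))  # 88
    ```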

  10. Reservoir Sedimentation Based on Uncertainty Analysis

    Directory of Open Access Journals (Sweden)

    Farhad Imanshoar

    2014-01-01

    Full Text Available Reservoir sedimentation can result in loss of much needed reservoir storage capacity, reducing the useful life of dams. Thus, sufficient sediment storage capacity should be provided for the reservoir design stage to ensure that sediment accumulation will not impair the functioning of the reservoir during the useful operational-economic life of the project. However, an important issue to consider when estimating reservoir sedimentation and accumulation is the uncertainty involved in reservoir sedimentation. In this paper, the basic factors influencing the density of sediments deposited in reservoirs are discussed, and uncertainties in reservoir sedimentation have been determined using the Delta method. Further, Kenny Reservoir in the White River Basin in northwestern Colorado was selected to determine the density of deposits in the reservoir and the coefficient of variation. The results of this investigation have indicated that by using the Delta method in the case of Kenny Reservoir, the uncertainty regarding accumulated sediment density, expressed by the coefficient of variation for a period of 50 years of reservoir operation, could be reduced to about 10%. Results of the Delta method suggest an applicable approach for dead storage planning via interfacing with uncertainties associated with reservoir sedimentation.
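
    The Delta method propagates input uncertainty through a function via its first-order Taylor expansion. Below is a generic sketch; the actual deposit-density relation and input statistics for Kenny Reservoir are not given in the abstract, so the function `g` and the numbers are illustrative stand-ins.

    ```python
    import math

    def delta_method_cv(g, means, variances, h=1e-6):
        """Coefficient of variation of Y = g(X1,...,Xn) by the Delta method:
        Var(Y) ~ sum_i (dg/dx_i)^2 * Var(X_i), assuming independent inputs.
        Gradients are approximated by central differences."""
        grads = []
        for i, m in enumerate(means):
            hi, lo = list(means), list(means)
            hi[i], lo[i] = m + h, m - h
            grads.append((g(*hi) - g(*lo)) / (2 * h))  # central difference
        var_y = sum(d * d * v for d, v in zip(grads, variances))
        return math.sqrt(var_y) / g(*means)

    # Example: product of two independent inputs, each with a 5% CV;
    # the Delta method gives CV(Y) ~ sqrt(0.05^2 + 0.05^2) ~ 7.1%.
    cv = delta_method_cv(lambda a, b: a * b, means=[10.0, 20.0], variances=[0.25, 1.0])
    ```

    For products and quotients of independent inputs, the Delta method reduces to adding the squared CVs, which is a handy sanity check on the numerical result.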

  11. Potosi Reservoir Modeling; History and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Valerie; Leetaru, Hannes

    2014-09-30

    As a part of a larger project co-funded by the United States Department of Energy (US DOE) to evaluate the potential of formations within the Cambro-Ordovician strata above the Mt. Simon as potential targets for carbon sequestration in the Illinois and Michigan Basins, the Illinois Clean Coal Institute (ICCI) requested Schlumberger to evaluate the potential injectivity and carbon dioxide (CO₂) plume size of the Cambrian Potosi Formation. The evaluation of this formation was accomplished using wireline data, core data, pressure data, and seismic data from two projects: the US DOE-funded Illinois Basin–Decatur Project being conducted by the Midwest Geological Sequestration Consortium in Macon County, Illinois, as well as data from the Illinois – Industrial Carbon Capture and Sequestration (IL-ICCS) project funded through the American Recovery and Reinvestment Act. In 2010, technical performance evaluations on the Cambrian Potosi Formation were performed through reservoir modeling. The data included formation tops from mud logs, well logs from the Verification Well 1 (VW1) and the Injection Well (CCS1), structural and stratigraphic information from three-dimensional (3D) seismic data, and field data from several waste water injection wells for the Potosi Formation. The intention was for two million tonnes per annum (MTPA) of CO₂ to be injected for 20 years into the Potosi Formation. In 2013, updated reservoir models for the Cambrian Potosi Formation were evaluated. The data included formation tops from mud logs; well logs from the CCS1, VW1, and Verification Well 2 (VW2) wells; structural and stratigraphic information from a larger 3D seismic survey; and field data from several waste water injection wells for the Potosi Formation. The objective is to simulate the injection of CO₂ at a rate of 3.5 million tons per annum (3.2 million tonnes per annum [MTPA]) for 30 years, 106 million tons (96 MT) in total, into the Potosi Formation. The Potosi geomodeling efforts have evolved

  12. Thousands of RNA-cached copies of whole chromosomes are present in the ciliate Oxytricha during development.

    Science.gov (United States)

    Lindblad, Kelsi A; Bracht, John R; Williams, April E; Landweber, Laura F

    2017-08-01

    The ciliate Oxytricha trifallax maintains two genomes: a germline genome that is active only during sexual conjugation and a transcriptionally active, somatic genome that derives from the germline via extensive sequence reduction and rearrangement. Previously, we found that long noncoding (lnc) RNA "templates" (telomere-containing, RNA-cached copies of mature chromosomes) provide the information to program the rearrangement process. Here we used a modified RNA-seq approach to conduct the first genome-wide search for endogenous, telomere-to-telomere RNA transcripts. We find that during development, Oxytricha produces long noncoding RNA copies of over 10,000 of its 16,000 somatic chromosomes, consistent with a model in which Oxytricha transmits an RNA-cached copy of its somatic genome to the sexual progeny. Both the primary sequence and expression profile of a somatic chromosome influence the temporal distribution and abundance of individual template RNAs. This suggests that Oxytricha may undergo multiple rounds of DNA rearrangement during development. These observations implicate a complex set of thousands of long RNA molecules in the wiring and maintenance of a highly elaborate somatic genome architecture. © 2017 Lindblad et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  13. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    Science.gov (United States)

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  14. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching-proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file-system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...
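The second, on-demand variant can be illustrated with a minimal sketch (an assumed design for illustration, not the actual XRootd proxy code): reads are served from locally cached fixed-size blocks, and only the missing blocks are fetched from the remote source.

```python
# Minimal sketch of an on-demand, block-level caching proxy (illustrative
# only; the real XRootd proxy uses MB-sized blocks and persistent disk).
BLOCK_SIZE = 4  # tiny block size, for illustration

class CachingProxy:
    def __init__(self, remote_read):
        self.remote_read = remote_read  # callable(offset, length) -> bytes
        self.blocks = {}                # block index -> cached bytes

    def read(self, offset, length):
        end = offset + length
        first, last = offset // BLOCK_SIZE, (end - 1) // BLOCK_SIZE
        data = bytearray()
        for i in range(first, last + 1):
            if i not in self.blocks:    # cache miss: download this block
                self.blocks[i] = self.remote_read(i * BLOCK_SIZE, BLOCK_SIZE)
            data += self.blocks[i]
        start = offset - first * BLOCK_SIZE
        return bytes(data[start:start + length])

remote = b"abcdefghijkl"
proxy = CachingProxy(lambda off, n: remote[off:off + n])
print(proxy.read(2, 5))  # → b"cdefg"; only blocks 0 and 1 are fetched
```

Repeated reads over the same range are then served entirely from the local block cache, which is the property that makes partial-file caching attractive for sparse, random access.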

  15. Contrasting patterns of survival and dispersal in multiple habitats reveal an ecological trap in a food-caching bird.

    Science.gov (United States)

    Norris, D Ryan; Flockhart, D T Tyler; Strickland, Dan

    2013-11-01

    A comprehensive understanding of how natural and anthropogenic variation in habitat influences populations requires long-term information on how such variation affects survival and dispersal throughout the annual cycle. Gray jays Perisoreus canadensis are widespread boreal resident passerines that use cached food to survive over the winter and to begin breeding during the late winter. Using multistate capture-recapture analysis, we examined apparent survival and dispersal in relation to habitat quality in a gray jay population over 34 years (1977-2010). Prior evidence suggests that natural variation in habitat quality is driven by the proportion of conifers on territories because of their superior ability to preserve cached food. Although neither adults (>1 year) nor juveniles (<1 year) [...] ecological trap for birds. As shown in a previous study, reproductive success, but not survival, is sensitive to natural variation in habitat quality, suggesting that gray jays, despite living in harsh winter conditions, likely favor the allocation of limited resources towards self-maintenance over reproduction.

  16. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    Energy Technology Data Exchange (ETDEWEB)

    Nicolae, Bogdan [IBM Research, Dublin, Ireland; Riteau, Pierre [University of Chicago, Chicago, IL, USA; Keahey, Kate [Argonne National Laboratory, Lemont, IL, USA

    2015-10-01

    Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks with better performance characteristics (and thus higher cost) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache-size selection to meet the desired performance level at minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
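The cache-size selection step can be reduced to a small optimization: among candidate cache configurations with predicted throughput and cost, pick the cheapest one that meets the target. The sketch below uses invented numbers and a hypothetical `select_cache_size` helper, purely to illustrate the idea, not the paper's actual prediction model.

```python
# Hypothetical cache-size selection: cheapest configuration whose predicted
# peak throughput meets the target. All numbers below are invented.
def select_cache_size(candidates, target_mbps):
    """candidates: list of (size_gb, predicted_mbps, cost_per_hour)."""
    feasible = [c for c in candidates if c[1] >= target_mbps]
    if not feasible:
        raise ValueError("no cache size meets the performance target")
    return min(feasible, key=lambda c: c[2])

candidates = [
    (0,    80, 0.00),  # no cache: persistent virtual disk only
    (50,  140, 0.12),
    (100, 210, 0.25),
    (200, 260, 0.55),
]
print(select_cache_size(candidates, target_mbps=200))  # → (100, 210, 0.25)
```

In the paper's setting the predicted throughput per cache size would come from the performance model rather than a static table, but the selection principle is the same.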

  17. Data assimilation in reservoir management

    NARCIS (Netherlands)

    Rommelse, J.R.

    2009-01-01

    The research presented in this thesis aims at improving computer models that allow simulations of water, oil and gas flows in subsurface petroleum reservoirs. This is done by integrating, or assimilating, measurements into physics-based models. In recent years petroleum technology has developed
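The core idea of assimilating a measurement into a model forecast can be shown with a textbook one-dimensional Kalman-style update, which blends the two estimates in proportion to their uncertainties. This is a generic illustration only, not the (more elaborate, ensemble-based) methods developed in the thesis.

```python
# One-dimensional data-assimilation (Kalman) update: combine a model
# forecast with a measurement, weighting each by its variance.
def assimilate(x_forecast, p_forecast, y_measured, r_measure):
    """Return the updated state estimate and its variance."""
    k = p_forecast / (p_forecast + r_measure)        # Kalman gain in [0, 1]
    x_updated = x_forecast + k * (y_measured - x_forecast)
    p_updated = (1.0 - k) * p_forecast               # uncertainty shrinks
    return x_updated, p_updated

# e.g. forecast reservoir pressure 250 bar (variance 25) vs a well
# measurement of 240 bar (variance 25): equal trust gives the midpoint.
x, p = assimilate(250.0, 25.0, 240.0, 25.0)
print(x, p)  # → 245.0 12.5
```

With equally uncertain forecast and measurement the gain is 0.5, so the update lands halfway between them and the posterior variance is halved.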

  18. Reservoirs in the United States

    Science.gov (United States)

    Harbeck, G. Earl

    1948-01-01

    Man has engaged in the control of flowing water since history began. Among his early recorded efforts were reservoirs for municipal water-supplies constructed near ancient Jerusalem to store water which was brought there in masonry conduits. 1/  Irrigation was practiced in Egypt as early as 2000 B. C. There the "basin system" was used from ancient times until the 19th century. The land was divided into basins of approximately 40,000 acres, separated by earthen dikes. 2/  Flood waters of the Nile generally inundated the basins through canals, many of which were built by the Pharaohs. Even then the economic consequences of a deficient annual flood were recognized. Lake Maeris, which according to Herodotus was an ancient storage reservoir, is said to have had an area of 30,000 acres. In India, the British found at the time of their occupancy of the Presidency of Madras about 50,000 reservoirs for irrigation, many believed to be of ancient construction. 3/ During the period 115-130 A. D. reservoirs were built to improve the water-supply of Athens. Much has been written concerning the elaborate collection and distribution system built to supply Rome, and parts of it remain to this day as monuments to the engineering skill employed by the Romans in solving the problem of large-scale municipal water-supplies.

  19. Reasons for reservoir effect variability

    DEFF Research Database (Denmark)

    Philippsen, Bente

    2013-01-01

    , aquatic plants and fish from the rivers Alster and Trave range between zero and about 3,000 radiocarbon years. The reservoir age of water DIC depends to a large extent on the origin of the water and is for example correlated with precipitation amounts. These short-term variations are smoothed out in water...

  20. Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth

    Science.gov (United States)

    Pratt, Lisa; Beaty, David; Westall, Frances; Parnell, John; Poulet, François

    2010-05-01

    Mars Rover proposed for 2018 to seek signs of life and to cache samples for potential return to Earth Lisa Pratt, David Beaty, Frances Westall, John Parnell, François Poulet, and the MRR-SAG team The search for preserved evidence of life is the keystone concept for a new generation of Mars rover capable of exploring, sampling, and caching diverse suites of rocks from outcrops. The proposed mission is conceived to address two general objectives: conduct high-priority in situ science and make concrete steps towards the possible future return of samples to Earth. We propose the name Mars Astrobiology Explorer-Cacher (MAX-C) to best reflect the dual purpose of the proposed mission. The scientific objective of the proposed MAX-C would require rover access to a site with high preservation potential for physical and chemical biosignatures in order to evaluate paleo-environmental conditions, characterize the potential for preservation of biosignatures, and access multiple sequences of geological units in a search for evidence of past life and/or prebiotic chemistry. Samples addressing a variety of high-priority scientific objectives should be collected, documented, and packaged in a manner suitable for possible return to Earth by a future mission. Relevant experience from study of ancient terrestrial strata, martian meteorites, and from the Mars Exploration Rovers indicates that the proposed MAX-C's interpretive capability should include: meter to submillimeter texture (optical imaging), mineral identification, major element content, and organic molecular composition. Analytical data should be obtained by direct investigation of outcrops and should not entail acquisition of rock chips or powders. We propose, therefore, a set of arm-mounted instruments that would be capable of interrogating a relatively smooth, abraded surface by creating co-registered 2-D maps of visual texture, mineralogy and geochemical properties. This approach is judged to have particularly high