WorldWideScience

Sample records for end-use disaggregation algorithm

  1. Is disaggregation the holy grail of energy efficiency? The case of electricity

    International Nuclear Information System (INIS)

    Carrie Armel, K.; Gupta, Abhay; Shrimali, Gireesh; Albert, Adrian

    2013-01-01

    This paper aims to address two timely energy problems. First, significant low-cost energy reductions can be made in the residential and commercial sectors, but these savings have not been achievable to date. Second, billions of dollars are being spent to install smart meters, yet the energy saving and financial benefits of this infrastructure – without careful consideration of the human element – will not reach their full potential. We believe that we can address these problems by strategically marrying them, using disaggregation. Disaggregation refers to a set of statistical approaches for extracting end-use and/or appliance level data from an aggregate, or whole-building, energy signal. In this paper, we explain how appliance level data affords numerous benefits, and why using the algorithms in conjunction with smart meters is the most cost-effective and scalable solution for getting this data. We review disaggregation algorithms and their requirements, and evaluate the extent to which smart meters can meet those requirements. Research, technology, and policy recommendations are also outlined. - Highlights: ► Appliance energy use data can produce many consumer, industry, and policy benefits. ► Disaggregating smart meter data is the most cost-effective and scalable solution. ► We review algorithm requirements, and the ability of smart meters to meet them. ► Current technology identifies ∼10 appliances; minor upgrades could identify more. ► Research, technology, and policy recommendations for moving forward are outlined.

  2. Load Disaggregation Technologies: Real World and Laboratory Performance

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.; Butner, Ryan S.; Johnson, Erica M.

    2016-09-28

    Low-cost interval metering and communication technology improvements over the past ten years have enabled the maturity of load disaggregation (or non-intrusive load monitoring) technologies to better estimate and report energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility and customer-facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate energy consumption of end uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large-scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real-world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.

  3. End-use energy characterization and conservation potentials at DoD Facilities: An analysis of electricity use at Fort Hood, Texas

    Energy Technology Data Exchange (ETDEWEB)

    Akbari, H.; Konopacki, S.

    1995-05-01

    This report discusses the application of LBL's End-use Disaggregation Algorithm (EDA) to a DoD installation and presents hourly reconciled end-use data for all major building types and end uses. The project initially focused on achieving these objectives and pilot-testing the methodology at Fort Hood, Texas. Fort Hood, with over 5,000 buildings, was determined to have representative samples of nearly all of the major building types in use on DoD installations. These building types at Fort Hood include: office, administration, vehicle maintenance, shop, hospital, grocery store, retail store, car wash, church, restaurant, single-family detached housing, two- and four-plex housing, and apartment building. Up to 11 end uses were developed for each prototype, consisting of 9 electric and 2 gas; however, only electric end uses were reconciled against known data and weather conditions. The electric end uses are space cooling, ventilation, cooking, miscellaneous/plugs, refrigeration, exterior lighting, interior lighting, process loads, and street lighting. The gas end uses are space heating and hot water heating. Space heating energy-use intensities were simulated only. The EDA was applied to 10 separate feeders from the three substations at Fort Hood. The results from the analyses of these ten feeders were extrapolated to estimate energy use by end use for the entire installation. The results show that administration, residential, and barrack buildings are the largest consumers of electricity, for a total of 250 GWh per year (74% of annual consumption). By end use, cooling, ventilation, miscellaneous, and indoor lighting consume almost 84% of total electricity use. The contribution to the peak power demand is highest for the residential sector (35%, 24 MW), followed by administration buildings (30%) and barracks (14%). For the entire Fort Hood installation, cooling is 54% of the peak demand (38 MW), followed by interior lighting at 18% and miscellaneous end uses at 12%.

  4. Aggregating and Disaggregating Flexibility Objects

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja

    2015-01-01

    In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts...... and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from...

  5. Disaggregated Futures and Options Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures and Options Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  6. Disaggregated Futures-Only Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures-Only Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  7. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase-space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreements for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and small numbers of neighbors (less than 50), suggesting possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
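
    The two steps described above can be illustrated compactly. The Python sketch below (our construction with synthetic data, not the author's code) delay-embeds a coarse-scale series and then disaggregates each coarse value by averaging the fine-scale fractions of its k nearest neighbours in phase space, rescaling so the parts sum exactly to the whole:

```python
# Minimal sketch of the two steps, under simplifying assumptions:
# (1) delay-embed the coarse (e.g. 2-day) series in an m-dimensional
# phase space; (2) disaggregate each coarse value using the fine-scale
# patterns of its k nearest neighbours. Names and data are illustrative.
import numpy as np

def embed(series, m):
    """m-dimensional delay-embedded vectors of a 1-D series."""
    return np.array([series[i:i + m] for i in range(len(series) - m + 1)])

def disaggregate_knn(coarse_train, fine_train, coarse_new, m=3, k=10):
    # fine_train[i] holds the two fine-scale values summing to coarse_train[i]
    X = embed(coarse_train, m)
    out = []
    for j in range(m - 1, len(coarse_new)):
        q = coarse_new[j - m + 1:j + 1]            # current embedded vector
        nn = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
        # average the neighbours' fine-scale fractions, then rescale so
        # the disaggregated values sum exactly to the coarse value
        frac = fine_train[nn + m - 1] / coarse_train[nn + m - 1, None]
        w = frac.mean(axis=0)
        out.append(coarse_new[j] * w / w.sum())
    return np.array(out)

rng = np.random.default_rng(0)
daily = rng.gamma(2.0, 3.0, 400)                   # synthetic daily flows
two_day = daily[0::2] + daily[1::2]
fine = np.column_stack([daily[0::2], daily[1::2]])
print(disaggregate_knn(two_day[:150], fine[:150], two_day[150:])[0])
```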

  8. India Energy Outlook: End Use Demand in India to 2020

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; McNeil, Michael; Sathaye, Jayant

    2009-03-30

    Integrated economic models have been used to project both baseline and mitigation greenhouse gas emissions scenarios at the country and the global level. Results of these scenarios are typically presented at the sectoral level, such as industry, transport, and buildings, without further disaggregation. Recently, a keen interest has emerged in constructing bottom-up scenarios where technical energy saving potentials can be displayed in detail (IEA, 2006b; IPCC, 2007; McKinsey, 2007). Analysts interested in particular technologies and policies require detailed information to understand specific mitigation options in relation to business-as-usual trends. However, the limited information available for developing countries often poses a problem. In this report, we focus on analyzing energy use in India in greater detail. Results shown for the residential and transport sectors are taken from a previous report (de la Rue du Can, 2008). A complete picture of energy use at disaggregated levels is drawn to understand how energy is used in India and to put the different sources of end-use energy consumption in perspective. For each sector, drivers of energy use and technology are identified. Trends are then analyzed and used to project future growth. Results of this report provide valuable inputs to the elaboration of realistic energy efficiency scenarios.

  9. Context-Based Energy Disaggregation in Smart Homes

    Directory of Open Access Journals (Sweden)

    Francesca Paradiso

    2016-01-01

    Full Text Available In this paper, we address the problem of energy conservation and optimization in residential environments by providing users with useful information to solicit a change in consumption behavior. Taking care to strictly limit the costs of installation and management, our work proposes a Non-Intrusive Load Monitoring (NILM) approach, which consists of disaggregating the whole-house power consumption into the individual portions associated with each device. State-of-the-art NILM algorithms need monitoring data sampled at high frequency, thus requiring high costs for data collection and management. In this paper, we propose an NILM approach that relaxes the requirements on monitoring data, since it uses total active power measurements gathered at low frequency (about 1 Hz). The proposed approach is based on the use of Factorial Hidden Markov Models (FHMM) in conjunction with context information related to user presence in the house and the hourly utilization of appliances. Through a set of tests, we investigated how the use of these additional context-awareness features could improve disaggregation results with respect to the basic FHMM algorithm. The tests have been performed using Tracebase, an open dataset made of data gathered from real home environments.
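
    To make the FHMM idea concrete, here is a toy sketch (our construction, not the paper's implementation): each appliance is a two-state Markov chain with an assumed mean power per state, and exact Viterbi decoding runs over the joint product state space of the aggregate active-power signal. All numbers are illustrative, and the paper's context features (occupancy, hour of day) would enter as additional factors.

```python
# Toy FHMM-style disaggregation of a low-frequency active-power signal.
# Two appliances, two states each, a shared 2x2 transition matrix and
# Gaussian observation noise; every value here is an assumption.
import itertools
import numpy as np

power = {"fridge": [0.0, 120.0], "heater": [0.0, 1000.0]}  # W per state
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
sigma = 30.0

states = list(itertools.product([0, 1], repeat=len(power)))

def loglik(y, s):
    mean = sum(power[a][si] for a, si in zip(power, s))
    return -0.5 * ((y - mean) / sigma) ** 2

def viterbi(y_agg):
    V = [{s: loglik(y_agg[0], s) for s in states}]
    back = []
    for y in y_agg[1:]:
        row, ptr = {}, {}
        for s in states:
            def score(p):
                return V[-1][p] + sum(
                    np.log(trans[pi, si]) for pi, si in zip(p, s))
            best = max(V[-1], key=score)
            row[s], ptr[s] = score(best) + loglik(y, s), best
        V.append(row)
        back.append(ptr)
    s = max(V[-1], key=V[-1].get)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))

print(viterbi([5.0, 130.0, 1120.0, 1005.0]))  # joint on/off states per step
```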

  10. Spatial Disaggregation of Areal Rainfall Using Two Different Artificial Neural Networks Models

    Directory of Open Access Journals (Sweden)

    Sungwon Kim

    2015-06-01

    Full Text Available The objective of this study is to develop artificial neural network (ANN) models, including the multilayer perceptron (MLP) and the Kohonen self-organizing feature map (KSOFM), for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment in South Korea. A three-layer MLP model, using three training algorithms, was used to estimate areal rainfall. The Levenberg–Marquardt training algorithm was found to be more sensitive to the number of hidden nodes than the conjugate gradient and quickprop training algorithms using the MLP model. Results showed that network structures of 11-5-1 (conjugate gradient and quickprop) and 11-3-1 (Levenberg–Marquardt) were the best for estimating areal rainfall using the MLP model. The network structures of 1-5-11 (conjugate gradient and quickprop) and 1-3-11 (Levenberg–Marquardt), which are the inverse networks of the best MLP models for estimating areal rainfall, were identified for spatial disaggregation of areal rainfall using the MLP model. The KSOFM model was compared with the MLP model for spatial disaggregation of areal rainfall. Both the MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall with spatial concepts.
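
    As a rough illustration of the paired 11-5-1 and 1-5-11 structures, the sketch below trains scikit-learn MLPRegressor models on synthetic gauge data; it is not the authors' model, and all data are fabricated for the example:

```python
# An 11-5-1 MLP estimates areal rainfall from 11 point gauges; an inverse
# 1-5-11 MLP disaggregates an areal value back into 11 point estimates.
# Synthetic data stand in for the Wi-stream catchment observations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
points = rng.gamma(2.0, 5.0, size=(500, 11))   # point-gauge rainfalls
areal = points.mean(axis=1)                    # "true" areal rainfall

est = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
est.fit(points, areal)                         # 11-5-1: estimation

inv = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
inv.fit(areal.reshape(-1, 1), points)          # 1-5-11: disaggregation

print(est.predict(points[:2]).round(2))        # areal estimates
print(inv.predict([[10.0]]).round(2))          # point values for areal=10
```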

  11. An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern

    Directory of Open Access Journals (Sweden)

    Huijuan Wang

    2018-04-01

    Full Text Available Non-intrusive load monitoring (NILM), monitoring single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economic technology that is beneficial to energy utilities and the development of energy demand management strategies. The hardware costs of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale NILM application; however, low-frequency sampling data show poor performance in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach that is based on appliance consumption patterns (ILDACP). Our approach combines the Fuzzy C-means clustering algorithm, which provides an initial appliance operating status, and sub-sequence searching with Dynamic Time Warping, which retrieves individual energy consumption based on the typical power consumption pattern. Results show that the proposed approach is effective in accurately disaggregating power consumption and is suitable for situations where different appliances are operated simultaneously. Also, the approach has lower computational complexity than the Hidden Markov Model method and is easy to implement in the household without installing special equipment.
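
    The pattern-matching step can be sketched as follows; this is our simplified reading of the approach, with invented signal data: a plain DTW distance is computed between a known appliance consumption pattern and every window of the aggregate signal, and the best-matching subsequence is attributed to the appliance.

```python
# Subsequence matching with plain dynamic time warping (DTW); the paper's
# sub-sequence search is more elaborate, and all signals here are invented.
import numpy as np

def dtw(a, b):
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[len(a), len(b)]

def find_appliance(aggregate, pattern):
    L = len(pattern)
    dists = [dtw(aggregate[s:s + L], pattern)
             for s in range(len(aggregate) - L + 1)]
    start = int(np.argmin(dists))
    return start, dists[start]        # best-matching window and its cost

agg = np.array([50, 55, 52, 850, 900, 880, 860, 60, 58], dtype=float)
kettle = np.array([800, 850, 830, 820], dtype=float)  # typical pattern
print(find_appliance(agg, kettle))    # -> window starting at index 3
```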

  12. Localization of SDGs through Disaggregation of KPIs

    Directory of Open Access Journals (Sweden)

    Manohar Patole

    2018-03-01

    Full Text Available The United Nations' Agenda 2030 and Sustainable Development Goals (SDGs) pick up where the Millennium Development Goals (MDGs) left off. The SDGs set forth a formidable task for the global community and international sustainable development over the next 15 years. Learning from the successes and failures of the MDGs, government officials, development experts, and many other groups understood that localization is necessary to accomplish the SDGs, but how and what to localize remain questions to be answered. The UN Inter-Agency and Expert Group on Sustainable Development Goals (UN IAEG-SDGs) sought to answer these questions through development of the metadata behind the 17 goals, 169 associated targets, and corresponding indicators of the SDGs. Data management is key to understanding how and what to localize, but, to do it properly, the data and metadata need to be properly disaggregated. This paper reviews the utilization of disaggregation analysis for localization and demonstrates the process of identifying opportunities for subnational interventions to achieve multiple targets and indicators through the formation of new integrated key performance indicators. A case study on SDG 6: Clean Water and Sanitation is used to elucidate these points. The examples presented here are only illustrative; future research and the development of an analytical framework for localization and disaggregation of the SDGs would be a valuable tool for national and local governments, implementing partners, and other interested parties.

  13. Long term building energy demand for India: Disaggregating end use energy services in an integrated assessment modeling framework

    International Nuclear Information System (INIS)

    Chaturvedi, Vaibhav; Eom, Jiyong; Clarke, Leon E.; Shukla, Priyadarshi R.

    2014-01-01

    With increasing population, income, and urbanization, meeting the energy service demands of the building sector will be a huge challenge for Indian energy policy. Although there is broad consensus that the Indian building sector will grow and evolve over the coming century, there is little understanding of the potential nature of this evolution over the longer term. The present study uses a technologically detailed, service-based building energy model nested in the long-term, global, integrated assessment framework GCAM to produce scenarios of the evolution of the Indian buildings sector through the end of the century. The results support the idea that as India evolves toward developed-country per-capita income levels, its building sector will largely evolve to resemble those of the currently developed countries (heavy reliance on electricity both for increasing cooling loads and for a range of emerging appliance and other plug loads), albeit with unique characteristics based on its climate conditions (cooling dominating heating, even more so with climate change), on fuel preferences that may linger from the present (for example, a preference for gas for cooking), and on vestiges of its development path (including remnants of a rural poor population that uses substantial quantities of traditional biomass). - Highlights: ► Building sector final energy demand in India will grow to over five times its current level by century's end. ► Space cooling and appliance services will grow substantially in the future. ► Energy service demands will be met predominantly by electricity and gas. ► Urban centers will face huge demand for floor space and building energy services. ► A carbon tax policy will have little effect on reducing building energy demands.

  14. Smart Metering and Water End-Use Data: Conservation Benefits and Privacy Risks

    Directory of Open Access Journals (Sweden)

    Damien P. Giurco

    2010-08-01

    Full Text Available Smart metering technology for residential buildings is being trialed and rolled out by water utilities to assist with improved urban water management in a future affected by climate change. The technology can provide near real-time monitoring of where water is used in the home, disaggregated by end use (shower, toilet, clothes washing, garden irrigation, etc.). This paper explores questions regarding the degree of information detail required to assist utilities in targeting demand management programs and informing customers of their usage patterns, whilst ensuring the privacy of residents is upheld.

  15. GIS aided spatial disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Orthofer, R.; Loibl, W.

    1995-10-01

    We have applied our method to produce detailed NMVOC and NOx emission density maps for Austria. While theoretical average emission densities for the whole country would be only 5 t NMVOC and 2.5 t NOx per km², the actual emission densities range from zero in the many uninhabited areas up to more than 3,000 t/km² along major highways. In Austria, small-scale disaggregation is necessary particularly for the differentiated topography and population patterns in alpine valleys. (author)
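
    The core operation is a proportional allocation of sectoral totals over grid cells using GIS surrogate layers. A minimal sketch, assuming population as the surrogate for area sources and road length for traffic (the weights, totals, and 60/40 sector split are all invented for illustration):

```python
# Spread a national emission total over grid cells in proportion to
# surrogate weights; the allocation preserves the national total.
import numpy as np

national_nmvoc = 350_000.0                            # t/yr, illustrative
population = np.array([0, 1200, 300, 8000, 0, 450])   # per 1 km^2 cell
road_km = np.array([0.0, 2.5, 12.0, 6.0, 0.0, 1.0])

def disaggregate(total, weights):
    w = np.asarray(weights, dtype=float)
    return total * w / w.sum()

emis = 0.6 * disaggregate(national_nmvoc, population) \
     + 0.4 * disaggregate(national_nmvoc, road_km)    # assumed sector split
print((emis / 1.0).round(1))   # t/km^2 per cell (1 km^2 cells)
```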

  16. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...

  17. Disaggregated Imaging Spacecraft Constellation Optimization with a Genetic Algorithm

    Science.gov (United States)

    2014-03-27

    Thesis, Air Force Institute of Technology, Air University, Air Education and Training Command, in partial fulfillment of the requirements for the degree. "...distinct modules which, once 'assembled' on orbit, deliver the capability of the original monolithic system [5]." Jerry Sellers includes a comic in ...

  18. Energy, cost, and emission end-use profiles of homes: An Ontario (Canada) case study

    International Nuclear Information System (INIS)

    Aydinalp Koksal, Merih; Rowlands, Ian H.; Parker, Paul

    2015-01-01

    Highlights: • Hourly electricity consumption data of seven end-uses from 25 homes are analyzed. • Hourly load, cost, and emission profiles of end-uses are developed and categorized. • Side-by-side analysis of energy, cost, and environmental effects is conducted. • Behaviour and outdoor temperature based end-uses are determined. • Share of each end-use in the total daily load, cost, and emission is determined. - Abstract: Providing information on the temporal distributions of residential electricity end-uses plays a major role in determining the potential savings in residential electricity demand, cost, and associated emissions. While the majority of the studies on disaggregated residential electricity end-use data provided hourly usage profiles of major appliances, only a few of them presented analysis on the effect of hourly electricity consumption of some specific end-uses on household costs and emissions. This study presents side-by-side analysis of energy, cost, and environment effects of hourly electricity consumption of the main electricity end-uses in a sample of homes in the Canadian province of Ontario. The data used in this study are drawn from a larger multi-stakeholder project in which electricity consumption of major end-uses at 25 homes in Milton, Ontario, was monitored in five-minute intervals for six-month to two-year periods. In addition to determining the hourly price of electricity during the monitoring period, the hourly carbon intensity is determined using fuel type hourly generation and the life cycle greenhouse gas intensities specifically determined for Ontario’s electricity fuel mix. The hourly load, cost, and emissions profiles are developed for the central air conditioner, furnace, clothes dryer, clothes washer, dishwasher, refrigerator, and stove and then grouped into eight day type categories. The side-by-side analysis of categorized load, cost, and emission profiles of the seven electricity end-uses provided information on

  19. Effect of natural antioxidants on the aggregation and disaggregation ...

    African Journals Online (AJOL)

    Conclusion: High antioxidant activities were positively correlated with the inhibition of Aβ aggregation, although not with the disaggregation of pre-formed Aβ aggregates. Nevertheless, potent antioxidants may be helpful in treating Alzheimer's disease. Keywords: Alzheimer's disease, β-Amyloid, Aggregation, Disaggregation ...

  20. Modelling OAIS Compliance for Disaggregated Preservation Services

    Directory of Open Access Journals (Sweden)

    Gareth Knight

    2007-07-01

    Full Text Available The reference model for the Open Archival Information System (OAIS) is well established in the research community as a method of modelling the functions of a digital repository and as a basis on which to frame digital curation and preservation issues. In reference to the 5th anniversary review of the OAIS, it is timely to consider how it may be interpreted by an institutional repository. The paper examines methods of sharing essential functions and requirements of an OAIS between two or more institutions, outlining the practical considerations of outsourcing. It also details the approach taken by the SHERPA DP Project to introduce a disaggregated service model for institutional repositories that wish to implement preservation services.

  1. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  2. Multisite rainfall downscaling and disaggregation in a tropical urban area

    Science.gov (United States)

    Lu, Y.; Qin, X. S.

    2014-02-01

    A systematic downscaling-disaggregation study was conducted over Singapore Island, with the aim of generating high spatial and temporal resolution rainfall data under future climate-change conditions. The study consisted of two major components. The first part was to perform an inter-comparison of various alternatives of downscaling and disaggregation methods based on observed data. This included (i) single-site generalized linear model (GLM) plus K-nearest neighbor (KNN) (S-G-K) vs. multisite GLM (M-G) for spatial downscaling, (ii) HYETOS vs. KNN for single-site disaggregation, and (iii) KNN vs. MuDRain (Multivariate Rainfall Disaggregation tool) for multisite disaggregation. The results revealed that, for multisite downscaling, M-G performs better than S-G-K in covering the observed data with a lower RMSE value; for single-site disaggregation, KNN could better keep the basic statistics (i.e. standard deviation, lag-1 autocorrelation and probability of wet hour) than HYETOS; for multisite disaggregation, MuDRain outperformed KNN in fitting interstation correlations. In the second part of the study, an integrated downscaling-disaggregation framework based on M-G, KNN, and MuDRain was used to generate hourly rainfall at multiple sites. The results indicated that the downscaled and disaggregated rainfall data based on multiple ensembles from HadCM3 for the period from 1980 to 2010 could well cover the observed mean rainfall amount and extreme data, and also reasonably keep the spatial correlations both at daily and hourly timescales. The framework was also used to project future rainfall conditions under HadCM3 SRES A2 and B2 scenarios. It was indicated that the annual rainfall amount could decrease by up to 5% by the end of this century, but the rainfall of the wet season and extreme hourly rainfall could notably increase.
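
    The KNN single-site disaggregation step can be illustrated with a "method of fragments" style sketch: a daily total borrows the normalized hourly profiles of its most similar observed days. This is our simplified reading, with synthetic data standing in for the Singapore gauges:

```python
# Disaggregate a daily rainfall total into 24 hourly values using the
# hourly fragments of the k most similar historical days (illustrative).
import numpy as np

def knn_disaggregate(daily_value, hist_daily, hist_hourly, k=3):
    # hist_hourly[i] is a 24-value profile summing to hist_daily[i]
    nn = np.argsort(np.abs(hist_daily - daily_value))[:k]
    frag = (hist_hourly[nn] / hist_daily[nn, None]).mean(axis=0)
    return daily_value * frag / frag.sum()   # hourly values, exact total

rng = np.random.default_rng(1)
hist_hourly = rng.gamma(0.3, 2.0, size=(100, 24))
hist_daily = hist_hourly.sum(axis=1)
print(knn_disaggregate(15.0, hist_daily, hist_hourly).sum())  # -> 15.0
```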

  3. Healthcare Energy End-Use Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sheppy, M.; Pless, S.; Kung, F.

    2014-08-01

    NREL partnered with two hospitals (MGH and SUNY UMU) to collect data on the energy used for multiple thermal and electrical end-use categories, including preheat, heating, and reheat; humidification; service water heating; cooling; fans; pumps; lighting; and select plug and process loads. Additional data from medical office buildings were provided for an analysis focused on plug loads. Facility managers, energy managers, and engineers in the healthcare sector will be able to use these results to more effectively prioritize and refine the scope of investments in new metering and energy audits.

  4. Technology data characterizing refrigeration in commercial buildings: Application to end-use forecasting with COMMEND 4.0

    Energy Technology Data Exchange (ETDEWEB)

    Sezgen, O.; Koomey, J.G.

    1995-12-01

    In the United States, energy consumption is increasing most rapidly in the commercial sector. Consequently, the commercial sector is becoming an increasingly important target for state and federal energy policies and also for utility-sponsored demand-side management (DSM) programs. The rapid growth in commercial-sector energy consumption also makes it important for analysts working on energy policy and DSM issues to have access to energy end-use forecasting models that include more detailed representations of energy-using technologies in the commercial sector. These new forecasting models disaggregate energy consumption not only by fuel type, end use, and building type, but also by specific technology. The disaggregation of the refrigeration end use in terms of specific technologies, however, is complicated by several factors. First, the number of configurations of refrigeration cases and systems is quite large. Also, energy use is a complex function of the refrigeration-case properties and the refrigeration-system properties. The Electric Power Research Institute's (EPRI's) Commercial End-Use Planning System (COMMEND 4.0) and the associated data development presented in this report attempt to address the above complications and create a consistent forecasting framework. Expanding end-use forecasting models so that they address individual technology options requires characterization of the present floorstock in terms of service requirements, energy technologies used, and cost-efficiency attributes of the energy technologies that consumers may choose for new buildings and retrofits. This report describes the process by which we collected refrigeration technology data. The data were generated for COMMEND 4.0 but are also generally applicable to other end-use forecasting frameworks for the commercial sector.

  5. Biomass Resource Allocation among Competing End Uses

    Energy Technology Data Exchange (ETDEWEB)

    Newes, E.; Bush, B.; Inman, D.; Lin, Y.; Mai, T.; Martinez, A.; Mulcahy, D.; Short, W.; Simpkins, T.; Uriarte, C.; Peck, C.

    2012-05-01

    The Biomass Scenario Model (BSM) is a system dynamics model developed by the U.S. Department of Energy as a tool to better understand the interaction of complex policies and their potential effects on the biofuels industry in the United States. However, it does not currently have the capability to account for allocation of biomass resources among the various end uses, which limits its utilization in analysis of policies that target biomass uses outside the biofuels industry. This report provides a more holistic understanding of the dynamics surrounding the allocation of biomass among uses that include traditional use, wood pellet exports, bio-based products and bioproducts, biopower, and biofuels by (1) highlighting the methods used in existing models' treatments of competition for biomass resources; (2) identifying coverage and gaps in industry data regarding the competing end uses; and (3) exploring options for developing models of biomass allocation that could be integrated with the BSM to actively exchange and incorporate relevant information.

  6. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  7. Rare earth elements: end use and recyclability

    Science.gov (United States)

    Goonan, Thomas G.

    2011-01-01

    Rare earth elements are used in mature markets (such as catalysts, glassmaking, lighting, and metallurgy), which account for 59 percent of the total worldwide consumption of rare earth elements, and in newer, high-growth markets (such as battery alloys, ceramics, and permanent magnets), which account for 41 percent of the total worldwide consumption of rare earth elements. In mature market segments, lanthanum and cerium constitute about 80 percent of rare earth elements used, and in new market segments, dysprosium, neodymium, and praseodymium account for about 85 percent of rare earth elements used. Regardless of the end use, rare earth elements are not recycled in large quantities, but could be if recycling became mandated or very high prices of rare earth elements made recycling feasible.

  8. Methodology for getting the end use of energy in the industrial sector from Parana State

    International Nuclear Information System (INIS)

    Haag Filho, A.

    1990-03-01

    A methodology is presented for a low-cost survey of energy utilization in the industrial sector of Parana state, aimed at supplying data with the desired reliability and disaggregation. The obtained data shall provide elements for the adoption of short-term actions as well as serve as a basis for the elaboration of medium- and long-term scenarios. The survey shall be conducted throughout the state, comprising all fields of activity and having the following objectives: determine the state's energy consumption profile by industrial segment and by end use of energy; determine the state's energy profile with the spatial distribution of consumption; and detect the industrial segments which are most sensitive to energy substitution and/or energy conservation programs. (author)

  9. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach ... is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according ... implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful ...
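
    Although the record is truncated, the cited DSMART idea can be sketched: sample points within each polygon of the conventional map, draw a soil class per point in proportion to the polygon's stated class composition, and train a classifier on environmental covariates. A heavily simplified sketch with invented data, using a random forest as in the paper's variant:

```python
# DSMART-style resample-and-classify iteration, heavily simplified.
# Polygon compositions, covariates and sizes are all illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
composition = {0: (["A", "B"], [0.7, 0.3]),   # polygon -> (classes, props)
               1: (["B", "C"], [0.5, 0.5])}
covariates = rng.normal(size=(400, 5))        # e.g. terrain attributes
polygon_of = rng.integers(0, 2, size=400)     # polygon id per sample point

labels = np.array([rng.choice(composition[p][0], p=composition[p][1])
                   for p in polygon_of])      # class drawn per composition
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(covariates, labels)
print(model.predict(covariates[:5]))          # disaggregated soil classes
```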

  10. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  11. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...
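
    The T(n) = 3n/2 - 2 bound comes from the classic pairing trick: compare elements two at a time, then compare the pair winner with the running maximum and the pair loser with the running minimum, spending three comparisons per two elements instead of four. A small runnable reconstruction of the algorithm the excerpt is analysing:

```python
# Simultaneous maximum and minimum in 3n/2 - 2 comparisons (n even).
def max_min(a):
    assert len(a) >= 2
    comparisons = 1
    hi, lo = (a[0], a[1]) if a[0] > a[1] else (a[1], a[0])
    for i in range(2, len(a) - 1, 2):
        x, y = (a[i], a[i + 1]) if a[i] > a[i + 1] else (a[i + 1], a[i])
        comparisons += 1                 # pair comparison
        if x > hi:
            hi = x
        if y < lo:
            lo = y
        comparisons += 2                 # winner vs max, loser vs min
    if len(a) % 2:                       # odd leftover element
        hi, lo = max(hi, a[-1]), min(lo, a[-1])
        comparisons += 2
    return hi, lo, comparisons

print(max_min([7, 2, 9, 4, 1, 8]))       # -> (9, 1, 7); 3*6/2 - 2 = 7
```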

  12. Commercial demand for energy: a disaggregated approach. [Model validation for 1970-1975; forecasting to 2000

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, J.R.; Cohn, S.; Cope, J.; Johnson, W.S.

    1978-04-01

    This report describes the structure and forecasting accuracy of a disaggregated model of commercial energy use recently developed at Oak Ridge National Laboratory. The model forecasts annual commercial energy use by ten building types, five end uses, and four fuel types. Both economic (utilization rate, fuel choice, capital-energy substitution) and technological factors (equipment efficiency, thermal characteristics of buildings) are explicitly represented in the model. Model parameters are derived from engineering and econometric analysis. The model is then validated by simulating commercial energy use over the 1970-1975 time period. The model performs well both with respect to the size of the forecast error and the ability to predict turning points. The model is then used to evaluate the energy-use implications of national commercial building standards based on the ASHRAE 90-75 recommendations. 10 figs., 12 tables, 14 refs.
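
    The accounting core of such a disaggregated model reduces to summing floorstock times an energy-use intensity (EUI) over building types, end uses, and fuels. A toy illustration with invented numbers, not the ORNL model itself:

```python
# Annual commercial energy use = sum over (building type, end use) of
# floorstock x EUI; million m^2 x kWh/m^2 = GWh. Numbers illustrative.
floorstock = {"office": 120.0, "retail": 80.0}          # million m^2
eui = {("office", "lighting"): 35.0, ("office", "cooling"): 35.0,
       ("retail", "lighting"): 30.0, ("retail", "refrigeration"): 55.0}

total_gwh = sum(floorstock[b] * v for (b, u), v in eui.items())
by_end_use = {}
for (b, u), v in eui.items():
    by_end_use[u] = by_end_use.get(u, 0.0) + floorstock[b] * v
print(total_gwh, by_end_use)
```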

  13. Command Disaggregation Attack and Mitigation in Industrial Internet of Things

    Directory of Open Access Journals (Sweden)

    Peng Xun

    2017-10-01

    Full Text Available A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  14. Command Disaggregation Attack and Mitigation in Industrial Internet of Things.

    Science.gov (United States)

    Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan

    2017-10-21

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N₀ disks are moved from A to B using C as the auxiliary rod. • move_disk(A, C); the (N₀ + 1)th disk is moved from A to C directly ...
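
    For completeness, here is a runnable version of the recursion the excerpt is describing, with rod names as in the excerpt; everything else is the standard Towers of Hanoi algorithm:

```python
# Move n disks from rod A to rod C, using rod B as the auxiliary store.
def move_disk(src, dst):
    print(f"move disk: {src} -> {dst}")

def hanoi(n, src="A", aux="B", dst="C"):
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux)   # n-1 disks from A to B using C
    move_disk(src, dst)           # the largest disk goes A -> C directly
    hanoi(n - 1, aux, src, dst)   # n-1 disks from B to C using A

hanoi(3)                          # 2**3 - 1 = 7 moves
```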

  16. Towards an energy end use model

    International Nuclear Information System (INIS)

    Smith Fontana, Raul

    2003-01-01

    The general equilibrium energy end-use model proposed uses linear programming as the basic and central element for the optimization of variables defined in the economic and energy areas of the country, related to a four-factor structure: Energy, Raw Material, Capital and Labor, and related to the sectors: Residential, Commercial, Industrial, Transportation and Import/Export. Input-output coefficients are defined in an input-output matrix of processes representing the supply of Electricity (generated by nuclear (not available in Chile), hydro, gas, fuel-oil and coal), Petroleum, Imported Natural Gas (transported and distributed), National Natural Gas, LPG, Coal and Wood, and representing the demand of the Residential, Commercial, Industrial, Transportation and Import/Export sectors. There is an interaction of the final demand composition and the prices of capital, labor and taxes with the levels of operation for each process and the prices of goods and services. In addition to the prices of fuels for each annual period, the supply and demand of energy, and the total demand, it can forecast the optimum coefficients of the final demand. If the data to be collected turn out to be reasonably complete and consistent, the model will be useful for planning. A special effort should be made in specifying a certain number of typical energy activities, the available fuel options, and the selection among them following rational market decisions and conservation according to well-known economic criteria of substitution. To simulate the process of option selection offered by the activities and to allow substitutions, it is possible to introduce the logit function characterized by a Weibull distribution and the generalized substitution function characterized by constant elasticity. The model would allow, assuming different scenarios, visualization of general policies in the penetration of energy technologies. To study the penetration of electric energy generated by nuclear, in which the country does not have
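
    The logit/Weibull share rule mentioned in the abstract can be illustrated in a few lines: competing options split an end-use market in proportion to a negative power of their cost, which is equivalent to discrete choice with Weibull-distributed noise. The exponent and costs below are assumptions for illustration:

```python
# Logit market shares: share_i = c_i^-gamma / sum_j c_j^-gamma.
# Higher gamma means sharper substitution toward the cheapest option.
import numpy as np

def logit_shares(costs, gamma=4.0):
    w = np.asarray(costs, dtype=float) ** (-gamma)
    return w / w.sum()

# e.g. a cooking service contested by LPG, natural gas and electricity
print(logit_shares([10.0, 8.0, 12.0]).round(3))
```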

  17. Cellular Handling of Protein Aggregates by Disaggregation Machines.

    Science.gov (United States)

    Mogk, Axel; Bukau, Bernd; Kampinga, Harm H

    2018-01-18

    Both acute proteotoxic stresses that unfold proteins and expression of disease-causing mutant proteins that expose aggregation-prone regions can promote protein aggregation. Protein aggregates can interfere with cellular processes and deplete factors crucial for protein homeostasis. To cope with these challenges, cells are equipped with diverse folding and degradation activities to rescue or eliminate aggregated proteins. Here, we review the different chaperone disaggregation machines and their mechanisms of action. In all these machines, the coating of protein aggregates by Hsp70 chaperones represents the conserved, initializing step. In bacteria, fungi, and plants, Hsp70 recruits and activates Hsp100 disaggregases to extract aggregated proteins. In the cytosol of metazoa, Hsp70 is empowered by a specific cast of J-protein and Hsp110 co-chaperones allowing for standalone disaggregation activity. Both types of disaggregation machines are supported by small Hsps that sequester misfolded proteins. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Photoinduced disaggregation of TiO₂ nanoparticles enables transdermal penetration.

    Directory of Open Access Journals (Sweden)

    Samuel W Bennett

    Full Text Available Under many aqueous conditions, metal oxide nanoparticles attract other nanoparticles and grow into fractal aggregates as the result of a balance between electrostatic and van der Waals interactions. Although particle coagulation has been studied for over a century, the effect of light on the state of aggregation is not well understood. Since nanoparticle mobility and toxicity have been shown to be a function of aggregate size, and generally increase as size decreases, photo-induced disaggregation may have significant effects. We show that ambient light and other light sources can partially disaggregate nanoparticles from the aggregates and increase the dermal transport of nanoparticles, such that small nanoparticle clusters can readily diffuse into and through the dermal profile, likely via the interstitial spaces. The discovery of photoinduced disaggregation presents a new phenomenon that has not been previously reported or considered in coagulation theory or transdermal toxicological paradigms. Our results show that after just a few minutes of light, the hydrodynamic diameter of TiO₂ aggregates is reduced from ∼280 nm to ∼230 nm. We exposed pigskin to the nanoparticle suspension and found 200 mg kg⁻¹ of TiO₂ for skin that was exposed to nanoparticles in the presence of natural sunlight and only 75 mg kg⁻¹ for skin exposed to dark conditions, indicating the influence of light on NP penetration. These results suggest that photoinduced disaggregation may have important health implications.

  19. Disaggregating Assessment to Close the Loop and Improve Student Learning

    Science.gov (United States)

    Rawls, Janita; Hammons, Stacy

    2015-01-01

    This study examined student learning outcomes for accelerated degree students as compared to conventional undergraduate students, disaggregated by class levels, to develop strategies for then closing the loop with assessment. Using the National Survey of Student Engagement, critical thinking and oral and written communication outcomes were…

  20. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short time resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
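
    The two simplest baselines in such a comparison are easy to sketch. Under our reading of the model names, the constant model splits an hourly amount into six equal 10mn values, while a linear model tilts the split toward the wetter neighbouring hour and rescales to conserve the hourly total:

```python
# Two baseline disaggregation models for one rainy hour -> six 10mn values.
import numpy as np

def constant_disagg(hour_mm):
    return np.full(6, hour_mm / 6.0)

def linear_disagg(prev_mm, hour_mm, next_mm):
    # weights vary linearly between the neighbouring hourly amounts,
    # then are rescaled so the six values conserve the hourly total
    w = np.linspace(prev_mm + 1e-9, next_mm + 1e-9, 6)
    return hour_mm * w / w.sum()

print(constant_disagg(12.0))            # six values of 2 mm
print(linear_disagg(2.0, 12.0, 6.0))    # still sums to 12 mm
```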

  1. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified ...
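
    The mechanics can be sketched with a plain Dirichlet standing in for the paper's scaled Dirichlet (a deliberate simplification): weights derived from an auxiliary indicator act as concentration parameters, and Monte Carlo sampling propagates the disaggregation uncertainty. All numbers are illustrative:

```python
# Probabilistic disaggregation of an aggregated building count over
# hazard-resolution cells; land-cover weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n_buildings = 250                                    # municipal total
land_cover_weight = np.array([5.0, 1.0, 0.2, 3.8])   # per cell

samples = np.array([
    rng.multinomial(n_buildings, rng.dirichlet(land_cover_weight))
    for _ in range(1000)
])
print(samples.mean(axis=0))   # expected buildings per cell
print(samples.std(axis=0))    # disaggregation uncertainty per cell
```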

  2. Disaggregate energy consumption and industrial production in South Africa

    Energy Technology Data Exchange (ETDEWEB)

    Ziramba, Emmanuel [Department of Economics, University of South Africa, P.O Box 392, UNISA 0003 (South Africa)

    2009-06-15

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment. (author)
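
    The Toda-Yamamoto procedure used here augments a VAR(p) with d extra lags, d being the maximum order of integration, and then Wald-tests only the first p lag coefficients of the candidate cause. A rough numpy/scipy sketch on synthetic series standing in for the oil and output data (our construction, not the paper's code):

```python
# Toda-Yamamoto style Granger-causality test of x -> y, single equation.
import numpy as np
from scipy import stats

def ty_granger(y, x, p=2, d=1):
    lags = p + d
    n = len(y) - lags
    # regressors: constant, y lags 1..p+d, x lags 1..p+d
    Z = np.column_stack(
        [np.ones(n)]
        + [y[lags - i:len(y) - i] for i in range(1, lags + 1)]
        + [x[lags - i:len(x) - i] for i in range(1, lags + 1)])
    yy = y[lags:]
    beta, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    resid = yy - Z @ beta
    cov = (resid @ resid / (n - Z.shape[1])) * np.linalg.inv(Z.T @ Z)
    idx = np.arange(1 + lags, 1 + lags + p)   # first p lags of x only
    W = beta[idx] @ np.linalg.inv(cov[np.ix_(idx, idx)]) @ beta[idx]
    return W, stats.chi2.sf(W, df=p)          # Wald statistic, p-value

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))             # I(1) "oil consumption"
y = 0.5 * np.roll(x, 1) + rng.normal(size=300)  # output driven by lag of x
print(ty_granger(y, x))                         # small p-value expected
```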

  3. Disaggregate energy consumption and industrial production in South Africa

    International Nuclear Information System (INIS)

    Ziramba, Emmanuel

    2009-01-01

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment.

  4. The Behaviour of Disaggregated Public Expenditures and Income in Malaysia

    OpenAIRE

    Tang, Chor-Foon; Lau, Evan

    2011-01-01

    The present study attempts to re-investigate the behaviour of disaggregated public expenditures data and national income for Malaysia. This study covers the sample period of annual data from 1960 to 2007. The Bartlett-corrected trace tests proposed by Johansen (2002) were used to ascertain the presence of long run equilibrium relationship between public expenditures and national income. The results show one cointegrating vector for each specification of public expenditures. The relatively new...

  5. Size and importance of small electrical end uses in households

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, J R; Zogg, R A; Alberino, D L

    1998-07-01

    Miscellaneous end uses (an energy-consumption category in the residential sector) have recently emerged with more importance than ever before. Miscellaneous end uses are a collection of numerous end uses (often unrelated in technology or market characteristics) that individually are small consumers but when grouped together can become notable in size. The Annual Energy Outlook 1998, published by the Energy Information Administration (EIA), suggests that about 32% of residential electricity use in 1996 is attributable to miscellaneous end uses (21% from the Other Uses category and 11% from other miscellaneous categories). The EIA predicts this consumption will grow to about 47% of residential electricity use by 2010. Other studies have shown substantial consumption in this category, and forecast substantial future growth as well. However, it is not clear that the current accounting structure of the miscellaneous category is the most appropriate one, nor that the forecast growth in consumption will materialize. A bottom-up study on a collection of miscellaneous electric end uses was performed to better understand this complex, ill-defined category. Initial results show that many end uses can be categorized more appropriately, such as furnace fans, which belong in Space Heating. A recommended recategorization reduces the Other Uses category from 21% to 12% of electric consumption estimated in 1996. Thus, the consumption from miscellaneous end uses is not nearly as large as previously thought. Furthermore, the growth rate associated with small end uses is projected to be lower relative to projections from other sources.

  6. Navigating between Disaggregating Nation States and Entrenching Processes of Globalisation

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2007-01-01

    on the international community for its economic survival, this dependency on the global has the consequence that it rolls back aspects of national sovereignty, thus opening up the national hinterland to further international influences. These developments initiate a process of disaggregating state and nation, meaning...... that a gradual disarticulation of the relationship between state and nation produces new societal spaces, which are contested by non-statist interest groups and transnational, more or less deterritorialised, ethnically affiliated groups and networks. The argument forwarded in this article is that the ethnic Chinese...

  7. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  8. Disaggregation of small, cohesive rubble pile asteroids due to YORP

    Science.gov (United States)

    Scheeres, D. J.

    2018-04-01

    The implication of small amounts of cohesion within relatively small rubble pile asteroids is investigated with regard to their evolution under the persistent presence of the YORP effect. We find that below a characteristic size, which is a function of cohesive strength, density and other properties, rubble pile asteroids can enter a "disaggregation phase" in which they are subject to repeated fissions after which the formation of a stabilizing binary system is not possible. Once this threshold is passed rubble pile asteroids may be disaggregated into their constituent components within a finite time span. These constituent components will have their own spin limits - albeit potentially at a much higher spin rate due to the greater strength of a monolithic body. The implications of this prediction are discussed and include modification of size distributions, prevalence of monolithic bodies among meteoroids and the lifetime of small rubble pile bodies in the solar system. The theory is then used to place constraints on the strength of binary asteroids characterized as a function of their type.

  9. Analysis of a DSM program using an end use model; End use model wo mochiita DSM program no bunseki

    Energy Technology Data Exchange (ETDEWEB)

    Asano, H.; Takahashi, M.; Okada, K. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1997-01-30

    An end-use model used in the United States, which is advanced in demand-side management (DSM), was applied to discuss possibilities of designing and evaluating Japan's future DSM measures. The end-use model estimates energy demand based on such factors as device characteristics, meteorological data, energy prices, user characteristics, market characteristics and DSM measures. The model calculates energy demand by end use, basically by multiplying assumptions on device unit requirements, device retention rates, and numbers of users. A representative end-use model that handles load shapes is the hourly electric load model (HELM), which assumes an annual load curve and predicts a maximum system load. The present discussion estimates demand for consumer air conditioners on the day of maximum summer load in a reference year, the load on the maximum load day in a forecast year, and the weather sensitivity of loads. 5 refs., 5 figs.
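
    The multiplication described above (device unit requirement x retention rate x number of users, summed over end uses and hours) can be shown in a toy HELM-style calculation; all saturations and load shapes below are invented for illustration:

```python
# Hourly system load built from per-device load shapes, saturations and
# customer counts; kW x households / 1000 = MW. Everything illustrative.
import numpy as np

households = 1_000_000
end_uses = {  # end use -> (saturation, per-device 24-hour shape in kW)
    "air_conditioner": (0.7, 0.8 * np.sin(np.linspace(0, np.pi, 24)) ** 2),
    "refrigerator":    (1.0, np.full(24, 0.1)),
}

system_load_mw = sum(sat * shape * households / 1000.0
                     for sat, shape in end_uses.values())
print(system_load_mw.max())   # contribution to the system peak, MW
```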

  10. Disaggregating Qualitative Data from Asian American College Students in Campus Racial Climate Research and Assessment

    Science.gov (United States)

    Museus, Samuel D.; Truong, Kimberly A.

    2009-01-01

    This article highlights the utility of disaggregating qualitative research and assessment data on Asian American college students. Given the complexity of and diversity within the Asian American population, scholars have begun to underscore the importance of disaggregating data in the empirical examination of Asian Americans, but most of those…

  11. Evolving markets and new end use gas technologies

    International Nuclear Information System (INIS)

    Overall, J.

    1995-01-01

    End use gas technologies and products for residential, commercial, and industrial uses were reviewed, and the markets and market drivers needed for end use technologies in the different types of markets were summarized. The range of end use technologies included gas fireplaces, combination heating/water heating systems, integrated appliances such as heating/ventilation units, gas cooling, and space cooling for commercial markets. The present and future status of each product market was discussed. Growing markets such as cogeneration and gas turbine technology also received attention, along with regulatory and environmental concerns. The need to be knowledgeable about current market drivers and to introduce new ones, and the evolution of technology, were emphasized as means by which the industry will continue to be able to exert a decisive influence on the direction of these markets.

  12. Disaggregate energy consumption and industrial output in the United States

    International Nuclear Information System (INIS)

    Ewing, Bradley T.; Sari, Ramazan; Soytas, Ugur

    2007-01-01

    This paper investigates the effect of disaggregate energy consumption on industrial output in the United States. Most of the related research utilizes aggregate data which may not indicate the relative strength or explanatory power of various energy inputs on output. We use monthly data and employ the generalized variance decomposition approach to assess the relative impacts of energy and employment on real output. Our results suggest that unexpected shocks to coal, natural gas and fossil fuel energy sources have the highest impacts on the variation of output, while several renewable sources exhibit considerable explanatory power as well. However, none of the energy sources explain more of the forecast error variance of industrial output than employment

  13. Basis for selecting soft wheat for end-use quality

    Science.gov (United States)

    Within the United States, end-use quality of soft wheat (Triticum aestivum L.) is determined by several genetically controlled components: milling yield, flour particle size, and baking characteristics related to flour water absorption caused by glutenin macropolymer, non-starch polysaccharides, and...

  14. Environmental benefits of electrification and end-use efficiency

    International Nuclear Information System (INIS)

    McMenamin, J.S.; Monforte, F.A.; Sioshansi, F.P.

    1997-01-01

    Significant reductions in greenhouse gases and criteria pollutants can be achieved through continued substitution of clean, efficient electrotechnologies for fossil fuel-based technologies. Continued improvements in the efficiency of electrical appliances already in use will further increase the environmental benefits of electricity. Over the last several decades, electricity use in the US has grown strongly. Over the 35-year period 1960-95, electric utility sales increased more than fourfold, from under 700 billion kWh (BkWh) to almost 3,000 BkWh. This increase was due, in part, to a growing economy, but it also reflects the increasingly broad application of electricity to provide comfort, convenience, entertainment, safety and productivity. Reflecting this expanding role, energy used for electricity generation by utilities has nearly doubled, increasing from 19 percent of US primary energy use in 1960 to about 36 percent in 1995. Environmental factors have also provided support to policies that promote improved end-use efficiency. More efficient end-use equipment allows consumers to obtain the same level of end-use services with less electricity. Reduced electricity consumption levels imply reduced generation requirements and, therefore, lower levels of emissions associated with generation. Beginning in the mid-1970s, and stimulated by abrupt increases in fossil fuel prices, both government and utility policies began to emphasize end-use efficiency

  15. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  16. Electricity End Uses, Energy Efficiency, and Distributed Energy Resources Baseline

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wei, Max [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morrow, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deason, Jeff [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schiller, Steven R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Leventis, Greg [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Smith, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Leow, Woei Ling [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Levin, Todd [Argonne National Lab. (ANL), Argonne, IL (United States); Plotkin, Steven [Argonne National Lab. (ANL), Argonne, IL (United States); Zhou, Yan [Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2017-01-01

    This report was developed by a team of analysts at Lawrence Berkeley National Laboratory, with Argonne National Laboratory contributing the transportation section, and is a DOE EPSA product and part of a series of “baseline” reports intended to inform the second installment of the Quadrennial Energy Review (QER 1.2). QER 1.2 provides a comprehensive review of the nation’s electricity system, covering the current state and key trends related to the electricity system, including generation, transmission, distribution, grid operations and planning, and end use. The baseline reports provide an overview of elements of the electricity system. This report focuses on end uses, electricity consumption, electric energy efficiency, distributed energy resources (DERs) (such as demand response, distributed generation, and distributed storage), and evaluation, measurement, and verification (EM&V) methods for energy efficiency and DERs.

  17. End-Use Efficiency to Lower Carbon Emissions

    International Nuclear Information System (INIS)

    Marnay, Chris; Osborn, Julie; Webber, Carrie

    2001-01-01

    Compelling evidence demonstrating the warming trend in global temperatures and the mechanism behind it, namely the anthropogenic emissions of carbon dioxide and other greenhouse gases (GHG), has spurred an international effort to reduce emissions of these gases. Despite improving efficiency of the U.S. economy in terms of energy cost per dollar of GDP since the signing of the Kyoto Protocol, energy consumption and carbon emissions are continuing to rise as the economy expands. This growing gap further emphasizes the importance of improving energy use efficiency as a component in the U.S. climate change mitigation program. The end-use efficiency research activities at Berkeley Lab incorporate the residential, commercial, industrial, and transportation sectors. This paper focuses on two successful U.S. programs that address end-use efficiency in residential and commercial demand: energy efficiency performance standards established by the Department of Energy (DOE) and the Environmental Protection Agency's (EPA's) ENERGY STAR® program

  18. Functional properties of Mozzarella cheese for its end use application.

    Science.gov (United States)

    Jana, A. H.; Tagalpallewar, Govind P

    2017-11-01

    Cheese is an extremely versatile food product that has a wide range of flavors, textures and end uses. The vast majority of cheese is eaten not by itself, but as part of another food. As an ingredient in foods, cheese is required to exhibit functional characteristics in the raw as well as cooked forms. Melting, stretching, free-oil formation, elasticity and browning are the functional properties considered to be significant for Mozzarella cheese. When a cheese is destined for an end use, some of its unique characteristics play a significant role in the product's acceptability. For instance, the pH of cheese determines the cheese structure, which in turn decides the cheese's shredability and meltability. The residual galactose content in the cheese mass determines the propensity of cheese to brown during baking. Development of 'tailor-made cheese' involves manipulating such unique traits of cheese in order to obtain the desired characteristics for its end use application, suiting varied consumers' whims and wishes. This comprehensive review paper provides insight for the cheese maker regarding the factors determining the functional properties of cheese, and also for pizza manufacturers in deciding the age of cheese to be used so that it will perform well in baking applications.

  19. Uncovering the end uses of the rare earth elements

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xiaoyue, E-mail: xiaoyue.du@empa.ch [Swiss Federal Laboratories for Materials Science and Technology (EMPA), Lerchenfeldstrasse 5, 9014 St. Gallen (Switzerland); Yale University, 195 Prospect Street, New Haven CT 06511 (United States); Graedel, T.E. [Yale University, 195 Prospect Street, New Haven CT 06511 (United States)

    2013-09-01

    The rare earth elements (REE) are a group of fifteen elements with unique properties that make them indispensable for a wide variety of emerging and established technologies. However, quantitative knowledge of REE remains sparse, despite the current heightened interest in the future availability of the resources. Mining is heavily concentrated in China, whose monopoly position and potential restriction of exports render primary supply vulnerable to short-term disruption. We have drawn upon the published literature and unpublished materials in different languages to derive the first quantitative annual domestic production by end use of individual rare earth elements from 1995 to 2007. The information is illustrated in Sankey diagrams for the years 1995 and 2007; other years are available in the supporting information. Comparing 1995 and 2007, the production of the rare earth elements in China, Japan, and the US changed dramatically in quantity and structure. The information can provide a solid foundation for industries, academic institutions and governments to make decisions and develop strategies. - Highlights: • We have derived the first quantitative end-use information for the rare earths (REE). • The results are for individual REE from 1995 to 2007. • The end uses of REE in China, Japan, and the US changed dramatically in quantity and structure. • This information can provide a solid foundation for decision and strategy making.

  20. A Peltier-based freeze-thaw device for meteorite disaggregation

    Science.gov (United States)

    Ogliore, R. C.

    2018-02-01

    A Peltier-based freeze-thaw device for the disaggregation of meteorite or other rock samples is described. Meteorite samples are kept in six water-filled cavities inside a thin-walled Al block. This block is held between two Peltier coolers that are automatically cycled between cooling and warming. One cycle takes approximately 20 min. The device can run unattended for months, allowing for ~10 000 freeze-thaw cycles that will disaggregate meteorites even with relatively low porosity. This device was used to disaggregate ordinary and carbonaceous chondrite regolith breccia meteorites to search for micrometeoroid impact craters.

  1. Characteristics and Performance of Existing Load Disaggregation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Butner, Ryan S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, He [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-04-10

    Non-intrusive load monitoring (NILM) or non-intrusive appliance load monitoring (NIALM) is an analytic approach to disaggregate building loads based on a single metering point. This advanced load monitoring and disaggregation technique has the potential to provide an alternative to high-priced traditional sub-metering and to enable innovative approaches for energy conservation, energy efficiency, and demand response. However, since the inception of the concept in the 1980s, evaluations of these technologies have focused on reporting performance accuracy without investigating sources of inaccuracy or fully understanding and articulating the meaning of the metrics used to quantify performance. As a result, the market for, as well as advances in, these technologies has been slow to mature. To improve the market for these NILM technologies, there has to be confidence that deployment will lead to benefits. In reality, not every end-user and application that this technology may enable requires the highest levels of performance accuracy to produce benefits. Also, there are other important characteristics that need to be considered, which may affect the appeal of NILM products to certain market targets (i.e., residential and commercial building consumers) and the suitability for particular applications. These characteristics include the following: 1) ease of use, the level of expertise/bandwidth required to properly use the product; 2) ease of installation, the level of expertise required to install along with hardware needs that impact product cost; and 3) ability to inform decisions and actions, whether the energy outputs received by end-users (e.g., third party applications, residential users, building operators, etc.) empower decisions and actions to be taken at the time frames required for certain applications. Therefore, stakeholders, researchers, and other interested parties should be kept abreast of the evolving capabilities, uses, and characteristics

  2. A disaggregate model to predict the intercity travel demand

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, S.

    1988-01-01

    This study was directed towards developing disaggregate models to predict intercity travel demand in Canada. A conceptual framework for intercity travel behavior was proposed; under this framework, a nested multinomial logit model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and for determining the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to as the log-sum of the denominator in the literature. Results of the study indicated that the structure used in this study could be applied to intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate collection of useful data for intercity travel prediction and policy analysis.
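
    For readers unfamiliar with the nested structure mentioned above, the following sketch shows how the inclusive value (log-sum) links the lower-level mode choice to the upper-level choice. All utilities and the nesting coefficient are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical systematic utilities of the four modes within the "travel" nest.
    V = {"auto": 1.2, "bus": 0.3, "rail": 0.6, "air": 0.9}
    lam = 0.7  # nesting (log-sum) coefficient, 0 < lam <= 1

    v = np.array(list(V.values()))

    # Lower level: conditional mode-choice probabilities within the nest.
    p_mode = np.exp(v / lam) / np.exp(v / lam).sum()

    # Inclusive value: the "log-sum of the denominator" that links the two levels.
    iv = np.log(np.exp(v / lam).sum())

    # Upper level: choice between travelling (utility includes lam * IV)
    # and not travelling (utility normalized to 0).
    V_travel = 0.5 + lam * iv
    p_travel = np.exp(V_travel) / (np.exp(V_travel) + 1)

    print(dict(zip(V, p_mode.round(3))), round(iv, 3), round(p_travel, 3))
    ```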

  3. Silicon Photonics towards Disaggregation of Resources in Data Centers

    Directory of Open Access Journals (Sweden)

    Miltiadis Moralis-Pegios

    2018-01-01

    In this paper, we demonstrate two subsystems based on Silicon Photonics, towards meeting the network requirements imposed by disaggregation of resources in Data Centers. The first one utilizes a 4 × 4 Silicon photonics switching matrix, employing Mach-Zehnder Interferometers (MZIs) with Electro-Optical phase shifters, directly controlled by a high-speed Field Programmable Gate Array (FPGA) board for the successful implementation of a Bloom-Filter (BF) label-forwarding scheme. The FPGA is responsible for extracting the BF label from the incoming optical packets, carrying out the BF-based forwarding function, determining the appropriate switching state and generating the corresponding control signals towards conveying incoming packets to the desired output port of the matrix. The BF-label-based packet forwarding scheme allows rapid reconfiguration of the optical switch, while at the same time it reduces the memory requirements of the node’s lookup table. Successful operation for 10 Gb/s data packets is reported for a 1 × 4 routing layout. The second subsystem utilizes three integrated spiral waveguides, with a record-high 2.6 ns/mm² delay-versus-footprint efficiency, along with two Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) wavelength converters, to construct a variable optical buffer and a Time Slot Interchange module. Error-free on-chip variable-delay buffering from 6.5 ns up to 17.2 ns and successful timeslot interchanging for 10 Gb/s optical packets are presented.
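
    As a rough illustration of Bloom-filter label forwarding (not the authors' FPGA implementation), the sketch below encodes each output port's reachable destinations in a small Bloom filter and tests an incoming packet label against every port; the filter size, hash construction and labels are arbitrary assumptions.

    ```python
    import hashlib

    M, K = 64, 3  # assumed filter size in bits and number of hash functions

    def hashes(label: str):
        # Derive K bit positions from a cryptographic digest (illustrative choice).
        digest = hashlib.sha256(label.encode()).digest()
        return [int.from_bytes(digest[2*i:2*i+2], "big") % M for i in range(K)]

    def make_filter(labels):
        bits = 0
        for lab in labels:
            for pos in hashes(lab):
                bits |= 1 << pos
        return bits

    def matches(bf: int, label: str) -> bool:
        return all(bf >> pos & 1 for pos in hashes(label))

    # Hypothetical 1x4 routing layout: each output port serves some destinations.
    ports = {p: make_filter(dests) for p, dests in
             {0: ["rack-a"], 1: ["rack-b"], 2: ["rack-c"], 3: ["rack-d"]}.items()}

    packet_label = "rack-c"
    out = [p for p, bf in ports.items() if matches(bf, packet_label)]
    print("forward to port(s):", out)  # false positives are possible by design
    ```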

  4. A Replication of "Using self-esteem to disaggregate psychopathy, narcissism, and aggression" (2013)

    Directory of Open Access Journals (Sweden)

    Durand, Guillaume

    2016-09-01

    The present study is a replication of Falkenbach, Howe, and Falki (2013). Using self-esteem to disaggregate psychopathy, narcissism, and aggression. Personality and Individual Differences, 54(7), 815-820.

  5. Disaggregation of sectors in Social Accounting Matrices using a customized Wolsky method

    OpenAIRE

    BARRERA-LOZANO Margarita; MAINAR CAUSAPÉ ALFREDO; VALLÉS FERRER José

    2014-01-01

    The aim of this work is to enable the implementation of disaggregation processes for specific and homogeneous sectors in Social Accounting Matrices (SAMs), while taking into account the difficulties in data collection from these types of sectors. The method proposed is based on the Wolsky technique, customized for the disaggregation of Social Accounting Matrices, within the current-facilities framework. The Spanish Social Accounting Matrix for 2008 is used as a benchmark for the analysis, and...

  6. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    Science.gov (United States)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time necessary to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to implement production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit-maximizing economic variables, which must consider the legal, political, and geological aspects of a project. This development comprises a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single-product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves which presented a more comprehensive representation of reserve-to-production ratio values than the inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production. This calculated time provided a measure of the discounted cost and a calculation of net present value for developing new reserves. This study concluded that the theoretical model developed by this research may be used to provide a predictive equation for each major oil and gas state, so that a net present value to undiscounted net cash flow ratio might be calculated in order to establish an investment signal for profit maximizers. This equation inferred how production decisions were influenced by exogenous factors, such as price, and how policies performed, leading to recommendations regarding effective policies and prudent planning.
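
    The bookkeeping behind a disaggregated reserve-to-production ratio, together with the net-present-value-to-undiscounted-cash-flow signal mentioned at the end of the abstract, can be sketched as follows; every figure is hypothetical.

    ```python
    # Hypothetical reserve ledger: additions and production by year.
    years      = [2019, 2020, 2021, 2022]
    additions  = [120.0, 40.0, 60.0, 30.0]   # new proved reserves (MMbbl)
    production = [ 50.0, 55.0, 52.0, 48.0]   # annual production (MMbbl)

    reserves = 0.0
    for yr, add, prod in zip(years, additions, production):
        reserves += add - prod
        print(f"{yr}: R/P = {reserves / prod:.2f} years")

    # Investment signal: NPV of a development's cash flows relative to the
    # undiscounted total. Values near 1 mean little penalty from discounting/delay.
    cash_flows = [-80.0, 30.0, 30.0, 30.0, 30.0]   # year 0 = development cost
    r = 0.10                                        # assumed discount rate
    npv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    ratio = npv / sum(cash_flows)
    print(f"NPV/undiscounted ratio = {ratio:.2f}")
    ```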

  7. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    Science.gov (United States)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-Daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation Instrument, in combination with reported crop area statistics, were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI-images were used to stratify the study area into 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI-profile showing changes in vegetation greenness over time, which is assumed to relate to the types of land cover and land use present. Secondly, the areas of NDVI-units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI-profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segment data. Variability within the delineated NDVI-units was the main cause of the relatively low values; the units are internally heterogeneous. Variability between units is properly captured. The maps must accordingly be considered "small scale maps". These maps can be used to monitor crop performance of
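
    The disaggregation step amounts to regressing reported crop areas per municipality on the areas of the NDVI units within each municipality. A non-negative least-squares sketch with invented numbers:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: municipalities; columns: area (ha) of each NDVI map unit inside them.
    unit_areas = np.array([
        [400.,  100.,  50.],
        [150.,  300., 120.],
        [ 60.,  250., 500.],
        [300.,   80., 220.],
    ])
    # Reported wheat area per municipality (ha), hypothetical statistics.
    wheat_area = np.array([230., 200., 180., 190.])

    # Fraction of each NDVI unit planted to wheat (constrained >= 0).
    frac, residual = nnls(unit_areas, wheat_area)
    print("wheat fraction per NDVI unit:", frac.round(3))

    # Disaggregated map: implied wheat area per unit within each municipality.
    print((unit_areas * frac).round(1))
    ```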

  8. Erosion of atmospherically deposited radionuclides as affected by soil disaggregation mechanisms

    International Nuclear Information System (INIS)

    Claval, D.; Garcia-Sanchez, L.; Real, J.; Rouxel, R.; Mauger, S.; Sellier, L.

    2004-01-01

    The interactions of soil disaggregation with radionuclide erosion were studied under controlled conditions in the laboratory on samples from a loamy silty-sandy soil. The fate of 134Cs and 85Sr was monitored on soil aggregates and on small plots, with time resolution ranging from minutes to hours after contamination. Analytical experiments reproducing disaggregation mechanisms on aggregates showed that disaggregation controls both erosion and sorption. Compared to differential swelling, air explosion mobilized the most material by producing finer particles and increasing sorption five-fold. For all the mechanisms studied, a significant part of the contamination was still unsorbed on the aggregates after an hour. Global experiments on contaminated sloping plots submitted to artificial rainfalls showed radionuclide erosion fluctuations and their origin. Wet radionuclide deposition increased short-term erosion by 50% compared to dry deposition. A developed soil crust, when contaminated, decreased radionuclide erosion by a factor of 2 compared to other initial soil states. These erosion fluctuations were more significant for 134Cs, known to have a better affinity to the soil matrix, than for 85Sr. These findings confirm the role of disaggregation in radionuclide erosion. Our data support a conceptual model of radionuclide erosion at the small-plot scale in two steps: (1) radionuclide non-equilibrium sorption on mobile particles, resulting from simultaneous sorption and disaggregation during wet deposition, and (2) later radionuclide transport by runoff with suspended matter

  9. End-use energy analysis in the Malaysian industrial sector

    Energy Technology Data Exchange (ETDEWEB)

    Saidur, R.; Masjuki, H.H. [Department of Mechanical Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Rahim, N.A.; Mekhilef, S.; Ping, H.W. [Department of Electrical Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Jamaluddin, M.F. [Tenaga Nasional Berhad (TNB), Head Office, Bangsar, Kuala Lumpur (Malaysia)

    2009-02-15

    The industrial sector is the second largest consumer of energy in Malaysia. In this energy audit, the most important parameters collected were as follows: power rating and operation time of energy-consuming equipment/machinery; fossil fuel and other sources of energy use; production figures; peak and off-peak tariff usage behavior; and power factor. These data were then analyzed to investigate the breakdown of end-use equipment/machinery energy use, the peak and off-peak usage behavior, the power factor trend and specific energy use. The results of the energy audit showed that the highest electrical energy use was by electric motors, followed by pumps and air compressors. The specific energy use was estimated and compared with four Indonesian industries, and it was found that three Malaysian industries were more efficient than their Indonesian counterparts. The study also found that about 64% of electrical energy was used in peak hours by the industries, and the average power factor ranged from 0.88 to 0.92. The study also estimated energy and bill savings from using highly efficient electrical motors, along with the payback period. (author)
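
    An end-use breakdown of the kind such an audit produces is essentially rated power times operating hours per device class; the toy calculation below uses invented ratings, hours and counts.

    ```python
    # Hypothetical audit inventory: (rated power in kW, annual operating hours, count).
    inventory = {
        "electric motors":  (15.0, 4000,  12),
        "pumps":            ( 7.5, 3500,   8),
        "air compressors":  (22.0, 2500,   3),
        "lighting":         ( 0.04, 5000, 600),
    }

    use = {k: p * h * n for k, (p, h, n) in inventory.items()}   # kWh/year
    total = sum(use.values())
    for k, kwh in sorted(use.items(), key=lambda kv: -kv[1]):
        print(f"{k:16s} {kwh/1000:8.0f} MWh/yr  ({100*kwh/total:4.1f} %)")
    ```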

  10. End use energy consumption data base: transportation sector

    Energy Technology Data Exchange (ETDEWEB)

    Hooker, J.N.; Rose, A.B.; Greene, D.L.

    1980-02-01

    The transportation fuel and energy use estimates developed at Oak Ridge National Laboratory (ORNL) for the End Use Energy Consumption Data Base are documented. The total data base contains estimates of energy use in the United States broken down into many categories within all sectors of the economy: agriculture, mining, construction, manufacturing, commerce, the household, electric utilities, and transportation. The transportation data provided by ORNL generally cover each of the 10 years from 1967 through 1976 (occasionally 1977 and 1978), with omissions in some models. The estimates are broken down by mode of transport, fuel, region and State, sector of the economy providing transportation, and by the use to which it is put, and, in the case of automobile and bus travel, by the income of the traveler. Fuel types include natural gas, motor and aviation gasoline, residual and diesel oil, liquefied propane, liquefied butane, and naphtha- and kerosene-type jet engine fuels. Electricity use is also estimated. The mode, fuel, sector, and use categories themselves subsume one, two, or three levels of subcategories, resulting in a very detailed categorization and definitive accounting.

  11. Analysis of energy end-use efficiency policy in Spain

    International Nuclear Information System (INIS)

    Collado, Rocío Román; Díaz, María Teresa Sanz

    2017-01-01

    The implementation of saving measures and energy efficiency entails the need to evaluate achievements in terms of energy saving and spending. This paper analyses the effectiveness and economic efficiency of the energy saving measures implemented in the Energy Savings and Efficiency Action Plan (2008–2012) (EAP4+) in Spain for 2010. The lack of assessment of the energy savings achieved and the public spending allocated by the EAP4+ justifies the need for this analysis. The results show that the transport and building sectors are the most important from the energy efficiency perspective. Although they did not reach the direct energy savings that were expected, there is scope for reduction with appropriate energy measures. On the effectiveness indicator, the best performances are achieved by the public service, agriculture and fisheries, and building sectors, while in terms of energy efficiency per monetary unit, the best results are achieved by the transport, industry and agriculture sectors. The authors conclude that central, regional and local administrations need to get involved in order to obtain better estimates of the energy savings achieved and thus inform the design of future energy efficiency measures at the lowest possible cost to citizens. - Highlights: • Energy end-use efficiency policy is analysed in terms of energy savings and spending. • The energy savings achieved by some measures are not always provided. • The total energy savings achieved by the transport and building sectors are large. • Different levels of administration should get involved in estimating energy savings.

  12. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth is much higher than economic growth in Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test with asymmetric dynamic adjustment processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integration between GDP and disaggregated energy consumption is confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent until an appropriate threshold is reached. There is mean-reverting behavior once the threshold is reached, making aggregate and disaggregated energy consumption grow faster than GDP in Taiwan
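
    A stripped-down illustration of the two-regime error-correction idea (not Hansen and Seo's estimator, which involves a grid search over thresholds and an LM test): estimate the long-run relation, then fit separate adjustment speeds depending on whether the lagged equilibrium error exceeds a threshold. The data below are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    gdp = np.cumsum(rng.normal(0.2, 1.0, n))              # simulated I(1) series
    energy = 0.8 * gdp + rng.normal(0, 1.0, n)            # cointegrated with gdp

    # Step 1: long-run relation by OLS -> equilibrium error (ECT).
    beta = np.polyfit(gdp, energy, 1)[0]
    ect = energy - beta * gdp

    # Step 2: two-regime adjustment of energy, split on the lagged ECT.
    d_energy, ect_lag = np.diff(energy), ect[:-1]
    theta = np.quantile(np.abs(ect_lag), 0.7)             # assumed threshold
    for name, mask in [("inner regime", np.abs(ect_lag) <= theta),
                       ("outer regime", np.abs(ect_lag) > theta)]:
        speed = np.polyfit(ect_lag[mask], d_energy[mask], 1)[0]
        print(f"{name}: adjustment speed = {speed:.3f}")  # negative = error-correcting
    ```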

  13. Energy consumption of electricity end uses in Malaysian historic buildings

    Energy Technology Data Exchange (ETDEWEB)

    Kamaruzzaman, Syahrul N.; Edwards, Rodger E.; Zawawi, Emma M.A.

    2007-07-15

    Malaysia has inherited hundreds of heritage buildings from the past, including those from the Indian, Chinese and colonial eras, apart from the indigenous traditional buildings. These buildings have unique aesthetic value from the viewpoint of architecture, culture, art, etc. The Malaysian economic boom of the 1980s spurred the need for more buildings, especially in large cities. As a result, most of the historic buildings have been converted and transformed to commercial use. As reported by METP, Malaysian buildings' energy use is reflected in the energy consumption of the industrial and commercial sectors. Most of the buildings' energy consumption is electricity, used for running and operating the plant, lighting, lifts and escalators and other equipment in the buildings. These are amongst the factors that have resulted in the high demand for electricity in Malaysia. As outlined in the eighth Malaysia Plan, Malaysia is taking steps to conserve energy and reduce electricity consumption in buildings. This paper presents the breakdown of the major electricity end-use characteristics of historic buildings in Malaysia. The analysis was performed on annual data, allowing comparison with published benchmarks to give an indication of efficiency. Based on the data collected, a 'normalised' electricity consumption was calculated, with the intention of improving the comparison between buildings in different climatic regions or with different occupancy patterns. This is useful for identifying where the design needs further attention and helps pinpoint problem areas within a building. It is anticipated that this study gives a good indication of the electricity consumption characteristics of historic buildings in Malaysia. (Author)

  14. Probabilistic disaggregation of a spatial portfolio of exposure for natural hazard risk assessment

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2018-01-01

    In natural hazard risk assessment situations are encountered where information on the portfolio of exposure is only available in a spatially aggregated form, hindering a precise risk assessment. Recourse might be found in the spatial disaggregation of the portfolio of exposure to the resolution...... of a portfolio of buildings in two communes in Switzerland and the results are compared to sample observations. The relevance of probabilistic disaggregation uncertainty in natural hazard risk assessment is illustrated with the example of a simple flood risk assessment....

  15. Refining and end use study of coal liquids

    International Nuclear Information System (INIS)

    1998-01-01

    Two direct coal liquids were evaluated by linear programming analysis to determine their value as petroleum refinery feedstock. The first liquid, DL1, was produced from bituminous coal using the Hydrocarbon Technologies, Inc. (HTI) two-stage hydrogenation process in Proof of Concept Run No. 1, POC-1. The second liquid, DL2, was produced from sub-bituminous coal using a three-stage HTI process in Proof of Concept Run No. 2, POC-2, the third stage being a severe hydrogenation process. A linear programming (LP) model was developed which simulates a generic 150,000 barrel per day refinery in the Midwest U.S. Data from upgrading tests conducted on the coal liquids and related petroleum fractions in the pilot plant testing phase of the Refining and End Use Study were input into the model. The coal liquids were compared against a generic petroleum crude feedstock under two scenarios. In the first scenario, it was assumed that the refinery capacity and product slate/volumes were fixed. The coal liquids would be used to replace a portion of the generic crude. The LP results showed that the DL1 material had essentially the same value as the generic crude. Due to its higher quality, the DL2 material had a value approximately 0.60 $/barrel higher than the petroleum crude. In the second scenario, it was assumed that a market opportunity exists to increase production by one-third. This requires a refinery expansion. The feedstock for this scenario could be either 100% petroleum crude or a combination of petroleum crude and the direct coal liquids. Linear programming analysis showed that the capital cost of the refinery expansion was significantly less when coal liquids are utilized. In addition, the pilot plant testing showed that both of the direct coal liquids demonstrated superior catalytic cracking and naphtha reforming yields. Depending on the coal liquid flow rate, the value of the DL1 material was 2.5-4.0 $/barrel greater than the base petroleum crude, while the DL2
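
    A drastically reduced version of such a refinery LP, choosing a feedstock mix to maximize margin subject to capacity limits, is sketched below with invented margins and limits; the real model has far more structure.

    ```python
    from scipy.optimize import linprog

    # Decision variables: barrels/day of [petroleum crude, DL1, DL2].
    margin = [4.0, 4.1, 4.6]          # assumed net margin ($/bbl) of each feed
    c = [-m for m in margin]          # linprog minimizes, so negate

    A_ub = [[1, 1, 1],                # total distillation capacity (bbl/day)
            [0, 1, 1]]                # assumed limit on coal-liquid intake
    b_ub = [150_000, 30_000]
    bounds = [(0, None)] * 3

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    crude, dl1, dl2 = res.x
    print(f"crude {crude:,.0f}  DL1 {dl1:,.0f}  DL2 {dl2:,.0f}  margin ${-res.fun:,.0f}/day")
    ```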

  16. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  17. Statistical Models for Disaggregation and Reaggregation of Natural Gas Consumption Data

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Malý, Marek; Kasanický, Ivan; Pelikán, Emil

    2015-01-01

    Roč. 42, č. 5 (2015), s. 921-937 ISSN 0266-4763 Institutional support: RVO:67985807 Keywords: natural gas consumption * semiparametric model * standardized load profiles * aggregation * disaggregation * 62P30 Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.419, year: 2015

  18. An economic analysis of disaggregation of space assets: Application to GPS

    Science.gov (United States)

    Hastings, Daniel E.; La Tour, Paul A.

    2017-05-01

    New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.
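
    The learning-effect reasoning can be made concrete with Wright's learning curve, in which unit cost falls by a fixed fraction with each doubling of cumulative production. The comparison below uses invented first-unit costs, lot sizes and learning rates, not the study's calibrated values.

    ```python
    import math

    def lot_cost(first_unit_cost, n_units, learning=0.9):
        """Total cost of n units under Wright's law: cost_i = c1 * i**b."""
        b = math.log(learning, 2)     # e.g. a 90% curve gives b ~ -0.152
        return sum(first_unit_cost * i ** b for i in range(1, n_units + 1))

    # Hypothetical constellation buy: aggregated (few large buses) vs
    # disaggregated (many small single-payload buses).
    aggregated    = lot_cost(first_unit_cost=500.0, n_units=8)    # $M
    disaggregated = lot_cost(first_unit_cost=120.0, n_units=40)   # $M

    print(f"aggregated    ${aggregated:,.0f}M")
    print(f"disaggregated ${disaggregated:,.0f}M")
    # The larger lot lets the disaggregated buy ride further down the curve,
    # which is how later generations can offset the first-generation premium.
    ```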

  19. The Economic Impact of Higher Education Institutions in Ireland: Evidence from Disaggregated Input-Output Tables

    Science.gov (United States)

    Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.

    2017-01-01

    While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…

  20. Carbon emissions, energy consumption and economic growth: An aggregate and disaggregate analysis of the Indian economy

    International Nuclear Information System (INIS)

    Ahmad, Ashfaq; Zhao, Yuhuan; Shahbaz, Muhammad; Bano, Sadia; Zhang, Zhonghua; Wang, Song; Liu, Ya

    2016-01-01

    This study investigates the long and short run relationships among carbon emissions, energy consumption and economic growth in India at the aggregated and disaggregated levels during 1971–2014. The autoregressive distributed lag model is employed for the cointegration analyses and the vector error correction model is applied to determine the direction of causality between variables. Results show that a long run cointegration relationship exists and that the environmental Kuznets curve is validated at the aggregated and disaggregated levels. Furthermore, energy (total energy, gas, oil, electricity and coal) consumption has a positive relationship with carbon emissions and a feedback effect exists between economic growth and carbon emissions. Thus, energy-efficient technologies should be used in domestic production to mitigate carbon emissions at the aggregated and disaggregated levels. The present study provides policy makers with new directions in drafting comprehensive policies with lasting impacts on the economy, energy consumption and environment towards sustainable development. - Highlights: •Relationships among carbon emissions, energy consumption and economic growth are investigated. •The EKC exists at aggregated and disaggregated levels for India. •All energy resources have positive effects on carbon emissions. •Gas energy consumption is less polluting than other energy sources in India.

  1. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    Science.gov (United States)

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…

  2. Evolution of an intricate J-protein network driving protein disaggregation in eukaryotes.

    Science.gov (United States)

    Nillegoda, Nadinath B; Stank, Antonia; Malinverni, Duccio; Alberts, Niels; Szlachcic, Anna; Barducci, Alessandro; De Los Rios, Paolo; Wade, Rebecca C; Bukau, Bernd

    2017-05-15

    Hsp70 participates in a broad spectrum of protein folding processes extending from nascent chain folding to protein disaggregation. This versatility in function is achieved through a diverse family of J-protein cochaperones that select substrates for Hsp70. Substrate selection is further tuned by transient complexation between different classes of J-proteins, which expands the range of protein aggregates targeted by metazoan Hsp70 for disaggregation. We assessed the prevalence and evolutionary conservation of J-protein complexation and cooperation in disaggregation. We find the emergence of a eukaryote-specific signature for interclass complexation of canonical J-proteins. Consistently, complexes exist in yeast and human cells, but not in bacteria, and correlate with cooperative action in disaggregation in vitro. Signature alterations exclude some J-proteins from networking, which ensures correct J-protein pairing, functional network integrity and J-protein specialization. This fundamental change in J-protein biology during the prokaryote-to-eukaryote transition allows for increased fine-tuning and broadening of Hsp70 function in eukaryotes.

  3. Determining the disaggregated economic value of irrigation water in the Musi sub-basin in India

    NARCIS (Netherlands)

    Hellegers, P.J.G.J.; Davidson, B.

    2010-01-01

    In this paper the residual method is used to determine the disaggregated economic value of irrigation water used in agriculture across crops, zones and seasons. This method relies on the belief that the value of a good (its price multiplied by its quantity) is equal to the summation of the quantity of each
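
    The residual method itself is a one-line accounting identity: revenue minus the cost of all non-water inputs, with the remainder attributed to water. A toy version with an invented crop budget:

    ```python
    # Hypothetical per-hectare budget for one crop, zone and season.
    revenue     = 1800.0   # $/ha: price x yield
    input_costs = {"seed": 120.0, "fertilizer": 240.0, "labour": 400.0,
                   "machinery": 260.0, "land": 300.0, "management": 150.0}
    water_used  = 6500.0   # m3/ha of applied irrigation water

    residual = revenue - sum(input_costs.values())
    value_per_m3 = residual / water_used
    print(f"residual value of water: ${residual:.0f}/ha = ${value_per_m3:.3f}/m3")
    ```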

  4. A GIS-based disaggregate spatial watershed analysis using RADAR data

    International Nuclear Information System (INIS)

    Al-Hamdan, M.

    2002-01-01

    Hydrology is the study of water in all its forms, origins, and destinations on the earth. This paper develops a novel modeling technique using a geographic information system (GIS) to facilitate watershed hydrological routing using RADAR data. The RADAR rainfall data, segmented to 4 km by 4 km blocks, divides the watershed into several sub-basins, which are modeled independently. A case study for the GIS-based disaggregate spatial watershed analysis using RADAR data is provided for South Fork Cowikee Creek near Batesville, Alabama. All the data necessary to complete the analysis is maintained in the ArcView GIS software. This paper concludes that the GIS-based disaggregate spatial watershed analysis using RADAR data is a viable method to calculate hydrological routing for large watersheds. (author)

  5. Technological shape and size: A disaggregated perspective on sectoral innovation systems in renewable electrification pathways

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Gregersen, Cecilia; Lema, Rasmus

    2018-01-01

    The sectoral innovation system perspective has been developed as an analytical framework to analyse and understand innovation dynamics within and across various sectors. Most of the research conducted on sectoral innovation systems has focused on an aggregate-level analysis of entire sectors. This paper argues that a disaggregated (sub-sectoral) focus is more suited to policy-oriented work on the development and diffusion of renewable energy, particularly in countries with rapidly developing energy systems and open technology choices. It focuses on size, distinguishing between small-scale (mini...... This has important analytical implications because the disaggregated perspective allows us to identify trajectories that cut across conventionally defined core technologies. This is important for ongoing discussions of electrification pathways in developing countries. We conclude the paper by distilling......

  6. Daily disaggregation of simulated monthly flows using different rainfall datasets in southern Africa

    Directory of Open Access Journals (Sweden)

    D.A. Hughes

    2015-09-01

    New hydrological insights for the region: There are substantial regional differences in the success of the monthly hydrological model, which inevitably affects the success of the daily disaggregation results. There are also regional differences in the success of using global rainfall datasets (Climatic Research Unit (CRU) datasets for monthly, National Oceanic and Atmospheric Administration African Rainfall Climatology, version 2 (ARC2) satellite data for daily). The overall conclusion is that the disaggregation method presents a parsimonious approach to generating daily flow simulations from existing monthly simulations and that these daily flows are likely to be useful for some purposes (e.g. water quality modelling), but less so for others (e.g. peak flow analysis).
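
    In its simplest form, such a disaggregation distributes each simulated monthly volume over the days of the month with weights derived from daily rainfall. The sketch below shows that minimal idea with invented numbers, not the authors' full method.

    ```python
    import numpy as np

    monthly_flow_mm = 90.0                     # simulated flow volume for one month
    daily_rain_mm = np.array([0, 0, 12, 30, 5, 0, 0, 18, 4, 0] + [0] * 20)  # 30 days

    # Build daily weights from rainfall, floored so dry days still carry baseflow.
    weights = daily_rain_mm + 1.0              # assumed +1 mm floor for baseflow
    weights = weights / weights.sum()

    daily_flow_mm = monthly_flow_mm * weights
    print(daily_flow_mm.round(2), daily_flow_mm.sum())  # sums back to 90.0
    ```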

  7. New Insight into the Finance-Energy Nexus: Disaggregated Evidence from Turkish Sectors

    Directory of Open Access Journals (Sweden)

    Mert Topcu

    2017-01-01

    As the reshaped energy economics literature has adopted new variables in the energy demand function, the number of papers looking into the relationship between financial development and energy consumption at the aggregate level has been increasing over the last few years. This paper, however, proposes a new framework using disaggregated data and investigates the nexus between financial development and sectoral energy consumption in Turkey. To this end, panel time series regression and causality techniques are adopted over the period 1989–2011. Empirical results confirm that financial development does have a significant impact on energy consumption, even with disaggregated data. It is also shown that the magnitude of the effect is larger in energy-intensive industries than in less energy-intensive ones.

  8. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved
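
    The population-based disaggregation being validated here is proportional allocation: each grid cell receives the national total scaled by its share of national population. A minimal sketch with made-up figures:

    ```python
    import numpy as np

    national_co2 = 5000.0                      # national emissions (Tg CO2), hypothetical
    population = np.array([[2.0, 0.5, 0.1],
                           [8.0, 1.0, 0.0],
                           [0.4, 3.0, 5.0]])   # people (millions) per one-degree cell

    cell_emissions = national_co2 * population / population.sum()
    print(cell_emissions.round(1))             # Tg CO2 per cell, sums to 5000
    ```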

  9. Disaggregated Energy Consumption and Sectoral Outputs in Thailand: ARDL Bound Testing Approach

    OpenAIRE

    Thurai Murugan Nathan; Venus Khim-Sen Liew; Wing-Keung Wong

    2016-01-01

    From an economic perspective, energy-output relationship studies have become increasingly popular in recent times, partly fuelled by a need to understand the effect of energy on production outputs rather than overall GDP. This study dealt with disaggregated energy consumption and outputs of some major economic sectors in Thailand. ARDL bound testing approach was employed to examine the co-integration relationship. The Granger causality test of the aforementioned ARDL framework was done to inv...

  10. POVERTY AND CALORIE DEPRIVATION ACROSS SOCIO-ECONOMIC GROUPS IN RURAL INDIA: A DISAGGREGATED ANALYSIS

    OpenAIRE

    Gupta, Abha; Mishra, Deepak K.

    2013-01-01

    This paper examines the linkages between calorie deprivation and poverty in rural India at a disaggregated level. It aims to explore the trends and patterns in levels of nutrient intake across social and economic groups. A spatial analysis at the state and NSS-region level unravels the spatial distribution of calorie deprivation in rural India. The gap between the incidence of poverty and calorie deprivation has also been investigated. The paper also estimates the factors influencing calorie depri...

  11. Employment in Disequilibrium: a Disaggregated Approach on a Panel of French Firms

    OpenAIRE

    Brigitte Dormont

    1989-01-01

    The purpose of this paper is to understand disequilibrium phenomena at a disaggregated level. Using data on French firms, we carry out the estimation of a labor demand model with two regimes, corresponding to the Keynesian and classical hypotheses. The results enable us to characterize classical firms as particularly good performers: they have more rapid growth, younger productive plant, and higher productivity gains and profitability. Classical firms stand out, with respect to their...

  12. The Long-Run Macroeconomic Effects of Aid and Disaggregated Aid in Ethiopia

    DEFF Research Database (Denmark)

    Gebregziabher, Fiseha Haile

    2014-01-01

    positively, whereas it is negatively associated with government consumption. Our results concerning the impacts of disaggregated aid stand in stark contrast to earlier work. Bilateral aid increases investment and GDP and is negatively associated with government consumption, whereas multilateral aid is only...... positively associated with imports. Grants contribute to GDP, investment and imports, whereas loans affect none of the variables. Finally, there is evidence to suggest that multilateral aid and loans have been disbursed in a procyclical fashion...

  13. Equity in health care financing in Palestine: the value-added of the disaggregate approach.

    Science.gov (United States)

    Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul

    2008-06-01

    This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution, using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, significantly reveals a progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance schemes to enhance its capacity in limiting inequalities in health care financing in the Occupied Palestinian Territory.
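
    A sketch of the kind of progressivity calculation involved: a concentration-index (Kakwani-style) aggregate summary alongside decile-level payment-to-income shares that support the disaggregate, dominance-style reading. All data below are simulated, not the survey's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    income = np.sort(rng.lognormal(mean=9.0, sigma=0.7, size=1000))
    oop = 0.08 * income**0.8 * rng.lognormal(0, 0.3, 1000)   # out-of-pocket payments

    def concentration_index(x, rank_var):
        # CI = 2 * cov(x, fractional rank by rank_var) / mean(x)
        order = np.argsort(rank_var)
        x = x[order]
        frac_rank = (np.arange(len(x)) + 0.5) / len(x)
        return 2 * np.cov(x, frac_rank, bias=True)[0, 1] / x.mean()

    ci_payments = concentration_index(oop, income)   # >0: the rich pay a larger share
    gini_income = concentration_index(income, income)
    kakwani = ci_payments - gini_income              # <0 indicates regressivity

    # Disaggregate view: payment share vs income share per income decile.
    deciles = np.array_split(np.arange(1000), 10)
    shares = [(oop[d].sum() / oop.sum(), income[d].sum() / income.sum()) for d in deciles]
    print(round(kakwani, 3), [round(p / y, 2) for p, y in shares])
    ```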

  14. Load Disaggregation via Pattern Recognition: A Feasibility Study of a Novel Method in Residential Building

    Directory of Open Access Journals (Sweden)

    Younghoon Kwak

    2018-04-01

    In response to the need to improve energy-saving processes in older buildings, especially residential ones, this paper describes the potential of a novel method of disaggregating loads in light of the load patterns of household appliances determined in residential buildings. Experiments were designed to be applicable to general residential buildings, and four types of commonly used appliances were selected to verify the method. The method assumes that loads are disaggregated from measurements at a single primary meter. Following the metering of household appliances and an analysis of the usage patterns of each type, values of electric current were entered into a Hidden Markov Model (HMM) to formulate predictions. Thereafter, the HMM was run repeatedly until the predicted data were close to the measured data, while errors between the predicted and measured data were evaluated against a tolerance. When the method was examined for 4 days, matching rates for the load disaggregation outcomes of the household appliances (i.e., laptop, refrigerator, TV, and microwave) were 0.994, 0.992, 0.982, and 0.988, respectively. The proposed method can provide insights into how and where energy is consumed within such buildings. As a result, effective and systematic energy-saving measures can be derived even in buildings in which monitoring sensors and measurement equipment are not installed.
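
    A bare-bones version of HMM-based disaggregation for a single two-state appliance (off/on, with the observed current treated as a Gaussian emission) is sketched below using Viterbi decoding. The transition and emission parameters are invented; the paper's actual model covers four appliances.

    ```python
    import numpy as np

    # Two hidden states for one appliance: 0 = off, 1 = on.
    trans = np.array([[0.95, 0.05],
                      [0.10, 0.90]])            # assumed transition probabilities
    means, sigma = np.array([0.2, 2.5]), 0.4    # assumed current draw (A) per state

    def viterbi(obs):
        """Most likely off/on sequence given observed current (Gaussian emissions)."""
        log_em = -0.5 * ((obs[:, None] - means) / sigma) ** 2   # up to a constant
        log_tr = np.log(trans)
        score = log_em[0] + np.log([0.5, 0.5])
        back = np.zeros((len(obs), 2), dtype=int)
        for t in range(1, len(obs)):
            cand = score[:, None] + log_tr      # cand[i, j]: from state i to j
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + log_em[t]
        path = [int(score.argmax())]
        for t in range(len(obs) - 1, 0, -1):
            path.append(back[t][path[-1]])
        return path[::-1]

    measured = np.array([0.2, 0.3, 2.4, 2.6, 2.5, 0.4, 0.1, 2.7, 2.3, 0.2])
    states = viterbi(measured)
    print(states)                               # e.g. [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
    print("mean decoded appliance current (A):", round(float(np.mean(means[states])), 3))
    ```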

  15. 7 CFR 782.12 - Filing FSA-750, End-Use Certificate for Wheat.

    Science.gov (United States)

    2010-01-01

    § 782.12 Filing FSA-750, End-Use Certificate for Wheat. (a) Each entity that imports wheat originating in Canada shall, for each entry into the U.S., obtain form FSA-750, End-Use Certificate for Wheat...

  16. 15 CFR 745.2 - End-Use Certificate reporting requirements under the Chemical Weapons Convention.

    Science.gov (United States)

    2010-01-01

    § 745.2 End-Use Certificate reporting requirements under the Chemical Weapons Convention. Note: The End-Use Certificate requirement of...

  17. Tolerance analysis of null lenses using an end-use system performance criterion

    Science.gov (United States)

    Rodgers, J. Michael

    2000-07-01

    An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null-lens tolerances based on their effect on any performance parameter of the end-use system.

  18. Spatial and temporal disaggregation of transport-related carbon dioxide emissions in Bogota - Colombia

    Science.gov (United States)

    Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Rojas, N. Y.

    2011-12-01

    As a result of rapid urbanization during the last 60 years, 75% of the Colombian population now lives in cities. Urban areas are net sources of greenhouse gases (GHG) and contribute significantly to national GHG emission inventories. The development of scientifically sound GHG mitigation strategies requires accurate GHG source and sink estimations. Disaggregated inventories are effective mitigation decision-making tools. The disaggregation process renders detailed information on the distribution of emissions by transport mode, and the resulting a priori emissions map allows for optimal definition of sites for GHG flux monitoring, either by eddy covariance or inverse modeling techniques. Fossil fuel use in transportation is a major source of carbon dioxide (CO2) in Bogota. We present estimates of CO2 emissions from road traffic in Bogota using the Intergovernmental Panel on Climate Change (IPCC) reference method, and a spatial and temporal disaggregation method. Aggregated CO2 emissions from mobile sources were estimated from monthly and annual fossil fuel (gasoline, diesel and compressed natural gas - CNG) consumption statistics, and estimations of bio-ethanol and bio-diesel use. Although bio-fuel CO2 emissions are considered balanced over annual (or multi-annual) agricultural cycles, we included them since CO2 generated by their combustion would be measurable by a net flux monitoring system. For the disaggregation methodology, we used information on Bogota's road network classification, mean travel speed and trip length for each vehicle category and road type. The CO2 emission factors were taken from recent in-road measurements for gasoline- and CNG-powered vehicles and also estimated from COPERT IV. We estimated emission factors for diesel from surveys on average trip length and fuel consumption. Using IPCC's reference method, we estimate Bogota's total transport-related CO2 emissions for 2008 (reference year) at 4.8 Tg CO2. The disaggregation method estimation is

  19. Development of a Disaggregation Framework toward the Estimation of Subdaily Reference Evapotranspiration: 2- Estimation of Subdaily Reference Evapotranspiration Using Disaggregated Weather Data

    Directory of Open Access Journals (Sweden)

    F. Parchami Araghi

    2016-09-01

    Full Text Available Introduction: Subdaily estimates of reference evapotranspiration (ETo) are needed in many applications such as dynamic agro-hydrological modeling. However, in many regions, the lack of subdaily weather data availability has hampered efforts to quantify subdaily ETo. In the first presented paper, a physically based framework was developed to disaggregate the daily weather data needed for estimation of subdaily ETo, including air temperature, wind speed, dew point, actual vapour pressure, relative humidity, and solar radiation. The main purpose of this study was to estimate subdaily ETo using disaggregated daily data derived from the disaggregation framework developed in the first presented paper. Materials and Methods: Subdaily ETo estimates were made using the ASCE and FAO-56 Penman–Monteith models (ASCE-PM and FAO56-PM, respectively) and subdaily weather data derived from the developed daily-to-subdaily weather data disaggregation framework. To this end, long-term daily weather data obtained from the Abadan (59 years) and Ahvaz (50 years) synoptic weather stations were collected. Sensitivity analysis of the Penman–Monteith model to the different meteorological variables (including daily air temperature, wind speed at 2 m height, actual vapor pressure, and solar radiation) was carried out using partial derivatives of the Penman–Monteith equation. The capability of the two models for retrieving the daily ETo was evaluated using the root mean square error RMSE (mm), the mean error ME (mm), the mean absolute error MAE (mm), the Pearson correlation coefficient r (-), and the Nash–Sutcliffe model efficiency coefficient EF (-). Different contributions to the overall error were decomposed using a regression-based method. Results and Discussion: The results of the sensitivity analysis showed that the daily air temperature and the actual vapor pressure are the most significant meteorological variables affecting the ETo estimates. In contrast, low sensitivity
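
    The goodness-of-fit measures named above are standard, and a minimal NumPy sketch (with invented sample values, not the study's data) shows how each is computed:

```python
import numpy as np

def evaluate(obs, sim):
    """Goodness-of-fit metrics named in the abstract (textbook formulas)."""
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))                  # root mean square error
    me = np.mean(err)                                  # mean error (bias)
    mae = np.mean(np.abs(err))                         # mean absolute error
    r = np.corrcoef(obs, sim)[0, 1]                    # Pearson correlation
    ef = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    return {"RMSE": rmse, "ME": me, "MAE": mae, "r": r, "EF": ef}

# hypothetical observed vs simulated daily ETo values (mm)
obs = np.array([4.1, 5.0, 6.2, 5.8, 4.9])
sim = np.array([4.3, 4.8, 6.0, 6.1, 5.0])
print(evaluate(obs, sim))
```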

  20. Energy and Water Consumption End-Use Survey in Commercial and Industrial Sectors in Georgia

    Data.gov (United States)

    US Agency for International Development — The objective of survey was to collect statistical energy and water end-use data for commercial and industrial sectors. The survey identified volumes of energy and...

  1. Amyloid formation and disaggregation of α-synuclein and its tandem repeat (α-TR)

    International Nuclear Information System (INIS)

    Bae, Song Yi; Kim, Seulgi; Hwang, Heejin; Kim, Hyun-Kyung; Yoon, Hyun C.; Kim, Jae Ho; Lee, SangYoon; Kim, T. Doohun

    2010-01-01

    Research highlights: → Formation of α-synuclein amyloid fibrils by [BIMbF3Im]. → Disaggregation of amyloid fibrils by epigallocatechin gallate (EGCG) and baicalein. → Amyloid formation of the α-synuclein tandem repeat (α-TR). -- Abstract: The aggregation of α-synuclein is clearly related to the pathogenesis of Parkinson's disease. Therefore, a detailed understanding of the mechanism of fibril formation is highly valuable for the development of clinical treatments and diagnostic tools. Here, we have investigated the interaction of α-synuclein with ionic liquids by using several biochemical techniques including Thioflavin T assays and transmission electron microscopy (TEM). Our data show that rapid formation of α-synuclein amyloid fibrils was stimulated by 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [BIMbF3Im], and that these fibrils could be disaggregated by polyphenols such as epigallocatechin gallate (EGCG) and baicalein. Furthermore, the effect of [BIMbF3Im] on the α-synuclein tandem repeat (α-TR) in the aggregation process was studied.

  2. Spatial and temporal disaggregation of anthropogenic CO2 emissions from the City of Cape Town

    Directory of Open Access Journals (Sweden)

    Alecia Nickless

    2015-11-01

    Full Text Available This paper describes the methodology used to spatially and temporally disaggregate carbon dioxide emission estimates for the City of Cape Town, to be used for a city-scale atmospheric inversion estimating carbon dioxide fluxes. Fossil fuel emissions were broken down into emissions from road transport, domestic emissions, industrial emissions, and airport and harbour emissions. Using spatially explicit information on vehicle counts, and an hourly scaling factor, vehicle emissions estimates were obtained for the city. Domestic emissions from fossil fuel burning were estimated from household fuel usage information and spatially disaggregated population data from the 2011 national census. Fuel usage data were used to derive industrial emissions from listed activities, which included emissions from power generation, and these were distributed spatially according to the source point locations. The emissions from the Cape Town harbour and the international airport were determined from vessel and aircraft count data, respectively. For each emission type, error estimates were determined through error propagation techniques. The total fossil fuel emission field for the city was obtained by summing the spatial layers for each emission type, accumulated for the period of interest. These results will be used in a city-scale inversion study, and this method implemented in the future for a national atmospheric inversion study.
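
    The layer-summation and error-propagation step described above lends itself to a compact sketch. The grid, layer values, and fractional uncertainties below are invented; the only technique shown is that independent 1-sigma errors combine in quadrature when the spatial layers are summed:

```python
import numpy as np

# hypothetical 3x3 city grid; one layer per source category (kt CO2 per cell)
road     = np.array([[1.0, 2.0, 0.5], [0.8, 3.1, 1.2], [0.2, 0.9, 0.4]])
domestic = np.array([[0.3, 0.6, 0.2], [0.4, 1.0, 0.5], [0.1, 0.3, 0.2]])
industry = np.array([[0.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 0.0]])

# assumed 1-sigma uncertainties per layer (same shape), taken as independent
u_road, u_dom, u_ind = 0.2 * road, 0.3 * domestic, 0.1 * industry

total = road + domestic + industry           # total fossil fuel emission field
u_total = np.sqrt(u_road**2 + u_dom**2 + u_ind**2)  # quadrature propagation
print(total.sum(), u_total.max())
```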

  3. Energy consumption, carbon emissions and economic growth in Saudi Arabia: An aggregate and disaggregate analysis

    International Nuclear Information System (INIS)

    Alkhathlan, Khalid; Javid, Muhammad

    2013-01-01

    The objective of this study is to examine the relationship among economic growth, carbon emissions and energy consumption at the aggregate and disaggregate levels. For the aggregate energy consumption model, we use total energy consumption per capita and CO2 emissions per capita based on the total energy consumption. For the disaggregate analysis, we used oil, gas and electricity consumption models along with their respective CO2 emissions. The long-term income elasticities of carbon emissions in three of the four models are positive and higher than their estimated short-term income elasticities. These results suggest that carbon emissions increase with the increase in per capita income, which supports the belief that there is a monotonically increasing relationship between per capita carbon emissions and per capita income for the aggregate model and for the oil and electricity consumption models. The long- and short-term income elasticities of carbon emissions are negative for the gas consumption model. This result indicates that if the Saudi Arabian economy switched from oil to gas consumption, then an increase in per capita income would reduce carbon emissions. The results also suggest that electricity is less polluting than other sources of energy. - Highlights: • Carbon emissions increase with the increase in per capita income in Saudi Arabia. • The income elasticity of CO2 is negative for the gas consumption model. • The income elasticity of CO2 is positive for the oil consumption model. • The results suggest that electricity is less polluting than oil and gas

  4. The influence of energy consumption of China on its real GDP from aggregated and disaggregated viewpoints

    International Nuclear Information System (INIS)

    Zhang, Wei; Yang, Shuyun

    2013-01-01

    This paper investigated the causal relationship between energy consumption and gross domestic product (GDP) in China at both aggregated and disaggregated levels during the period 1978–2009, using a modified version of the Granger (1969) causality test proposed by Toda and Yamamoto (1995) within a multivariate framework. The empirical results suggested the existence of a negative bi-directional Granger causality between aggregated energy consumption and real GDP. At the disaggregated level of energy consumption, the results were more complicated. For coal, the empirical findings suggested a negative bi-directional Granger causality between coal consumption and real GDP. For oil and gas, however, the findings suggested a positive bi-directional Granger causality between oil as well as gas consumption and real GDP. Though these results supported the feedback hypothesis, the negative relationship might be attributed to production in the growing economy shifting towards less energy-intensive sectors and to excessive energy consumption in relatively unproductive sectors. The results indicated that policies reducing aggregated energy consumption and promoting energy conservation may boost China's economic growth. - Highlights: ► A negative bi-directional Granger causality exists between aggregated energy consumption and real GDP. ► The same holds for coal consumption, but not for oil and gas. ► The results partly derive from excessive energy consumption in unproductive sectors. ► Reducing aggregated energy consumption probably promotes the development of China's economy

  5. Energy consumption and economic growth: Evidence from China at both aggregated and disaggregated levels

    International Nuclear Information System (INIS)

    Yuan Jiahai; Kang Jiangang; Zhao Changhong; Hu Zhaoguang

    2008-01-01

    Using a neo-classical aggregate production model where capital, labor and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and energy use in China, at the aggregated total energy level as well as at the disaggregated levels of coal, oil and electricity consumption. Using the Johansen cointegration technique, the empirical findings indicate that long-run cointegration exists among output, labor, capital and energy use in China at the aggregated level and at all three disaggregated levels. Using a VEC specification, the short-run dynamics of the variables of interest are then tested, indicating that Granger causality runs from electricity and oil consumption to GDP, but not from coal and total energy consumption to GDP. On the other hand, short-run Granger causality runs from GDP to total energy, coal and oil consumption, but not from GDP to electricity consumption. We thus propose policy suggestions to address the energy and sustainable development dilemma in China: enhancing energy supply security and guaranteeing energy supply, especially in the short run, to provide adequate electric power supply and set up a national strategic oil reserve; enhancing energy efficiency to save energy; diversifying energy sources, energetically exploiting renewable energy and drawing up corresponding policies and measures; and finally, in the long run, transforming the development pattern and cutting reliance on resource- and energy-dependent industries
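
    Both of the studies above rest on Granger-causality testing. As a hedged illustration of the basic mechanics only (the papers use cointegration/VEC and Toda–Yamamoto refinements that are not shown here), a plain pairwise test on synthetic data can be run with statsmodels:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 200
energy = np.cumsum(rng.normal(size=T))               # synthetic energy series
gdp = 0.5 * np.roll(energy, 1) + rng.normal(size=T)  # GDP responding to lagged energy

# first-difference to reduce nonstationarity; column order matters:
# the test asks whether column 2 Granger-causes column 1
data = np.column_stack([np.diff(gdp), np.diff(energy)])
res = grangercausalitytests(data, maxlag=2, verbose=False)
f_stat, p_value, _, _ = res[1][0]["ssr_ftest"]
print(f_stat, p_value)    # small p => energy Granger-causes GDP at lag 1
```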

  6. Inventory Funding Methods on Navy Ships: NWCF vs. End-use

    Science.gov (United States)

    2013-06-01

    OPTAR = Operating Target; OSO = Other Supply Officer; POM = Pre/Post-Overseas Movement; R-Supply = Relational Supply; RoR = Reorder Review; SAC = Service... process called other supply officer (OSO) transfer. Since end-use ships own their inventory, the supply officer can choose to transfer a part being... requested by another ship at their discretion, based on their ship's anticipated requirements and their own goodwill. OSO transfers among end-use

  7. Commercial equipment loads: End-Use Load and Consumer Assessment Program (ELCAP)

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, R.G.; Williamson, M.A.; Richman, E.E.; Miller, N.E.

    1990-07-01

    The Office of Energy Resources of the Bonneville Power Administration is generally responsible for the agency's power and conservation resource planning. An associated responsibility which supports a variety of office functions is the analysis of historical trends in, and determinants of, energy consumption. The Office of Energy Resources' End-Use Research Section operates a comprehensive data collection program to provide pertinent information to support demand-side planning, load forecasting, and demand-side program development and delivery. Part of this on-going program is known as the End-Use Load and Consumer Assessment Program (ELCAP), an effort designed to collect electricity usage data through direct monitoring of end-use loads in buildings. This program is conducted for Bonneville by the Pacific Northwest Laboratory. This report provides detailed information on electricity consumption of miscellaneous equipment from the commercial portion of ELCAP. Miscellaneous equipment includes all commercial end-uses except heating, ventilating, air conditioning, and central lighting systems. Some examples of end-uses covered in this report are office equipment, computers, task lighting, refrigeration, and food preparation. Electricity consumption estimates, in kilowatt-hours per square foot per year, are provided for each end-use by building type. The following types of buildings are covered: office, retail, restaurant, grocery, warehouse, school, university, and hotel/motel. 6 refs., 35 figs., 12 tabs.

  8. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
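
    The bias-correction half of BCSD is essentially empirical quantile mapping. The sketch below (invented gamma-distributed samples, not the study's data or code) maps each model value to the observed value at the same empirical quantile:

```python
import numpy as np

def quantile_map(model, obs_ref, model_ref):
    """Map each model value to the observed value at the same quantile.

    model     : values to correct (e.g., GCM-scale precipitation)
    obs_ref   : observed reference sample
    model_ref : model sample over the same reference period
    """
    # empirical CDF position of each model value within the model climate
    q = np.searchsorted(np.sort(model_ref), model) / len(model_ref)
    q = np.clip(q, 0.0, 1.0)
    # invert the observed CDF at those quantiles
    return np.quantile(obs_ref, q)

rng = np.random.default_rng(1)
obs_ref = rng.gamma(2.0, 3.0, 1000)      # observed reference climate
model_ref = rng.gamma(2.0, 4.0, 1000)    # biased model climate (too wet)
future = rng.gamma(2.0, 4.5, 10)         # values to be corrected
print(quantile_map(future, obs_ref, model_ref))
```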

  9. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1-min time step. In the real world, however, there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
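
    The simplest of the adjusting procedures referred to above is proportional rescaling: synthetic fine-scale depths are multiplied by a common factor so that they sum exactly to the observed coarse-scale total. A minimal sketch under that assumption (invented values; the paper's scheme couples this kind of correction with a Bartlett-Lewis simulation step):

```python
import numpy as np

def proportional_adjust(fine, coarse_total):
    """Rescale synthetic fine-scale depths to match the coarse-scale total."""
    s = fine.sum()
    return fine * (coarse_total / s) if s > 0 else fine

rng = np.random.default_rng(1)
daily_total = 12.4                        # observed daily depth (mm)
hourly = rng.gamma(0.4, 2.0, 24)          # synthetic hourly depths
hourly[rng.random(24) < 0.6] = 0.0        # impose intermittency (dry hours)
adjusted = proportional_adjust(hourly, daily_total)
print(adjusted.sum())                     # 12.4 up to float precision
```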

  10. A calibrated energy end-use model for the U.S. chemical industry

    International Nuclear Information System (INIS)

    Ozalp, N.; Hyman, B.

    2005-01-01

    The chemical industry is the second largest energy user after the petroleum industry in the United States. This paper provided a model for onsite steam and power generation in the chemical industry, as well as an end-use model of the industrial gas manufacturing sector. The onsite steam and power generation model included the actual conversion efficiencies of prime movers in the sector. The energy end-use model also allocated combustible fuel and renewable energy inputs among generic end-uses, including intermediate conversions through onsite power and steam generation. The model was presented in the form of a graphical depiction of energy flows. Results indicate that 35 per cent of the energy output from boilers is used for power generation, whereas 45 per cent goes directly to end-uses and 20 per cent to waste heat tanks for recovery in the chemical industry. The end-use model for the industrial gas manufacturing sector revealed that 42 per cent of the fuel input goes to onsite steam and power generation, whereas 58 per cent goes directly to end-uses. Among the end-uses, machine drive was the biggest energy user. It was suggested that the model is applicable to all other industries and is consistent with U.S. Department of Energy data for 1998. When used in conjunction with similar models for other years, it can be used to identify changes and trends in energy utilization at the prime mover level of detail. An analysis of the economic impact of energy losses can be based on the results of this model. Cascading of waste heat from high temperature processes to low temperature processes could be integrated into the model. 20 refs., 4 tabs., 8 figs

  11. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    Science.gov (United States)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression

  12. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First, under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)
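
    For readers unfamiliar with how such elasticities fall out of a translog specification, the textbook formulas can be sketched in a few lines. The cost shares and the coefficient below are hypothetical, and the formulas shown are the standard translog-cost-function results rather than anything estimated in the paper:

```python
# Standard translog-cost-function elasticity formulas (illustrative only):
#   Allen elasticity of substitution: sigma_ij = (gamma_ij + s_i*s_j) / (s_i*s_j), i != j
#   cross-price elasticity:           eps_ij   = s_j * sigma_ij
s = {"K": 0.35, "L": 0.45, "E": 0.20}   # hypothetical cost shares (capital, labor, energy)
gamma_KE = 0.03                          # hypothetical estimated translog coefficient

sigma_KE = (gamma_KE + s["K"] * s["E"]) / (s["K"] * s["E"])
eps_KE = s["E"] * sigma_KE
print(sigma_KE, eps_KE)                  # sigma > 0 => capital and energy are substitutes
```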

  13. Quantifying the influence of environmental and water conservation attitudes on household end use water consumption.

    Science.gov (United States)

    Willis, Rachelle M; Stewart, Rodney A; Panuwatwanich, Kriengsak; Williams, Philip R; Hollingsworth, Anna L

    2011-08-01

    Within the research field of urban water demand management, understanding the link between environmental and water conservation attitudes and observed end use water consumption has been limited. Through a mixed method research design incorporating field-based smart metering technology and questionnaire surveys, this paper reveals the relationship between environmental and water conservation attitudes and a domestic water end use break down for 132 detached households located in Gold Coast city, Australia. Using confirmatory factor analysis, attitudinal factors were developed and refined; households were then categorised based on these factors through cluster analysis technique. Results indicated that residents with very positive environmental and water conservation attitudes consumed significantly less water in total and across the behaviourally influenced end uses of shower, clothes washer, irrigation and tap, than those with moderately positive attitudinal concern. The paper concluded with implications for urban water demand management planning, policy and practice.

  14. GridLAB-D Technical Support Document: Residential End-Use Module Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Zachary T.; Gowri, Krishnan; Katipamula, Srinivas

    2008-07-31

    1.0 Introduction. The residential module implements the following end uses and characteristics to simulate the power demand in a single-family home: water heater, lights, dishwasher, range, microwave, refrigerator, internal gains (plug loads), and house (heating/cooling loads). The house model considers four major heat gains/losses that contribute to the building heating/cooling load: (1) conduction through exterior walls, roof and fenestration (based on envelope UA); (2) air infiltration (based on a specified air change rate); (3) solar radiation (based on the CLTD model and TMY data); and (4) internal gains from lighting, people, equipment and other end-use objects. The Equivalent Thermal Parameter (ETP) approach is used to model the residential loads and energy consumption. The following sections describe the modeling assumptions for each of the above end uses and the details of the power demand calculations in the residential module.
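
    As a hedged illustration of the Equivalent Thermal Parameter idea, the one-node RC sketch below steps indoor temperature forward from the energy balance C dT/dt = UA(T_out - T_in) + Q. All parameter values are invented, and the GridLAB-D module itself uses a more detailed formulation:

```python
# One-node RC sketch: C * dT_in/dt = UA*(T_out - T_in) + Q_internal + Q_hvac
UA = 300.0            # envelope conductance, W/K
C = 2.0e7             # lumped thermal mass, J/K
Q_int = 500.0         # internal gains (plugs, people, lights), W
hvac_power = 5000.0   # heating capacity, W
setpoint, deadband = 20.0, 1.0
T_out = 5.0           # constant outdoor temperature, degC

dt = 60.0             # time step, s
T_in, heat_on = 15.0, False
for _ in range(240):                               # simulate 4 hours
    # thermostat with a deadband around the setpoint
    if T_in < setpoint - deadband:
        heat_on = True
    elif T_in > setpoint + deadband:
        heat_on = False
    Q = Q_int + (hvac_power if heat_on else 0.0)
    T_in += dt * (UA * (T_out - T_in) + Q) / C     # explicit Euler step
print(round(T_in, 2))
```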

  15. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

    Full Text Available Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes, continental runoff production) and to simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment scale application. The performance of grid-based hydrological models is evaluated for two large (10^4 km2) UK catchments. Simple schemes, using sub-grid average of individual land use at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scale. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.

  16. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    International Nuclear Information System (INIS)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter

    2012-01-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a posterior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
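
    Under the paper's unbiased-measurement assumption, the observed variance is the sum of the population variance and the mean measurement variance, so the population component can be recovered by subtraction. A synthetic sketch of that decomposition (invented distributions and values, not Hanford data):

```python
import numpy as np

rng = np.random.default_rng(4)

# simulate a population: true values (measurands) plus measurement noise
true_vals = rng.gamma(2.0, 1.0, 5000)           # population variability
sigma_meas = 1.5                                 # per-measurement 1-sigma
results = true_vals + rng.normal(0, sigma_meas, 5000)  # some results go negative

# unbiased-measurement assumption: observed variance is the sum of the
# population variance and the (mean) measurement variance
pop_var_est = results.var(ddof=1) - sigma_meas**2
print(pop_var_est, true_vals.var(ddof=1))        # estimates should be close
```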

  17. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of the continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.
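
    For the independent-occurrence case, the flavour of such analytical formulations is easy to reproduce: if the mixed process is X = B*Y, with B ~ Bernoulli(p) independent of a continuous depth Y, then E[X] = p*mu and Var(X) = p*sigma^2 + p(1-p)*mu^2. A Monte Carlo check of these standard identities (not the paper's full model, which also handles dependent occurrences):

```python
import numpy as np

rng = np.random.default_rng(2)
p, mu, sigma, n = 0.3, 2.0, 1.0, 200_000    # wet probability; depth mean/std

B = rng.random(n) < p                                 # discrete occurrence process
Y = rng.gamma((mu / sigma) ** 2, sigma**2 / mu, n)    # depths with mean mu, var sigma^2
X = B * Y                                             # mixed-type intermittent process

print(X.mean(), p * mu)                               # ~0.600 vs 0.6
print(X.var(), p * sigma**2 + p * (1 - p) * mu**2)    # ~1.14  vs 1.14
```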

  18. The use of continuous functions for a top-down temporal disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Kalchmayr, M.; Orthofer, R.

    1997-11-01

    This report documents a presentation at the International Specialty Conference 'The Emission Inventory: Planning for the Future', October 28-30, 1997, in Research Triangle Park, North Carolina, USA. The conference was organized by the Air and Waste Management Association (AWMA) and the U.S. Environmental Protection Agency. Emission data with high temporal resolution are necessary to analyze the relationship between emissions and their impacts. In many countries, however, emission inventories refer only to annual countrywide emission sums, because the underlying data (traffic, energy, industry statistics) are available only for statistically relevant territorial units and for longer time periods. This paper describes a method for the temporal disaggregation of yearly emission sums through the application of continuous functions which simulate emission-generating activities. The temporal patterns of the activities are derived through overlay of annual, weekly and diurnal variation functions which are based on statistical data of the relevant activities. Applied to annual emission data, these combined functions describe the dynamic patterns of emissions over the year. The main advantage of the continuous-functions method is that temporal emission patterns can be smoothed throughout the year, thus eliminating some of the major drawbacks of the traditional standardized fixed-quota system. For handling in models, the continuous functions and their parameters can be included directly and the emission quota calculated directly for a certain hour of the year. The usefulness of the method is demonstrated with NMVOC emission data for Austria. Temporally disaggregated emission data can be used as input for ozone models as well as for visualization and animation of the emission dynamics. The analysis of the temporal dynamics of emission source strengths, e.g. during critical hours for ozone generation in summer, allows the implementation of efficient emission reduction
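
    A minimal sketch of the overlay idea: three periodic, strictly positive variation functions (annual, weekly, diurnal) are multiplied, and the product is normalized so the hourly values sum to the annual total. All functional forms and the total below are invented for illustration:

```python
import numpy as np

hours = np.arange(8760)        # hour of year
doy = hours // 24              # day of year
how = hours % 24               # hour of day
dow = doy % 7                  # day of week (toy calendar)

# illustrative continuous variation functions (all positive)
annual = 1 + 0.3 * np.cos(2 * np.pi * (doy - 30) / 365)      # seasonal cycle
weekly = np.where(dow < 5, 1.1, 0.75)                        # weekday vs weekend
diurnal = (1 + 0.8 * np.exp(-((how - 8) ** 2) / 8)
             + 0.6 * np.exp(-((how - 18) ** 2) / 18))        # two rush-hour peaks

profile = annual * weekly * diurnal
annual_total = 150_000.0                      # e.g., tonnes NMVOC per year
hourly = annual_total * profile / profile.sum()
print(hourly.sum())                           # equals the annual total
```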

  19. Disaggregated regulation in network sections: The normative and positive theory; Disaggregierte Regulierung in Netzsektoren: Normative und positive Theorie

    Energy Technology Data Exchange (ETDEWEB)

    Knieps, G. [Inst. fuer Verkehrswissenschaft und Regionalpolitik, Albert-Ludwigs-Univ. Freiburg i.B. (Germany)

    2007-09-15

    The article deals with the interaction of the normative and positive theories of regulation. The parts of the network that require regulation can be localised and regulated with the help of the normative theory of monopolistic bottlenecks. Using the positive theory, the basic elements of a regulatory mandate in the sense of the disaggregated economics of regulation are derived.

  20. Long-run relationship between sectoral productivity and energy consumption in Malaysia: An aggregated and disaggregated viewpoint

    International Nuclear Information System (INIS)

    Rahman, Md Saifur; Junsheng, Ha; Shahari, Farihana; Aslam, Mohamed; Masud, Muhammad Mehedi; Banna, Hasanul; Liya, Ma

    2015-01-01

    This paper investigates the causal relationship between energy consumption and economic productivity in Malaysia at both aggregated and disaggregated levels. The investigation utilises total and sectoral (industrial and manufacturing) productivity growth during the 1971–2012 period, using the modified Granger causality test proposed by Toda and Yamamoto [1] within a multivariate framework. The economy of Malaysia was found to be energy dependent at aggregated and disaggregated levels of national and sectoral economic growth. At the disaggregated level, however, inefficient energy use is particularly identified with electricity and coal consumption patterns, which Granger-cause negative effects upon GDP (Gross Domestic Product) and manufacturing growth. These findings suggest that policies should focus more on improving energy efficiency and energy saving. Furthermore, since emissions are found to have a close relationship to economic output at national and sectoral levels, green technologies are of the highest necessity. - Highlights: • At the aggregate level, energy consumption significantly influences GDP (Gross Domestic Product). • At the disaggregate level, electricity and coal consumption does not help output growth. • Mineral and waste are found to positively Granger cause GDP. • The results reveal strong interactions between emissions and economic growth

  1. Gas pricing in Europe. Pt. 2. End-use consumption markets

    International Nuclear Information System (INIS)

    Donath, R.

    1996-01-01

    In the end-use consumption markets, gas is supplied to households, small consumers and industrial customers of retail distributors. As regards the delimitation of industrial customers receiving gas from retail distributors, there are great differences from one country to another, similarly to the market segmentation of wholesale markets. First of all, the article points out structures and regulations in the investigated end-use consumption markets. The second part investigates cost-oriented and value-oriented pricing principles, followed by a comparison of price structures based on the Eurostat gas purchasing criteria for households and small consumers in the third part. A fourth part summarizes the results. (orig./UA)

  2. Modeling end-use quality in U. S. soft wheat germplasm

    Science.gov (United States)

    End-use quality in soft wheat (Triticum aestivum L.) can be assessed by a wide array of measurements, generally categorized into grain, milling, and baking characteristics. Samples were obtained from four regional nurseries. Selected parameters included: test weight, kernel hardness, kernel size, ke...

  3. Motives and perceptions regarding electronic nicotine delivery systems (ENDS) use among adults with mental health conditions.

    Science.gov (United States)

    Spears, Claire Adams; Jones, Dina M; Weaver, Scott R; Pechacek, Terry F; Eriksen, Michael P

    2018-05-01

    Smoking rates are disproportionately high among adults with mental health conditions (MHC), and recent research suggests that among former smokers, those with MHC are more likely to use electronic nicotine delivery systems (ENDS). This study investigated reasons for ENDS use and related risk perceptions among individuals with versus without MHC. Among adult current ENDS users (n=550), associations between self-reported MHC diagnoses and motives for ENDS use and ENDS risk perceptions were examined, stratified by smoking status. There were no significant associations between MHC status and ENDS motives or perceptions in the overall sample. However, current smokers with MHC indicated thinking more about how ENDS might improve their health, and former smokers with MHC reported thinking less about how ENDS might harm their health, compared to their counterparts without MHC. Former smokers with MHC rated several reasons for ENDS use (e.g., less harmful than regular cigarettes; to quit smoking; appealing flavors) as more important than did those without MHC. Current and former smokers with MHC may be especially optimistic about health benefits of ENDS. However, they might also be prone to health risks of continued ENDS use or concurrent use with traditional cigarettes. It will be important for public health messaging to provide this population with accurate information about benefits and risks of ENDS.

  4. A new NAMA framework for dispersed energy end-use sectors

    DEFF Research Database (Denmark)

    Cheng, Chia-Chin

    2010-01-01

    This paper presents a new approach for a nationally appropriate mitigation actions (NAMA) framework that can unlock the huge potential for greenhouse gas mitigation in dispersed energy end-use sectors in developing countries; specifically, the building sector and the industrial sector. These two ...

  5. Estimating end-use emissions factors for policy analysis: the case of space cooling and heating.

    Science.gov (United States)

    Jacobsen, Grant D

    2014-06-17

    This paper provides the first estimates of end-use-specific emissions factors, which are estimates of the amount of a pollutant that is emitted when a unit of electricity is generated to meet demand from a specific end-use. In particular, this paper provides estimates of emissions factors for space cooling and heating, which are two of the most significant end-uses. The analysis is based on a novel two-stage regression framework that estimates emissions factors specific to cooling or heating by exploiting variation in cooling and heating demand induced by weather variation. Heating is associated with a similar or greater CO2 emissions factor than cooling in all regions. The difference is greatest in the Midwest and Northeast, where the estimated CO2 emissions factor for heating is more than 20% larger than the emissions factor for cooling. The minor differences in emissions factors in other regions, combined with the substantial difference in the demand patterns for cooling and heating, suggest that the use of overall regional emissions factors is reasonable for policy evaluations in certain locations. Accurately quantifying the emissions factors associated with different end-uses across regions will aid in designing improved energy and environmental policies.
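
    A stylized sketch of the identification idea (synthetic data and invented coefficients; not the paper's actual specification): stage one splits load into weather-driven cooling and heating components using degree variables, and stage two regresses emissions on those components to recover end-use-specific factors:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
temp = 15 + 10 * rng.standard_normal(T)
cdd = np.clip(temp - 18, 0, None)                 # cooling degrees
hdd = np.clip(18 - temp, 0, None)                 # heating degrees

base = 500 + 20 * rng.standard_normal(T)          # non-weather load (MWh)
load = base + 30 * cdd + 25 * hdd
emis = 0.45 * base + 0.60 * (30 * cdd) + 0.80 * (25 * hdd)  # t CO2

# stage 1: split load into weather-driven end-use components
X1 = np.column_stack([np.ones(T), cdd, hdd])
b1, *_ = np.linalg.lstsq(X1, load, rcond=None)
cool_load, heat_load = b1[1] * cdd, b1[2] * hdd

# stage 2: emissions per unit of each end-use load (t/MWh)
X2 = np.column_stack([np.ones(T), cool_load, heat_load])
b2, *_ = np.linalg.lstsq(X2, emis, rcond=None)
print(b2[1], b2[2])        # recovers ~0.60 (cooling) and ~0.80 (heating)
```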

  6. Structure and data requirements of an end-use model for residential ...

    African Journals Online (AJOL)

    2004-07-03

    Jul 3, 2004 ... 2 Department of Civil and Urban Engineering, Rand Afrikaans University, PO Box 524, Auckland Park 2006, ... One such approach is end-use modelling, which has a ... chemistry and are not easily removed after being dissolved into the ... with a rainfall of 12 mm/month for month m the same applies to all.

  7. Analysis on learning curves of end-use appliances for the establishment of price-sensitivity load model in competitive electricity market

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Sung Wook; Kim, Jung Hoon [Hongik University (Korea); Song, Kyung Bin [Keimyung University (Korea); Choi, Joon Young [Jeonju University (Korea)

    2001-07-01

    The shift of electricity charges from a cost basis to a price basis, brought about by the introduction of competition in the electricity market, lets consumers choose among a variety of charge schemes and causes a portion of loads to be affected by price. In addition, an index is required that captures the price volatility experienced on the power exchange, including gaming and strategic bidding by suppliers seeking to increase profits. A price-sensitive load model is therefore needed to describe mathematically the loads that respond to price. Moreover, the development of state-of-the-art technologies affects the electricity price, so the diffusion of high-efficiency end-uses and their prices affect load patterns. This paper presents an analysis of learning-curve algorithms used to investigate the correlation between end-use prices and load patterns. (author). 6 refs., 4 figs., 4 tabs.
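
    Learning (experience) curves of the kind analyzed here are commonly fit in the Wright's-law form cost = a * cumulative_volume^(-b), which is linear in log-log space. A hedged sketch with invented appliance price data (not the paper's series):

```python
import numpy as np

# Wright's law: unit_cost = a * cumulative_volume**(-b)
cum = np.array([1e3, 2e3, 4e3, 8e3, 1.6e4])        # cumulative units produced
cost = np.array([100.0, 82.0, 68.0, 55.0, 46.0])   # unit price of an end-use appliance

# fit in log-log space: log(cost) = log(a) - b * log(cum)
slope, intercept = np.polyfit(np.log(cum), np.log(cost), 1)
a, b = np.exp(intercept), -slope
learning_rate = 1 - 2 ** (-b)                      # fractional cost drop per doubling
print(f"b = {b:.3f}, learning rate = {learning_rate:.1%}")
```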

  8. Exergy and environmental comparison of the end use of vehicle fuels: The Brazilian case

    International Nuclear Information System (INIS)

    Flórez-Orrego, Daniel; Silva, Julio A.M.; Oliveira Jr, Silvio de

    2015-01-01

    Highlights: • Total and non-renewable exergy costs of the Brazilian transportation service are evaluated. • Specific CO2 emissions of the Brazilian transportation service are determined. • The overall exergy efficiency of the end use of vehicle fuels in the transportation sector is calculated. • A comparative extended analysis of the production and end use of transportation fuels is presented. - Abstract: In this work, a comparative exergy and environmental analysis of the vehicle fuel end use is presented. This analysis comprises petroleum and natural gas derivatives (including hydrogen), biofuels (ethanol and biodiesel), and their mixtures, besides the electricity generated in the Brazilian electricity mix, intended to be used in plug-in electric vehicles. The renewable and non-renewable unit exergy costs and the CO2 emission cost are proposed as suitable indicators for assessing the renewable exergy consumption intensity and the environmental impact, and for quantifying the thermodynamic performance of the transportation sector. This allows ranking the energy conversion processes along the vehicle fuel production routes and their end use, so that the best options for the transportation sector can be determined and better energy policies may be issued. It is found that if a drastic CO2 emissions abatement of the sector is pursued, a more intensive utilization of ethanol in the Brazilian transportation sector mix is advisable. However, as the overall exergy conversion efficiency of the sugar cane industry is still very low, which increases the unit exergy cost of ethanol, better production and end-use technologies are required. Nonetheless, with the current scenario of a predominantly renewable Brazilian electricity mix, based on more than 80% of renewable sources, this source consolidates as the most promising energy source to reduce the large amount of greenhouse gas emissions which the transportation sector is responsible for

  9. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...

  10. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). For the distributed models, additional
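
    The disaggregation step described above can be sketched compactly: a lumped parameter is spread over grid cells following a spatial proxy, with one extra calibration knob scaling the relative differences while the catchment mean is preserved. Everything below (the proxy, the values, and the power-law form of the scaling) is an invented illustration of that idea, not the study's code:

```python
import numpy as np

def disaggregate_parameter(lumped_value, proxy, scale=1.0):
    """Spread a lumped parameter over cells, preserving the catchment mean.

    `proxy` carries the relative spatial pattern (e.g., a soil property);
    `scale` stretches (>1) or flattens (<1) those relative differences.
    """
    rel = (proxy / proxy.mean()) ** scale         # relative pattern, mean ~1
    field = lumped_value * rel
    return field * (lumped_value / field.mean())  # enforce the exact mean

soil_proxy = np.array([0.8, 1.0, 1.3, 0.9, 1.5])  # e.g., rooting depth per cell
lumped_capacity = 120.0                            # mm, from the lumped calibration
print(disaggregate_parameter(lumped_capacity, soil_proxy, scale=0.7))
```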

  11. Prediction of kharif rice yield at Kharagpur using disaggregated extended range rainfall forecasts

    Science.gov (United States)

    Dhekale, B. S.; Nageswararao, M. M.; Nair, Archana; Mohanty, U. C.; Swain, D. K.; Singh, K. K.; Arunbabu, T.

    2017-08-01

    The Extended Range Forecasts System (ERFS) has been generating monthly and seasonal forecasts on a real-time basis throughout the year over India since 2009. India is one of the major rice producers and consumers in South Asia; more than 50% of the Indian population depends on rice as a staple food. Rice is mainly grown in the kharif season, which contributes 84% of the country's total annual rice production. Rice cultivation in India is largely rainfed, so the reliability of rainfall forecasts plays a crucial role in planning the kharif rice crop. In the present study, an attempt has been made to test the reliability of seasonal and sub-seasonal ERFS summer monsoon (June to September) rainfall forecasts for kharif rice yield predictions at Kharagpur, West Bengal, using the CERES-Rice (DSSATv4.5) model. The ERFS forecasts are produced as monthly and seasonal mean values and are converted into daily sequences with stochastic weather generators for use with crop growth models. The daily sequences are generated from ERFS seasonal (June-September) and sub-seasonal (July-September, August-September, and September) summer monsoon rainfall forecasts, which are used as input to the CERES-Rice crop simulation model for yield prediction in hindcast (1985-2008) and real-time (2009-2015) modes. The yield simulated using India Meteorological Department (IMD) observed daily rainfall data is considered the baseline yield for evaluating the performance of yields predicted using the ERFS forecasts. The findings revealed that stochastic disaggregation can be used to disaggregate the monthly/seasonal ERFS forecasts into daily sequences. The year-to-year variability in rice yield at Kharagpur is efficiently predicted by using the ERFS forecast products in hindcast as well as real time, and significant enhancement in the prediction skill is noticed with advancement of the season due to incorporation of observed weather data, which reduces uncertainty of

  12. Electrification Futures Study: End-Use Electric Technology Cost and Performance Projections through 2050

    Energy Technology Data Exchange (ETDEWEB)

    Vimmerstedt, Laura J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jadun, Paige [National Renewable Energy Lab. (NREL), Golden, CO (United States); McMillan, Colin A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Steinberg, Daniel C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Muratori, Matteo [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-02

    This report provides projected cost and performance assumptions for electric technologies considered in the Electrification Futures Study, a detailed and comprehensive analysis of the effects of widespread electrification of end-use service demands in all major economic sectors - transportation, residential and commercial buildings, and industry - for the contiguous United States through 2050. Using extensive literature searches and expert assessment, the authors identify slow, moderate, and rapid technology advancement sensitivities on technology cost and performance, and they offer a comparative analysis of levelized cost metrics as a reference indicator of total costs. The identification and characterization of these end-use service demand technologies is fundamental to the Electrification Futures Study. This report, the larger Electrification Futures Study, and the associated data and methodologies may be useful to planners and analysts in evaluating the potential role of electrification in an uncertain future. The report could be broadly applicable for other analysts and researchers who wish to assess electrification and electric technologies.

  13. Marginalization of end-use technologies in energy innovation for climate protection

    Science.gov (United States)

    Wilson, Charlie; Grubler, Arnulf; Gallagher, Kelly S.; Nemet, Gregory F.

    2012-11-01

    Mitigating climate change requires directed innovation efforts to develop and deploy energy technologies. Innovation activities are directed towards the outcome of climate protection by public institutions, policies and resources that in turn shape market behaviour. We analyse diverse indicators of activity throughout the innovation system to assess these efforts. We find efficient end-use technologies contribute large potential emission reductions and provide higher social returns on investment than energy-supply technologies. Yet public institutions, policies and financial resources pervasively privilege energy-supply technologies. Directed innovation efforts are strikingly misaligned with the needs of an emissions-constrained world. Significantly greater effort is needed to develop the full potential of efficient end-use technologies.

  14. End-Use Opportunity Analysis from Progress Indicator Results for ASHRAE Standard 90.1-2013

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xie, YuLong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    This report and an accompanying spreadsheet (PNNL 2014a) compile the end-use building simulation results for prototype buildings throughout the United States. The results represent the energy use under each edition of ASHRAE Standard 90.1, Energy Standard for Buildings Except Low-Rise Residential Buildings (ASHRAE 2004, 2007, 2010, 2013). PNNL examined the simulation results to determine how the remaining energy was used.

  15. Prevalence and correlates of ENDS use among adults being treated for chronic lung disease

    OpenAIRE

    Meghan Moran; Shyam Biswal; Joanna Cohen; Robert Henderson; Janet Holbrook; Venkataramana Sidhaye; Robert Wise

    2018-01-01

    Background Chronic lung disease such as asthma or COPD may be exacerbated by electronic nicotine delivery system (ENDS) use. Despite this, little is known about the extent to which adults with chronic lung disease use ENDS and what factors are associated with use. Methods We analyzed data from the second wave of the Population Assessment of Tobacco and Health (PATH) study. The PATH study recruited 28,362 U.S. adults over the age of 18 using a multi-stage randomized sampli...

  16. Development of a global computable general equilibrium model coupled with detailed energy end-use technology

    International Nuclear Information System (INIS)

    Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru

    2014-01-01

    Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to those of the traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model’s behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to represent energy consumption more realistically. The proposed model is compared with the aggregated traditional model under the same assumptions, in scenarios with and without mitigation roughly consistent with the two-degree climate mitigation target. Although the results for aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and the detailed technologies models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) as compared with the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and factors (energy intensity and carbon factor reduction) related to climate mitigation also varies among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
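
    The logit allocation mentioned in the abstract can be sketched compactly. A minimal illustration, with a hypothetical cost-sensitivity parameter lam and made-up technology costs; it is not the model's actual calibration.

    ```python
    import math

    def logit_shares(costs, lam=0.5):
        """Technology shares fall exponentially with cost; lam controls how
        strongly service demand concentrates on the cheapest option."""
        weights = [math.exp(-lam * c) for c in costs]
        total = sum(weights)
        return [w / total for w in weights]

    # Hypothetical levelized costs ($/GJ) of three competing heating technologies.
    shares = logit_shares([28.6, 31.0, 25.4])
    demand = 120.0  # PJ of heating service demand to allocate
    print([round(s * demand, 1) for s in shares])  # PJ met by each technology
    ```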

  17. Development of a protocol and catalogue for existing end-use metered data from Canadian utilities

    International Nuclear Information System (INIS)

    Robillard, P.; Lopes, J.

    1994-12-01

    Reasons for collection of end-use metering (EUM) data by electrical utilities were cited as: (1) demand-side management (DSM); (2) class end-use load research; (3) technology assessment; and (4) marketing and customer support. The emergence of DSM evaluation has served to focus on EUM as a strategic tool. The combination of DSM-related load research and other load data requirements has resulted in diverse means of end-use data collection, most of them involving frequent data collection. EUM technology was considered costly and time-consuming to implement. Also, the intrusive character of EUM can sometimes strain customer relations. Electric utilities were found to be interested in pursuing options for sharing of EUM data. An expert service function should be developed to provide EUM study design, implementation, and data analysis services to participating utilities. A planning process for coordinating projects among utilities was recommended to reduce single party costs. Organizational mechanisms for providing EUM services were identified. A number of recommendations were made directed to the CEA for the realization of an EUM service.

  18. Knowledge is power: Customer load metering in the Victorian End-Use Measurement Program

    Energy Technology Data Exchange (ETDEWEB)

    Gavin, G. [CitiPower Ltd., Melbourne, VIC (Australia)

    1995-12-31

    The Victorian End-Use Measurement Program is a sophisticated load metering program being conducted over 500 sites in Victoria, covering the major customer sectors of residential, commercial and industrial. Its goal is to gather sufficient data to determine with statistical accuracy the load profiles of these major sectors, together with the load profiles of selected customer end-uses in the residential and commercial sectors, and selected building types in the commercial sector. This paper discusses the major elements of the program, the history of its development, the design of the statistical and operational components of the program, and its implementation in the field. In the Victorian electricity industry, with the combination of contestable customer metering and the End-Use Measurement program metering for the franchise/non-contestable market, there is now a considerable flow of customer load data. The opportunity exists for an accurate understanding of customer load needs, and the minimization of risk in business operations in the retail and wholesale market. (author).

  19. End-use energy consumption estimates for U.S. commercial buildings, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Belzer, D.B.; Wrench, L.E.

    1997-03-01

    An accurate picture of how energy is used in the nation's stock of commercial buildings can serve a variety of program planning and policy needs of the US Department of Energy, utilities, and other groups seeking to improve the efficiency of energy use in the building sector. This report describes an estimation of energy consumption by end use based upon data from the 1992 Commercial Building Energy Consumption Survey (CBECS). The methodology used in the study combines elements of engineering simulations and statistical analysis to estimate end-use intensities for heating, cooling, ventilation, lighting, refrigeration, hot water, cooking, and miscellaneous equipment. Statistical Adjusted Engineering (SAE) models were estimated by building type. The nonlinear SAE models used variables such as building size, vintage, climate region, weekly operating hours, and employee density to adjust the engineering model predicted loads to the observed consumption (based upon utility billing information). End-use consumption by fuel was estimated for each of the 6,751 buildings in the 1992 CBECS. The report displays the summary results for 11 separate building types as well as for the total US commercial building stock. 4 figs., 15 tabs.
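
    The SAE step can be illustrated in simplified linear form (the report's actual models were nonlinear and used richer covariates): regress observed billing totals on engineering-simulated end-use loads, so the fitted coefficients act as realization rates that pull the engineering estimates toward observed consumption. All numbers below are invented.

    ```python
    import numpy as np

    # Engineering-model loads per building (columns: heating, cooling, lighting), MWh/yr.
    eng = np.array([[120.0, 60.0, 40.0],
                    [ 80.0, 90.0, 35.0],
                    [150.0, 40.0, 55.0],
                    [ 95.0, 70.0, 45.0]])
    observed = np.array([190.0, 175.0, 210.0, 180.0])  # utility-billed totals, MWh/yr

    # Least-squares realization rates: observed ≈ eng @ beta.
    beta, *_ = np.linalg.lstsq(eng, observed, rcond=None)
    adjusted = eng * beta  # statistically adjusted end-use estimates per building
    print(np.round(beta, 2), np.round(adjusted.sum(axis=1), 1))
    ```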

  20. A Practical Methodology for Disaggregating the Drivers of Drug Costs Using Administrative Data.

    Science.gov (United States)

    Lungu, Elena R; Manti, Orlando J; Levine, Mitchell A H; Clark, Douglas A; Potashnik, Tanya M; McKinley, Carol I

    2017-09-01

    Prescription drug expenditures represent a significant component of health care costs in Canada, with estimates of $28.8 billion spent in 2014. Identifying the major cost drivers and the effect they have on prescription drug expenditures allows policy makers and researchers to interpret current cost pressures and anticipate future expenditure levels. To identify the major drivers of prescription drug costs and to develop a methodology to disaggregate the impact of each of the individual drivers. The methodology proposed in this study uses the Laspeyres approach for cost decomposition. This approach isolates the effect of the change in a specific factor (e.g., price) by holding the other factor(s) (e.g., quantity) constant at the base-period value. The Laspeyres approach is expanded to a multi-factorial framework to isolate and quantify several factors that drive prescription drug cost. Three broad categories of effects are considered: volume, price and drug-mix effects. For each category, important sub-effects are quantified. This study presents a new and comprehensive methodology for decomposing the change in prescription drug costs over time including step-by-step demonstrations of how the formulas were derived. This methodology has practical applications for health policy decision makers and can aid researchers in conducting cost driver analyses. The methodology can be adjusted depending on the purpose and analytical depth of the research and data availability. © 2017 Journal of Population Therapeutics and Clinical Pharmacology. All rights reserved.
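
    The Laspeyres decomposition described above is compact in code. A minimal two-factor sketch covering price and volume effects only (the paper's framework adds drug-mix and further sub-effects), with hypothetical data:

    ```python
    def laspeyres_decompose(p0, q0, p1, q1):
        """Split the change in total cost into a price effect (quantities held
        at base-period values), a volume effect (prices held at base-period
        values), and a residual interaction term."""
        cost0 = sum(p * q for p, q in zip(p0, q0))
        cost1 = sum(p * q for p, q in zip(p1, q1))
        price_effect = sum((pb - pa) * qa for pa, pb, qa in zip(p0, p1, q0))
        volume_effect = sum((qb - qa) * pa for pa, qa, qb in zip(p0, q0, q1))
        interaction = (cost1 - cost0) - price_effect - volume_effect
        return price_effect, volume_effect, interaction

    # Two drugs: hypothetical unit prices ($) and prescription counts per period.
    print(laspeyres_decompose(p0=[10.0, 50.0], q0=[1000, 200],
                              p1=[11.0, 48.0], q1=[1100, 260]))
    # -> (600.0, 4000.0, -20.0): price, volume, and interaction effects
    ```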

  1. Modeling of photochemical air pollution in the Barcelona area with highly disaggregated anthropogenic and biogenic emissions

    International Nuclear Information System (INIS)

    Toll, I.; Baldasano, J.M.

    2000-01-01

    The city of Barcelona and its surrounding area, located in the western Mediterranean basin, can reach high levels of O3 in spring and summertime. To study the origin of this photochemical pollution, a numerical modeling approach was adopted and the episode that took place between 3 and 5 August 1990 was chosen. The main meteorological mesoscale flows were reproduced with the meteorological non-hydrostatic mesoscale model MEMO for 5 August 1990, when weak-pressure synoptic conditions took place. The emissions inventory was calculated with the EIM-LEM model, giving highly disaggregated anthropogenic and biogenic emissions in the zone studied, an 80 × 80 km² area around the city of Barcelona. Major sources of VOC were road traffic (51%) and vegetation (34%), while NOx was mostly emitted by road traffic (88%). However, emissions from some industrial stacks can be locally important and higher than those from road traffic. Photochemical simulation with the MARS model revealed that the combination of mesoscale wind flows and the above-mentioned local emissions is crucial in the production and transport of O3 in the area. On the other hand, the geostrophic wind also played an important role in advecting the air masses away from the places where O3 had been generated. The model simulations were also evaluated by comparing meteorological measurements from nine surface stations and concentration measurements from five surface stations, and the results proved to be fairly satisfactory. (author)

  2. Musings on privacy issues in health research involving disaggregate geographic data about individuals

    Directory of Open Access Journals (Sweden)

    AbdelMalik Philip

    2009-07-01

    Full Text Available This paper offers a state-of-the-art overview of the intertwined privacy, confidentiality, and security issues that are commonly encountered in health research involving disaggregate geographic data about individuals. Key definitions are provided, along with some examples of actual and potential security and confidentiality breaches and related incidents that captured mainstream media and public interest in recent months and years. The paper then goes on to present a brief survey of the research literature on location privacy/confidentiality concerns and on privacy-preserving solutions in conventional health research and beyond, touching on the emerging privacy issues associated with online consumer geoinformatics and location-based services. The 'missing ring' (in many treatments of the topic) of data security is also discussed. Personal information and privacy legislations in two countries, Canada and the UK, are covered, as well as some examples of recent research projects and events about the subject. Select highlights from a June 2009 URISA (Urban and Regional Information Systems Association) workshop entitled 'Protecting Privacy and Confidentiality of Geographic Data in Health Research' are then presented. The paper concludes by briefly charting the complexity of the domain and the many challenges associated with it, and proposing a novel, 'one stop shop' case-based reasoning framework to streamline the provision of clear and individualised guidance for the design and approval of new research projects (involving geographical identifiers about individuals), including crisp recommendations on which specific privacy-preserving solutions and approaches would be suitable in each case.

  3. Integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Tsoumas, Chris

    2011-01-01

    This paper investigates the integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S. The analysis is performed for the 1989-2009 period and covers all sectors which use these types of energy, i.e., transportation, residential, industrial, electric power and commercial. The results suggest that there are differences in the order of integration depending on both the type of energy and the sector involved. Moreover, the inclusion of structural breaks traced from the regulatory changes for these energy types seems to affect the order of integration for each series. - Highlights: → Increasing importance of renewable energy sources. → Integration properties of solar, geothermal and biomass energy consumption in the U.S. → The results show differences in the order of integration depending on the type of energy. → Structural breaks traced for these energy types affect the order of integration. → The order of integration is less than 1, so energy conservation policies are transitory.
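
    The order of integration examined here can be probed with a standard augmented Dickey-Fuller test, differencing until a unit root is rejected. A minimal sketch on simulated data in place of the actual consumption series; it assumes statsmodels is available and ignores the structural breaks the paper emphasizes.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def integration_order(series, alpha=0.05, max_d=2):
        """Return the number of differences needed before ADF rejects a unit root."""
        for d in range(max_d + 1):
            if adfuller(np.diff(series, n=d))[1] < alpha:
                return d
        return max_d  # inconclusive within max_d differences

    rng = np.random.default_rng(0)
    random_walk = np.cumsum(rng.normal(size=250))  # I(1) by construction
    print(integration_order(random_walk))          # typically prints 1
    ```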

  4. A disaggregated analysis of the environmental Kuznets curve for industrial CO2 emissions in China

    International Nuclear Information System (INIS)

    Wang, Yuan; Zhang, Chen; Lu, Aitong; Li, Li; He, Yanmin; ToJo, Junji; Zhu, Xiaodong

    2017-01-01

    Highlights: • The existence of the EKC hypothesis for industrial carbon emissions is tested for China. • A semi-parametric panel regression is used along with the STIRPAT model. • The validity of the EKC hypothesis varies across industry sectors. • The EKC relation to income exists in the electricity and heat production sector. • The EKC relation to urbanization exists in the manufacturing sector. - Abstract: The present study concentrates on a Chinese context and attempts to explicitly examine the impacts of economic growth and urbanization on various industrial carbon emissions through investigation of the existence of an environmental Kuznets curve. Within the Stochastic Impacts by Regression on Population, Affluence and Technology framework, this is the first attempt to simultaneously explore the nexus between income/urbanization and disaggregated industrial carbon dioxide emissions, using panel data together with semi-parametric panel fixed effects regression. Our dataset is a provincial panel of China spanning the period 2000–2013. With this information, we find evidence in support of an inverted U-shaped curve relationship between economic growth and carbon dioxide emissions in the electricity and heat production sector, but a similar inference only for urbanization and those emissions in the manufacturing sector. The heterogeneity in the EKC relationship across industry sectors implies that there is an urgent need to design more specific policies related to carbon emissions reduction for various industry sectors. Also, these findings contribute to advancing the emerging literature on the development-pollution nexus.
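
    In its simplest parametric form (the study itself uses semi-parametric panel fixed effects methods), an EKC check fits emissions on income and its square; an inverted U requires a negative quadratic coefficient, with the turning point at -b1/(2*b2). A sketch on synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    income = rng.uniform(1, 10, 300)  # e.g., GDP per capita, arbitrary units
    emissions = 5 + 3 * income - 0.2 * income**2 + rng.normal(0, 0.5, 300)

    b2, b1, b0 = np.polyfit(income, emissions, 2)  # highest degree first
    if b2 < 0:
        print(f"inverted U; turning point at income ≈ {-b1 / (2 * b2):.2f}")
    ```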

  5. Household energy consumption in the UK: A highly geographically and socio-economically disaggregated model

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Devising policies for a low carbon society requires a careful understanding of energy consumption in different types of households. In this paper, we explore patterns of UK household energy use and associated carbon emissions at national level and also at high levels of socio-economic and geographical disaggregation. In particular, we examine specific neighbourhoods with contrasting levels of deprivation, and typical 'types' (segments) of UK households based on socio-economic characteristics. Results support the hypothesis that different segments have widely differing patterns of consumption. We show that household energy use and associated carbon emissions are both strongly, but not solely, related to income levels. Other factors, such as the type of dwelling, tenure, household composition and rural/urban location are also extremely important. The methodology described in this paper can be used in various ways to inform policy-making. For example, results can help in targeting energy efficiency measures; trends from time series results will form a useful basis for scenario building; and the methodology may be used to model expected outcomes of possible policy options, such as personal carbon trading or a progressive tax regime on household energy consumption

  6. A DISAGGREGATED MEASURES APPROACH OF POVERTY STATUS OF FARMING HOUSEHOLDS IN KWARA STATE, NIGERIA

    Directory of Open Access Journals (Sweden)

    Grace Oluwabukunmi Akinsola

    2016-12-01

    Full Text Available In a bid to strengthen the agricultural sector in Nigeria, the Kwara State Government invited thirteen Zimbabwean farmers to participate in agricultural production in Kwara State in 2004. The main objective of this study therefore was to examine the effect of the activities of these foreign farmers on local farmers’ poverty status. A questionnaire was administered to the heads of farming households. A total of 240 respondents were used for the study, comprising 120 contact and 120 non-contact heads of farming households. The analytical tools employed included descriptive statistics and the Foster, Greer and Thorbecke method. The result indicated that the non-contact farming households are poorer than the contact farming households. Using the disaggregated poverty profile, poverty is most severe among the age group above 60 years. The intensity of poverty is also higher among the married group than the singles. Based on the education level, poverty seems to be most severe among those without any formal education. It is therefore recommended that a minimum of secondary school education should be encouraged among the farming households to prevent higher incidence of poverty in the study area.
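
    The Foster, Greer and Thorbecke (FGT) family has the closed form P_alpha = (1/n) * sum over the poor of ((z - y_i)/z)**alpha, where z is the poverty line: alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap, and alpha = 2 poverty severity. A minimal sketch with hypothetical incomes, not the study's data:

    ```python
    def fgt(incomes, z, alpha):
        """Foster-Greer-Thorbecke poverty index for poverty line z."""
        gaps = [((z - y) / z) ** alpha for y in incomes if y < z]
        return sum(gaps) / len(incomes)

    incomes = [300, 450, 800, 1200, 250, 900, 150, 600]  # hypothetical incomes
    z = 500  # hypothetical poverty line
    for alpha, name in [(0, "headcount"), (1, "poverty gap"), (2, "severity")]:
        print(f"{name}: {fgt(incomes, z, alpha):.3f}")
    ```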

  7. The importance of disaggregated freight flow forecasts to inform transport infrastructure investments

    Directory of Open Access Journals (Sweden)

    Jan H. Havenga

    2013-09-01

    Full Text Available This article presents the results of a comprehensive disaggregated commodity flow model for South Africa. The wealth of data available enables a segmented analysis of future freight transportation demand in order to assist with the prioritisation of transportation investments, the development of transport policy and the growth of the logistics service provider industry. In 2011, economic demand for commodities in South Africa’s competitive surface-freight transport market amounted to 622 million tons and is predicted to increase to 1,834 million tons by 2041, a compound annual growth rate of 3.67%. Fifty percent of corridor freight constitutes break bulk; intermodal solutions are therefore critical in South Africa. Scenario analysis indicates that 80% of corridor break-bulk tons can be serviced by four intermodal facilities – in Gauteng, Durban, Cape Town and Port Elizabeth. This would allow for the development of an investment planning hierarchy, enable industry targeting (through commodity visibility), ensure capacity development ahead of demand and lower the cost of logistics in South Africa.
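
    As a quick arithmetic check, the quoted growth rate follows directly from the forecast endpoints:

    ```python
    base, target, years = 622.0, 1834.0, 2041 - 2011  # million tons, 30 years
    cagr = (target / base) ** (1 / years) - 1
    print(f"{cagr:.2%}")  # -> 3.67%, matching the figure quoted above
    ```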

  8. How sex- and age-disaggregated data and gender and generational analyses can improve humanitarian response.

    Science.gov (United States)

    Mazurana, Dyan; Benelli, Prisca; Walker, Peter

    2013-07-01

    Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crises-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  9. Musings on privacy issues in health research involving disaggregate geographic data about individuals.

    Science.gov (United States)

    Boulos, Maged N Kamel; Curtis, Andrew J; Abdelmalik, Philip

    2009-07-20

    This paper offers a state-of-the-art overview of the intertwined privacy, confidentiality, and security issues that are commonly encountered in health research involving disaggregate geographic data about individuals. Key definitions are provided, along with some examples of actual and potential security and confidentiality breaches and related incidents that captured mainstream media and public interest in recent months and years. The paper then goes on to present a brief survey of the research literature on location privacy/confidentiality concerns and on privacy-preserving solutions in conventional health research and beyond, touching on the emerging privacy issues associated with online consumer geoinformatics and location-based services. The 'missing ring' (in many treatments of the topic) of data security is also discussed. Personal information and privacy legislations in two countries, Canada and the UK, are covered, as well as some examples of recent research projects and events about the subject. Select highlights from a June 2009 URISA (Urban and Regional Information Systems Association) workshop entitled 'Protecting Privacy and Confidentiality of Geographic Data in Health Research' are then presented. The paper concludes by briefly charting the complexity of the domain and the many challenges associated with it, and proposing a novel, 'one stop shop' case-based reasoning framework to streamline the provision of clear and individualised guidance for the design and approval of new research projects (involving geographical identifiers about individuals), including crisp recommendations on which specific privacy-preserving solutions and approaches would be suitable in each case.

  10. Spatially disaggregated population estimates in the absence of national population and housing census data

    Science.gov (United States)

    Wardrop, N. A.; Jochem, W. C.; Bird, T. J.; Chamberlain, H. R.; Clarke, D.; Kerr, D.; Bengtsson, L.; Juran, S.; Seaman, V.; Tatem, A. J.

    2018-01-01

    Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. PMID:29555739

  11. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    Science.gov (United States)

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

    The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to grid-cell level. To do so, finding the highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. Accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
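
    The disaggregation step itself reduces to proportional allocation: each administrative-unit total is spread over its grid cells in proportion to an ancillary weight surface. A minimal sketch with made-up weights standing in for the combined nighttime-light, population, and road-density layers:

    ```python
    import numpy as np

    def disaggregate(admin_total, weights):
        """Allocate an administrative-unit asset value to grid cells
        in proportion to an ancillary weight surface."""
        w = np.asarray(weights, dtype=float)
        return admin_total * w / w.sum()

    # One admin unit worth 500 (billion yuan, hypothetical) over a 2x3 cell block.
    weights = np.array([[0.9, 2.1, 0.2],
                        [1.5, 3.0, 0.3]])
    cells = disaggregate(500.0, weights)
    print(np.round(cells, 1), cells.sum())  # cell values sum back to 500
    ```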

  12. Disaggregation of MODIS surface temperature over an agricultural area using a time series of Formosat-2 images

    OpenAIRE

    Merlin, O.; Duchemin, Benoit; Hagolle, O.; Jacob, Frédéric; Coudert, B.; Chehbouni, Abdelghani; Dedieu, G.; Garatuza, J.; Kerr, Yann

    2010-01-01

    The temporal frequency of the thermal data provided by current spaceborne high-resolution imagery systems is inadequate for agricultural applications. As an alternative to the lack of high-resolution observations, kilometric thermal data can be disaggregated using a green (photosynthetically active) vegetation index e.g. NDVI (Normalized Difference Vegetation Index) collected at high resolution. Nevertheless, this approach is only valid in the condition...

  13. Disaggregating Orders of Water Scarcity - The Politics of Nexus in the Wami-Ruvu River Basin, Tanzania

    Directory of Open Access Journals (Sweden)

    Anna Mdee

    2017-02-01

    Full Text Available This article considers the dilemma of managing competing uses of surface water in ways that respond to social, ecological and economic needs. Current approaches to managing competing water use, such as Integrated Water Resources Management (IWRM) and the concept of the water-energy-food nexus, do not adequately disaggregate the political nature of water allocations. This is analysed using Mehta’s (2014) framework on orders of scarcity to disaggregate narratives of water scarcity in two ethnographic case studies in the Wami-Ruvu River Basin in Tanzania: one of a mountain river that provides water to urban Morogoro, and another of a large donor-supported irrigation scheme on the Wami River. These case studies allow us to explore different interfaces in the food-water-energy nexus. The article makes two points: that disaggregating water scarcity is essential for analysing the nexus; and that current institutional frameworks (such as IWRM) mask the political nature of the nexus, and therefore do not provide an adequate platform for adjudicating the interfaces of competing water use.

  14. Disaggregating radar-derived rainfall measurements in East Azarbaijan, Iran, using a spatial random-cascade model

    Science.gov (United States)

    Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed

    2017-07-01

    The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, with the help of three adjacent rainfall stations, is obtained. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with coverage area of 512 × 512 km² to a resolution of 32 × 32 km². Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
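
    A climatological Z-R relationship of power-law form, Z = aR^b, is inverted to estimate rain rate from reflectivity. The sketch below uses the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as placeholders; the paper derives local values for the Sahand radar instead.

    ```python
    def rain_rate(dbz, a=200.0, b=1.6):
        """Invert Z = a * R**b. Reflectivity arrives in dBZ, where
        Z [mm^6 m^-3] = 10**(dBZ / 10); returns R in mm/h."""
        z = 10.0 ** (dbz / 10.0)
        return (z / a) ** (1.0 / b)

    for dbz in (20, 35, 50):
        print(dbz, "dBZ ->", round(rain_rate(dbz), 1), "mm/h")
    ```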

  15. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  16. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
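
    A toy run of the basic concepts, tournament selection, one-point crossover, and bit-flip mutation, maximizing a trivial fitness function; this is a generic sketch, not the software tool the record describes.

    ```python
    import random

    random.seed(42)
    GENES, POP, GENERATIONS = 20, 30, 40

    def fitness(ind):            # toy objective: maximize the number of 1-bits
        return sum(ind)

    def tournament(pop):         # keep the fitter of two random individuals
        return max(random.sample(pop, 2), key=fitness)

    def crossover(a, b):         # one-point crossover
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    def mutate(ind, rate=0.02):  # independent bit-flips
        return [g ^ 1 if random.random() < rate else g for g in ind]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
    print(max(map(fitness, pop)), "of", GENES)
    ```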

  17. Biomethane storage: Evaluation of technologies, end uses, business models, and sustainability

    International Nuclear Information System (INIS)

    Budzianowski, Wojciech M.; Brodacka, Marlena

    2017-01-01

    Highlights: • Biomethane storage integrates the different energy subsystems. • It facilitates adoption of solar and wind energy sources. • It is essential to adequately match storages with their end uses and business models. • Business models must propose, create, and capture value linked with gas storage. • Sustainable is economically viable, environmentally benign, and socially beneficial. - Abstract: Biomethane is a renewable gas that can be turned into dispatchable resource through applying storage techniques. The storage enables the discharge of stored biomethane at any time and place it is required as gas turbine power, heat or transport fuel. Thus the stored biomethane could more efficiently serve various energy applications in the power, transport, heat, and gas systems as well as in industry. Biomethane storage may therefore integrate the different energy subsystems making the whole energy system more efficient. This work provides an overview and evaluation of biomethane storage technologies, end uses, business models and sustainability. It is shown that storage technologies are versatile, have different costs and efficiencies and may serve different end uses. Business models may be created or selected to fit regional spatial contexts, realistic demands for gas storage related services, and the level of available subsidies. By applying storage the sustainability of biomethane is greatly improved in terms of economic viability, reduced environmental impacts and greater social benefits. Stored biomethane may greatly facilitate adoption of intermittent renewable energy sources such as solar and wind. Other findings show that biomethane storage needs to be combined with grid services and other similar services to reduce overall storage costs.

  18. Using Sankey diagrams to map energy flow from primary fuel to end use

    International Nuclear Information System (INIS)

    Subramanyam, Veena; Paramshivan, Deepak; Kumar, Amit; Mondal, Md. Alam Hossain

    2015-01-01

    Highlights: • Energy flows from both supply and demand sides shown through Sankey diagrams. • Energy flows from reserves to energy end uses for primary and secondary fuels shown. • Five main energy demand sectors in Alberta are analyzed. • In residential/commercial sectors, highest energy consumption is in space heating. • In the industrial sector, highest energy use is in the mining subsector. - Abstract: The energy sector is the largest contributor to gross domestic product (GDP), income, employment, and government revenue in both developing and developed nations. But the energy sector has a significant environmental footprint due to greenhouse gas (GHG) emissions. Efficient production, conversion, and use of energy resources are key factors for reducing the environmental footprint. Hence it is necessary to understand energy flows from both the supply and the demand sides. Most energy analyses focus on improving energy efficiency broadly without considering the aggregate energy flow. We developed Sankey diagrams that map energy flow for both the demand and supply sides for the province of Alberta, Canada. The diagrams will help policy/decision makers, researchers, and others to understand energy flow from reserves through to final energy end uses for primary and secondary fuels in the five main energy demand sectors in Alberta: residential, commercial, industrial, agricultural, and transportation. The Sankey diagrams created for this study show total energy consumption, useful energy, and energy intensities of various end-use devices. The Long-range Energy Alternatives Planning System (LEAP) model is used in this study. The model showed that Alberta’s total input energy in the five demand sectors was 189 PJ, 186 PJ, 828.5 PJ, 398 PJ, and 50.83 PJ, respectively. On the supply side, the total energy input and output were found to be 644.84 PJ and 239 PJ, respectively. These results, along with the associated energy flows were depicted pictorially using
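
    Diagrams of this kind can be drafted with matplotlib's Sankey class. A minimal sketch with invented flows (positive values are inputs, negative are outputs); the study's LEAP-based diagrams are far more detailed.

    ```python
    import matplotlib.pyplot as plt
    from matplotlib.sankey import Sankey

    # Hypothetical sector balance: 100 PJ of fuel in; 60 PJ useful, 40 PJ losses.
    Sankey(flows=[100, -60, -40],
           labels=["fuel input", "useful energy", "losses"],
           orientations=[0, 0, -1],
           unit=" PJ", scale=0.01).finish()
    plt.title("Toy end-use energy balance")
    plt.show()
    ```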

  19. The Use of Energy in Malaysia: Tracing Energy Flows from Primary Source to End Use

    OpenAIRE

    Chinhao Chong; Weidou Ni; Linwei Ma; Pei Liu; Zheng Li

    2015-01-01

    Malaysia is a rapidly developing country in Southeast Asia that aims to achieve high-income country status by 2020; its economic growth is highly dependent on its abundant energy resources, especially natural gas and crude oil. In this paper, a complete picture of Malaysia’s energy use from primary source to end use is presented by mapping a Sankey diagram of Malaysia’s energy flows, together with ongoing trends analysis of the main factors influencing the energy flows. The results indicate t...

  20. Electrification Futures Study: End-Use Electric Technology Cost and Performance Projections through 2050

    Energy Technology Data Exchange (ETDEWEB)

    Jadun, Paige [National Renewable Energy Lab. (NREL), Golden, CO (United States); McMillan, Colin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Steinberg, Daniel [National Renewable Energy Lab. (NREL), Golden, CO (United States); Muratori, Matteo [National Renewable Energy Lab. (NREL), Golden, CO (United States); Vimmerstedt, Laura [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-12-01

    This report is the first in a series of Electrification Futures Study (EFS) publications. The EFS is a multiyear research project to explore widespread electrification in the future energy system of the United States. More specifically, the EFS is designed to examine electric technology advancement and adoption for end uses in all major economic sectors as well as electricity consumption growth and load profiles, future power system infrastructure development and operations, and the economic and environmental implications of widespread electrification. Because of the expansive scope and the multiyear duration of the study, research findings and supporting data will be published as a series of reports, with each report released on its own timeframe.

  1. 15 CFR Supplement No. 1 to Part 744 - Military End-Use Examples for § 744.17

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Military End-Use Examples for § 744.17 No. Supplement No. 1 to Part 744 Commerce and Foreign Trade Regulations Relating to Commerce and... End-Use Examples for § 744.17 (a) Examples of military end-uses (as described in § 744.17 (d) of this...

  2. Disaggregate demand for conventional and alternative fuelled vehicles in the Census Metropolitan Area of Hamilton, Canada

    Science.gov (United States)

    Potoglou, Dimitrios

    The focus of this thesis is twofold. First, it offers insight on how households' car-ownership behaviour is affected by urban form and availability of local transit at the place of residence, after controlling for socio-economic and demographic characteristics. Second, it addresses the importance of vehicle attributes, household and individual characteristics as well as economic incentives and urban form to potential demand for alternative fuelled vehicles. Data for the empirical analyses of the aforementioned research activities were obtained through an innovative Internet survey, which is also documented in this thesis, conducted in the Census Metropolitan Area of Hamilton. The survey included a retrospective questionnaire of households' number and type of vehicles and a stated choice experiment for assessing the potential demand for alternative fuelled vehicles. Established approaches and emerging trends in automobile demand modelling identified early on in this thesis suggest a disaggregate approach and specifically, the estimation of discrete choice models both for explaining car ownership and vehicle-type choice behaviour. It is shown that mixed and diverse land uses as well as short distances between home and work are likely to decrease the probability of households to own a large number of cars. Regarding the demand for alternative fuelled vehicles, while vehicle attributes are particularly important, incentives such as free parking and access to high occupancy vehicle lanes will not influence the choice of hybrids or alternative fuelled vehicles. An improved understanding of households' behaviour regarding the number of cars as well as the factors and trade-offs for choosing cleaner vehicles can be used to inform policy designed to reduce car ownership levels and encourage adoption of cleaner vehicle technologies in urban areas. Finally, the Internet survey sets the ground for further research on implementation and evaluation of this data collection method.

  3. Conditions for the Occurrence of Slaking and Other Disaggregation Processes under Rainfall

    Directory of Open Access Journals (Sweden)

    Frédéric Darboux

    2016-07-01

    Full Text Available Under rainfall conditions, aggregates may suffer breakdown by different mechanisms. Slaking is a very efficient breakdown mechanism. However, its occurrence under rainfall conditions has not been demonstrated. Therefore, the aim of this study was to evaluate the occurrence of slaking under rain. Two soils with silt loam (SL) and clay loam (CL) textures were analyzed. Two classes of aggregates were utilized: 1–3 mm and 3–5 mm. The aggregates were submitted to stability tests and to high intensity (90 mm·h⁻¹) and low intensity (28 mm·h⁻¹) rainfalls, and different kinetic energy impacts (large and small raindrops) using a rainfall simulator. The fragment size distributions were determined both after the stability tests and rainfall simulations, with the calculation of the mean weighted diameter (MWD). After the stability tests the SL presented smaller MWDs for all stability tests when compared to the CL. In both soils the lowest MWD was obtained using the fast wetting test, showing they were sensitive to slaking. For both soils and the two aggregate classes evaluated, the MWDs were recorded from the beginning of the rainfall event under the four rainfall conditions. The occurrence of slaking in the evaluated soils was not verified under the simulated rainfall conditions studied. The early disaggregation was strongly related to the cumulative kinetic energy, advocating for the occurrence of mechanical breakdown. Because slaking requires a very high wetting rate on initially dry aggregates, it seems unlikely to occur under field conditions, except perhaps for furrow irrigation.
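
    The mean weighted diameter used as the stability metric above is a mass-weighted average of fragment sizes, MWD = sum_i x_i * w_i, where x_i is the mean diameter of sieve class i and w_i its mass fraction. A sketch with hypothetical sieve data:

    ```python
    # Mean diameter of each sieve class (mm) and the mass retained in it (g).
    class_mean_diameter = [4.0, 2.0, 1.0, 0.5, 0.1]  # hypothetical classes
    mass_retained = [12.0, 18.0, 9.0, 6.0, 5.0]

    total = sum(mass_retained)
    mwd = sum(d * m / total for d, m in zip(class_mean_diameter, mass_retained))
    print(f"MWD = {mwd:.2f} mm")  # -> 1.93 mm
    ```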

  4. End-use matching for solar industrial process heat. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, K.C.; Hooker, D.W.; Rabl, A.; Stadjuhar, S.A.; West, R.E.

    1980-01-01

    Because of the large energy demand of industry (37% of US demand) and the wide spectrum of temperatures at which heat is required, the industrial sector appears to be very suitable for the matching of solar thermal technology with industrial process heat (IPH) requirements. A methodology for end-use matching has been devised, complete with required data bases and an evaluation program PROSYS/ECONMAT. Six cities in the United States were selected for an analysis of solar applications to IPH. Typical process heat requirements for 70% of the industrial plants in each city were identified and evaluated in conjunction with meteorological and economic data for each site to determine lowest-cost solar systems for each application. The flexibility and scope of PROSYS/ECONMAT is shown in a variety of sensitivity studies that expand the results of the six-city analysis. Case studies of two industrial plants were performed to evaluate the end-use matching procedure; these results are reported.

  5. The Use of Energy in Malaysia: Tracing Energy Flows from Primary Source to End Use

    Directory of Open Access Journals (Sweden)

    Chinhao Chong

    2015-04-01

    Full Text Available Malaysia is a rapidly developing country in Southeast Asia that aims to achieve high-income country status by 2020; its economic growth is highly dependent on its abundant energy resources, especially natural gas and crude oil. In this paper, a complete picture of Malaysia’s energy use from primary source to end use is presented by mapping a Sankey diagram of Malaysia’s energy flows, together with ongoing trends analysis of the main factors influencing the energy flows. The results indicate that Malaysia’s energy use depends heavily on fossil fuels, including oil, gas and coal. In the past 30 years, Malaysia has successfully diversified its energy structure by introducing more natural gas and coal into its power generation. To sustainably feed the rapidly growing energy demand in end-use sectors with the challenge of global climate change, Malaysia must pay more attention to the development of renewable energy, green technology and energy conservation in the future.

  6. Modernizing residential heating in Russia: End-use practices, legal developments, and future prospects

    International Nuclear Information System (INIS)

    Korppoo, Anna; Korobova, Nina

    2012-01-01

    This article explores the significance of modernization policies concerning Russia’s technically obsolete but socially important residential heating sector, focusing on the 2009 energy efficiency framework law and its prospects for implementation. Ownership and control structures are in flux throughout the heating sector chain. Inefficiencies, causing low service quality and rising prices, have already started eroding the market share of district heating, despite its potential benefits. End-use management practices – such as lack of metering, communal billing, and low prices that do not cover production costs – reduce consumer incentives to cut consumption. The diversity of end-users adds to the complexity of focused measures like energy-saving contracts. However, end-use sector reforms such as mandatory meter installation and increasing prices – even if socially acceptable and fully implemented – cannot alone provide the massive investments required. More appropriate is sector-wide reform with the government’s financial participation – especially if consumer efforts can yield better service quality. - Highlights: ► We analyze Russia’s energy efficiency policy on residential heating sector. ► Institutional structures and practices reduce incentives to cut consumption. ► Meter installation and increasing prices cannot deliver investments required. ► Government led sector-wide reform is required, linked to better service quality.

  7. A new NAMA framework for dispersed energy end-use sectors

    International Nuclear Information System (INIS)

    Cheng, C.-C.

    2010-01-01

    This paper presents a new approach for a nationally appropriate mitigation actions (NAMA) framework that can unlock the huge potential for greenhouse gas mitigation in dispersed energy end-use sectors in developing countries; specifically, the building sector and the industrial sector. These two sectors make up the largest portions of energy consumption in developing countries. However, due to multiple barriers and lack of effective polices, energy efficiency in dispersed energy end-use sectors has not been effectively put into practice. The new NAMA framework described in this paper is designed to fulfill the demand for public policies and public sector investment in developing countries and thereby boost private sector investment through project based market mechanisms, such as CDM. The new NAMA framework is designed as a need-based mechanism which effectively considers the conditions of each developing country. The building sector is used as an example to demonstrate how NAMA measures can be registered and implemented. The described new NAMA framework has the ability to interface efficiently with Kyoto Protocol mechanisms and to facilitate a systematic uptake for GHG emission reduction investment projects. This is an essential step to achieve the global climate change mitigation target and support sustainable development in developing countries.

  8. Geography and end use drive the diversification of worldwide winter rye populations.

    Science.gov (United States)

    Parat, Florence; Schwertfirm, Grit; Rudolph, Ulrike; Miedaner, Thomas; Korzun, Viktor; Bauer, Eva; Schön, Chris-Carolin; Tellier, Aurélien

    2016-01-01

    To meet the current challenges in human food production, improved understanding of the genetic diversity of crop species that maximizes the selection efficacy in breeding programs is needed. The present study offers new insights into the diversity, genetic structure and demographic history of cultivated rye (Secale cereale L.). We genotyped 620 individuals from 14 global rye populations with a different end use (grain or forage) at 32 genome-wide simple sequence repeat markers. We reveal the relationships among these populations, their sizes and the timing of domestication events using population genetics and model-based inference with approximate Bayesian computation. Our main results demonstrate (i) a high within-population variation and genetic diversity, (ii) an unexpected absence of reduction in diversity with an increasing improvement level and (iii) patterns suggestive of multiple domestication events. We suggest that the main drivers of diversification of winter rye are the end use of rye in two early regions of cultivation: rye forage in the Mediterranean area and grain in northeast Europe. The lower diversity and stronger differentiation of eastern European populations were most likely due to more intensive cultivation and breeding of rye in this region, in contrast to the Mediterranean region where it was considered a secondary crop or even a weed. We discuss the relevance of our results for the management of gene bank resources and the pitfalls of inference methods applied to crop domestication due to violation of model assumptions and model complexity. © 2015 John Wiley & Sons Ltd.

  9. Argos: Residential end-use simulation model for load management strategy analysis

    International Nuclear Information System (INIS)

    Capasso, A.; Lamedica, R.; Prudenzi, A.

    1992-01-01

    In recent years, load management (LM) strategies, aimed at the optimization of available energy resources as well as the reduction of investments for new power plants, have been applied worldwide in residential end-use assessments. However, forecasting LM strategy impacts on the residential sector is complex because it rests on a preliminary evaluation of customers' proclivity to adapt their load characteristics to utility aims. In order to reduce load analysis requirements, which are substantial due to the need for thorough statistical analyses and complex field tests and measurements, the availability of models taking into account customer behavioural aspects is of paramount importance. This paper illustrates a simulation model that allows numerical evaluation of the effectiveness of some LM strategies applied to a residential end-use area load profile, as previously determined by aggregating the contributions of individual households. This model enabled the evaluation of the impact, on the load profile, of a time-of-day tariff such as that recently introduced in Italy
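
    The bottom-up core of such a model, aggregating individual household profiles and then applying a time-of-day tariff response, can be sketched briefly. The profiles and the 15% load-shifting rule below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    hours = np.arange(24)

    # Synthetic household load profiles (kW): a morning and an evening peak.
    base = 0.3 + 0.5 * np.exp(-(hours - 8) ** 2 / 8) + 0.9 * np.exp(-(hours - 19) ** 2 / 6)
    households = base * rng.uniform(0.6, 1.4, size=(500, 1))  # 500 households
    area_load = households.sum(axis=0)                        # aggregate profile

    # Crude time-of-day tariff response: 15% of the 18-21h load shifts to 23-02h.
    shifted = 0.15 * area_load[18:21].sum()
    managed = area_load.copy()
    managed[18:21] *= 0.85
    managed[[23, 0, 1]] += shifted / 3
    print(round(area_load.max(), 1), "->", round(managed.max(), 1), "kW peak")
    ```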

  10. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    Science.gov (United States)

    Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.

    2015-10-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.
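
    As a quick check on that figure, exponential growth at ~2.4% per year implies a doubling time of ln(2)/0.024, roughly 29 years:

    ```python
    import math
    print(math.log(2) / 0.024)  # doubling time in years -> ~28.9
    ```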

  11. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  12. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  13. Mapping the Energy Flow from Supply to End Use in three Geographic Regions of China

    DEFF Research Database (Denmark)

    Mischke, Peggy; Xiong, Weiming

    China's past economic development policies resulted in different energy infrastructure patterns across China. There is a long tradition in analysing and discussing regional disparities of China's economy. For more than 20 years, regional differences in GDP, industrial outputs, household income and consumption were analysed across China's provincial units. Regional disparities in China's current energy flow are rarely visualised and quantified from a comprehensive, system-wide perspective that traces all major fuels and energy carriers in supply, transformation and final end-use in different sectors. A few national and provincial energy flow diagrams of China have been developed since 2000, although with limited detail on major regional disparities and inter-regional fuel flows. No regional energy flow charts are yet available for East-, Central- and West-China. This study maps and quantifies energy

  14. A MEMS AC current sensor for residential and commercial electricity end-use monitoring

    International Nuclear Information System (INIS)

    Leland, E S; Wright, P K; White, R M

    2009-01-01

    This paper presents a novel prototype MEMS sensor for alternating current designed for monitoring electricity end-use in residential and commercial environments. This new current sensor design is comprised of a piezoelectric MEMS cantilever with a permanent magnet mounted on the cantilever's free end. When placed near a wire carrying AC current, the magnet is driven sinusoidally, producing a voltage in the cantilever proportional to the current being measured. Analytical models were developed to predict the applicable magnetic forces and piezoelectric voltage output in order to guide the design of a sensor prototype. This paper also details the fabrication process for this sensor design. Released piezoelectric MEMS cantilevers have been fabricated using a four-mask process and aluminum nitride as the active piezoelectric material. Dispenser-printed microscale composite permanent magnets have been integrated, resulting in the first MEMS-scale prototypes of this current sensor design

  15. A Crosswalk of Mineral Commodity End Uses and North American Industry Classification System (NAICS) codes

    Science.gov (United States)

    Barry, James J.; Matos, Grecia R.; Menzie, W. David

    2015-09-14

    This crosswalk is based on the premise that there is a connection between the way mineral commodities are used and how this use is reflected in the economy. Raw mineral commodities are the basic materials from which goods, finished products, or intermediate materials are manufactured or made. Mineral commodities are vital to the development of the U.S. economy and they impact nearly every industrial segment of the economy, representing 12.2 percent of the U.S. gross domestic product (GDP) in 2010 (U.S. Bureau of Economic Analysis, 2014). In an effort to better understand the distribution of mineral commodities in the economy, the U.S. Geological Survey (USGS) attempts to link the end uses of mineral commodities to the corresponding North American Industry Classification System (NAICS) codes.

  16. Residential Lighting End-Use Consumption Study: Estimation Framework and Initial Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Will R.; Goldberg, Miriam L.; Tanimoto, Paulo M.; Celnicker, Dane R.; Poplawski, Michael E.

    2012-12-01

    The U.S. DOE Residential Lighting End-Use Consumption Study is an initiative of the U.S. Department of Energy’s (DOE’s) Solid-State Lighting Program that aims to improve the understanding of lighting energy usage in residential dwellings. The study has developed a regional estimation framework within a national sample design that allows for the estimation of lamp usage and energy consumption 1) nationally and by region of the United States, 2) by certain household characteristics, 3) by location within the home, 4) by certain lamp characteristics, and 5) by certain categorical cross-classifications (e.g., by dwelling type AND lamp type or fixture type AND control type).

  17. Average regional end-use energy price projections to the year 2030

    International Nuclear Information System (INIS)

    1991-01-01

    The energy prices shown in this report cover the period from 1991 through 2030. These prices reflect sector/fuel price projections from the Annual Energy Outlook 1991 (AEO) base case, developed using the Energy Information Administration's (EIA) Intermediate Future Forecasting System (IFFS) forecasting model. Projections through 2010 are AEO base case forecasts. Projections for the period from 2011 through 2030 were developed separately from the AEO for this report, and the basis for these projections is described in Chapter 3. Projections in this report include average energy prices for each of four Census Regions for the residential, commercial, industrial, and transportation end-use sectors. Energy sources include electricity, distillate fuel oil, liquefied petroleum gas, motor gasoline, residual fuel oil, natural gas, and steam coal. (VC)

  18. Energy conservation: policy issues and end-use scenarios of savings potential

    Energy Technology Data Exchange (ETDEWEB)

    1978-09-01

    The enclosed work is based on previous research during this fiscal year, contained in Construction of Energy Conservation Scenarios: Interim Report of Work in Progress, June 1978. Five subjects were investigated and summaries were published for each subject in separate publications. This publication summarizes policy issues on the five subjects: tradeoffs of municipal solid-waste-processing alternatives (economics of garbage collection; mechanical versus home separation of recyclables); policy barriers and investment decisions in industry (methodology for identification of potential barriers to industrial energy conservation; process of industrial investment decision making); energy-efficient recreational travel (information system to promote energy-efficient recreational travel; recreational travel; national importance and individual decision making); energy-efficient buildings (causes of litigation against energy-conservation building codes; description of the building process); and end-use energy-conservation data base and scenarios (residential; commercial; transportation; and industrial).

  19. Energy end use statistics and estimations in the Polish household sector

    International Nuclear Information System (INIS)

    Gilecki, R.

    1997-01-01

    Energy statistics in Poland were in the past concentrated on energy production and industrial consumption, and little information was available on household energy consumption. This lack of data was an important barrier to various analyses and forecasts of energy balance developments. In recent years, successful attempts were made to acquire a wider and more reliable picture of household energy consumption: household surveys were carried out, and some existing data were analyzed and verified. A better and more detailed picture of household energy use was constructed in this way. The breakdown of energy consumption by end-use category (space heating, water heating, cooking, electrical appliances) was estimated quite reliably. International cooperation and guidance were drawn upon in the course of this research on Polish household energy consumption. (author). 6 refs

  20. Energy end use statistics and estimations in the Polish household sector

    Energy Technology Data Exchange (ETDEWEB)

    Gilecki, R [Energy Information Centre, Warsaw (Poland)

    1997-09-01

    Energy statistics in Poland were in the past concentrated on energy production and industrial consumption, and little information was available on household energy consumption. This lack of data was an important barrier to various analyses and forecasts of energy balance developments. In recent years, successful attempts were made to acquire a wider and more reliable picture of household energy consumption: household surveys were carried out, and some existing data were analyzed and verified. A better and more detailed picture of household energy use was constructed in this way. The breakdown of energy consumption by end-use category (space heating, water heating, cooking, electrical appliances) was estimated quite reliably. International cooperation and guidance were drawn upon in the course of this research on Polish household energy consumption. (author). 6 refs.

  1. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-01-01

    National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0–92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women’s access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve

  2. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.

  3. Technology data characterizing water heating in commercial buildings: Application to end-use forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Sezgen, O.; Koomey, J.G.

    1995-12-01

    Commercial-sector conservation analyses have traditionally focused on lighting and space conditioning because of their relatively large shares of electricity and fuel consumption in commercial buildings. In this report we focus on water heating, which is one of the neglected end uses in the commercial sector. The share of the water-heating end use in commercial-sector electricity consumption is 3%, which corresponds to 0.3 quadrillion Btu (quads) of primary energy consumption. Water heating accounts for 15% of commercial-sector fuel use, which corresponds to 1.6 quads of primary energy consumption. Although smaller in absolute size than the savings associated with lighting and space conditioning, the potential cost-effective energy savings from water heaters are large enough in percentage terms to warrant closer attention. In addition, water heating is much more important in particular building types than in the commercial sector as a whole. Fuel consumption for water heating is highest in lodging establishments, hospitals, and restaurants (0.27, 0.22, and 0.19 quads, respectively); water heating's share of fuel consumption for these building types is 35%, 18% and 32%, respectively. At the Lawrence Berkeley National Laboratory, we have developed and refined a base-year data set characterizing water heating technologies in commercial buildings as well as a modeling framework. We present the data and modeling framework in this report. The present commercial floorstock is characterized in terms of water heating requirements and technology saturations. Cost-efficiency data for water heating technologies are also developed. These data are intended to support models used for forecasting energy use of water heating in the commercial sector.

  4. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    Science.gov (United States)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies rely mainly on general circulation models (GCMs) and the related emission scenarios. Realistic and reliable GCM data are crucial for national- or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features because of imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the basin of interest, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection uses the regional climate features of the seasonal evolution as a benchmark and depends mainly on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peaks-over-threshold series. The error in rain-day frequency is fixed by rank-order statistics, and the seasonal-variation problem is solved by fitting a gamma distribution in each month for in-situ stations against the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined for several basins in various climate…
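
    A rough sketch of the statistical bias-correction chain described in this record follows: drizzle-day removal by rank-order matching, generalized Pareto (GPD) quantile mapping for the heavy tail, and gamma-to-gamma quantile mapping for the body of each month's distribution. This is an illustrative reconstruction under stated assumptions, not the authors' code; the function name, the 0.1 mm/day wet-day threshold and the 95th-percentile tail cutoff are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def bias_correct_month(gcm, obs, wet_threshold=0.1, heavy_q=0.95):
        """Illustrative three-step correction for one calendar month.

        gcm, obs: 1-D arrays of daily precipitation (mm/day) for the same
        calendar month, pooled over the calibration years.
        """
        # 1) Drizzle fix: zero out low-intensity days so the GCM matches the
        #    observed dry-day frequency (rank-order matching).
        dry_frac = np.mean(obs < wet_threshold)
        corrected = np.where(gcm < np.quantile(gcm, dry_frac), 0.0, gcm)

        wet = corrected[corrected > 0]
        obs_wet = obs[obs >= wet_threshold]

        # 2) Heavy tail: fit GPDs above a peaks-over-threshold level and map
        #    GCM exceedances onto the observed GPD by quantile mapping.
        u_gcm, u_obs = np.quantile(wet, heavy_q), np.quantile(obs_wet, heavy_q)
        gpd_gcm = stats.genpareto.fit(wet[wet > u_gcm] - u_gcm, floc=0)
        gpd_obs = stats.genpareto.fit(obs_wet[obs_wet > u_obs] - u_obs, floc=0)

        # 3) Body of the distribution: gamma-to-gamma quantile mapping, fitted
        #    month by month against in-situ stations.
        gam_gcm = stats.gamma.fit(wet[wet <= u_gcm], floc=0)
        gam_obs = stats.gamma.fit(obs_wet[obs_wet <= u_obs], floc=0)

        out = corrected.copy()
        heavy = corrected > u_gcm
        body = (corrected > 0) & ~heavy
        out[heavy] = u_obs + stats.genpareto.ppf(
            stats.genpareto.cdf(corrected[heavy] - u_gcm, *gpd_gcm), *gpd_obs)
        out[body] = stats.gamma.ppf(
            stats.gamma.cdf(corrected[body], *gam_gcm), *gam_obs)
        return out
    ```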

  5. Improving and disaggregating N_2O emission factors for ruminant excreta on temperate pasture soils

    International Nuclear Information System (INIS)

    Krol, D.J.; Carolan, R.; Minet, E.; McGeough, K.L.; Watson, C.J.; Forrestal, P.J.; Lanigan, G.J.; Richards, K.G.

    2016-01-01

    Cattle excreta deposited on grazed grasslands are a major source of the greenhouse gas (GHG) nitrous oxide (N_2O). Currently, many countries use the IPCC default emission factor (EF) of 2% to estimate excreta-derived N_2O emissions. However, emissions can vary greatly depending on the type of excreta (dung or urine), soil type and timing of application. Therefore, three experiments were conducted to quantify excreta-derived N_2O emissions and their associated EFs, and to assess the effect of soil type, season of application and type of excreta on the magnitude of losses. Cattle dung, urine and artificial urine treatments were applied in spring, summer and autumn to three temperate grassland sites with varying soil and weather conditions. Nitrous oxide emissions were measured from the three experiments over 12 months to generate annual N_2O emission factors. The EFs from urine-treated soil were greater (0.30–4.81% for real urine and 0.13–3.82% for synthetic urine) than from dung treatments (−0.02–1.48%). Nitrous oxide emissions were driven by environmental conditions and could be predicted by rainfall and temperature before, and soil moisture deficit after, application; highlighting the potential for a decision support tool to reduce N_2O emissions by modifying grazing management based on these parameters. Emission factors varied seasonally, with the highest EFs in autumn, and were also dependent on soil type, with the lowest EFs observed from well-drained and the highest from imperfectly drained soil. The EFs averaged 0.31 and 1.18% for cattle dung and urine, respectively, both of which were considerably lower than the IPCC default value of 2%. These results support both lowering and disaggregating EFs by excreta type. - Highlights: • N_2O emissions were measured from cattle excreta applied to pasture. • N_2O was universally higher from urine compared with dung. • N_2O was driven by rainfall, temperature and soil moisture deficit. • Emission
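
    The emission-factor arithmetic behind these results is simple; a minimal sketch follows (the flux and application numbers are hypothetical, chosen only to land inside the reported urine EF range):

    ```python
    def n2o_emission_factor(flux_treatment_kg, flux_control_kg, n_applied_kg):
        """Annual N2O emission factor (%) for an excreta treatment:
        EF = (cumulative N2O-N from treated plot - control) / N applied * 100
        """
        return (flux_treatment_kg - flux_control_kg) / n_applied_kg * 100.0

    # Hypothetical example: 0.9 kg N2O-N/ha from urine patches, 0.3 kg N2O-N/ha
    # from the untreated control, with 50 kg N/ha applied as urine-N:
    print(n2o_emission_factor(0.9, 0.3, 50.0))  # -> 1.2 (%), below the 2% default
    ```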

  6. Foreign labor and regional labor markets: aggregate and disaggregate impact on growth and wages in Danish regions

    DEFF Research Database (Denmark)

    Schmidt, Torben Dall; Jensen, Peter Sandholt

    2013-01-01

    The analysis finds non-negative effects on the job opportunities for Danish workers in regional labor markets, whereas the evidence of a regional wage growth effect is mixed. We also present disaggregated results focusing on regional heterogeneity of business structures, skill levels and backgrounds of foreign labor. The results are interpreted within the specific Danish labor market context and the associated regional outcomes. This adds to previous findings and emphasizes the importance of labor market institutions for the effect of foreign labor on regional employment growth.

  7. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Hugh [ARIES Collaborative, New York, NY (United States); Wade, Jeremy [ARIES Collaborative, New York, NY (United States)

    2014-04-01

    While it is important to make the equipment (or "plant") in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10%-30% of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) in five houses near Syracuse, NY, and programmed a data logger to collect data at 5-second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.
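
    The "useful" accounting described above reduces to volume-weighted thresholding of each draw; a minimal sketch under stated assumptions (the 105 °F usefulness threshold and the flow series are hypothetical; the 5 s sampling interval matches the loggers in this study):

    ```python
    import numpy as np

    def useful_fraction(temps_f, flow_gpm, threshold_f=105.0, dt_s=5.0):
        """Fraction of a hot-water draw's volume delivered at or above a
        usefulness threshold temperature at the fixture."""
        temps = np.asarray(temps_f, dtype=float)
        flow = np.asarray(flow_gpm, dtype=float)
        vol = flow * dt_s / 60.0  # gallons delivered per 5-second sample
        return vol[temps >= threshold_f].sum() / vol.sum()

    # Hypothetical draw: water at the fixture warms up over six samples.
    print(useful_fraction([70, 85, 100, 110, 118, 120], [1.5] * 6))  # -> 0.5
    ```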

  8. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, H.; Wade, J.

    2014-04-01

    While it is important to make the equipment (or 'plant') in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10 to 30 percent of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) and programmed a data logger to collect data at 5-second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Five houses near Syracuse, NY, were monitored. Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.

  9. Water End-Uses in Low-Income Houses in Southern Brazil

    Directory of Open Access Journals (Sweden)

    Ana Kelly Marinoski

    2014-07-01

    Knowing water consumption patterns in buildings is key information for water planning. This article aims to characterize the water consumption pattern and water end-uses in low-income houses in the region of Florianópolis, Southern Brazil. Data were collected by interviewing householders, as well as by measuring the flow rate of existing water fixtures and appliances. The results indicated that the shower was the fixture with the largest water consumption in households, i.e., about 30%–36% of total water consumption on average, followed by the toilet (18%–20%). The surveyed households consumed 111 to 152 L/capita·day on average, depending on the income range. No correlation was found between income and water consumption. The results of this study can be used to estimate the water consumption of new buildings, as well as to develop integrated water management strategies in low-income developments in Florianópolis, such as water-saving plumbing fixtures, rainwater harvesting, and greywater reuse. Saving water in low-income houses would likely defer capital investments in new water assets needed to expand water and wastewater services.

  10. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    This is an empirical investigation of the homogeneity of gender-disaggregated labor using Cobb-Douglas and single-/multi-factor translog production functions, and labor productivity functions, for the USA. The results based on the single-factor translog model indicated that: an increase in the capital/female-labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and the productivity of an input leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital than female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts are required to investigate the factors influencing gender-disaggregated labor productivity, and to design policies that achieve gender parity in numbers and productivity in the labor force and increase the ease of substitution between male and female labor.
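
    For reference, the functional forms named in this record can be written for capital K and gender-disaggregated labor inputs L_M and L_F as below; these are the standard textbook forms, not necessarily the exact specification estimated in the study:

    ```latex
    % Cobb-Douglas with capital and gender-disaggregated labor
    \ln Y = \ln A + \alpha \ln K + \beta_M \ln L_M + \beta_F \ln L_F

    % Translog generalization: second-order and cross terms relax the
    % unit-elasticity-of-substitution restriction of Cobb-Douglas
    \ln Y = \ln A + \sum_i \alpha_i \ln x_i
          + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j ,
    \qquad x \in \{K, L_M, L_F\}, \quad \beta_{ij} = \beta_{ji}
    ```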

  11. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    Science.gov (United States)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map of the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, where it yields lower correlation values; in such a situation it can only provide a first overview of the spatial distribution of the emissions generated by traffic activities.
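
    The simplified top-down method assessed here amounts to allocating a city-wide emission total in proportion to street density; a minimal sketch of that allocation and of the correlation check against a bottom-up reference map (the function names, grid and numbers are hypothetical):

    ```python
    import numpy as np

    def disaggregate_by_street_density(city_total, street_km_per_cell):
        """Distribute a city-wide traffic emission total across grid cells
        in proportion to street length per cell (the street-density proxy)."""
        w = np.asarray(street_km_per_cell, dtype=float)
        return city_total * w / w.sum()

    def spatial_correlation(top_down_map, bottom_up_map):
        """Pearson correlation between two emission maps, as used in the
        study to grade spatial accuracy (>0.8 good; lower in complex cities)."""
        return np.corrcoef(np.ravel(top_down_map), np.ravel(bottom_up_map))[0, 1]

    # Hypothetical 2x3 grid of street lengths (km) and a bottom-up reference:
    streets = np.array([[12.0, 8.0, 2.0], [6.0, 4.0, 1.0]])
    top_down = disaggregate_by_street_density(1000.0, streets)  # t/yr per cell
    bottom_up = np.array([[380, 220, 70], [190, 110, 30]])
    print(spatial_correlation(top_down, bottom_up))
    ```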

  12. A Sub-category Disaggregated Greenhouse Gas Emission Inventory for the Bogota Region, Colombia

    Science.gov (United States)

    Pulido-Guio, A. D.; Rojas, A. M.; Ossma, L. J.; Jimenez-Pizarro, R.

    2012-12-01

    Several international organizations, such as UNDP and UNEP, have recently recognized the importance of empowering sub-national decision levels on climate governance according to the subsidiarity principle. Regional and municipal authorities are directly responsible for land use management and for regulating economic sectors that emit greenhouse gases (GHG) and are vulnerable to climate change. Sub-national authorities are also closer to the population, which makes them better suited for educating the public and for achieving commitment among stakeholders. This investigation was developed within the frame of the Regional Integrated Program on Climate Change for the Cundinamarca-Bogota Region (PRICC), an initiative aimed at incorporating the climate dimension into regional and local decision making. The region composed of Bogota and its nearest, semi-rural area of influence (Province of Cundinamarca) is the most important population and economic center of Colombia. Our investigation serves two purposes: a) to establish methodologies for estimating regional GHG emissions appropriate to the Colombian context, and b) to disaggregate GHG emissions by economic sector as a mitigation decision-making tool. GHG emissions were calculated using IPCC 1996 Tier 1 methodologies, as there are no regional- or country-specific emission factors available for Colombia. Top-down (TD) methodologies, based on national and regional energy use intensity, per capita consumption and fertilizer use, were developed and applied to estimate activities for the following categories: fuel use in the industrial, commercial and residential sectors (except NG and LPG), use of ozone-depleting substances (ODS) and substitutes, and fertilizer use (for total emissions of agricultural soils). The emissions from the remaining 22 categories were calculated using bottom-up (BU) methodologies, given the availability of regional information. The total GHG emissions in the Cundinamarca-Bogota Region in 2008 are…

  13. Residential appliance data, assumptions and methodology for end-use forecasting with EPRI-REEPS 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, R.J.; Johnson, F.X.; Brown, R.E.; Hanford, J.W.; Koomey, J.G.

    1994-05-01

    This report details the data, assumptions and methodology for end-use forecasting of appliance energy use in the US residential sector. Our analysis uses the modeling framework provided by the Appliance Model in the Residential End-Use Energy Planning System (REEPS), which was developed by the Electric Power Research Institute. In this modeling framework, appliances include essentially all residential end-uses other than space conditioning end-uses. We have defined a distinct appliance model for each end-use based on a common modeling framework provided in the REEPS software. This report details our development of the following appliance models: refrigerator, freezer, dryer, water heater, clothes washer, dishwasher, lighting, cooking and miscellaneous. Taken together, appliances account for approximately 70% of electricity consumption and 30% of natural gas consumption in the US residential sector. Appliances are thus important to those residential sector policies or programs aimed at improving the efficiency of electricity and natural gas consumption. This report is primarily methodological in nature, taking the reader through the entire process of developing the baseline for residential appliance end-uses. Analysis steps documented in this report include: gathering technology and market data for each appliance end-use and specific technologies within those end-uses, developing cost data for the various technologies, and specifying decision models to forecast future purchase decisions by households. Our implementation of the REEPS 2.1 modeling framework draws on the extensive technology, cost and market data assembled by LBL for the purpose of analyzing federal energy conservation standards. The resulting residential appliance forecasting model offers a flexible and accurate tool for analyzing the effect of policies at the national level.

  14. 3D Finite Volume Modeling of ENDE Using Electromagnetic T-Formulation

    Directory of Open Access Journals (Sweden)

    Yue Li

    2012-01-01

    An improved method that can analyze the eddy current density in conducting materials using the finite volume method is proposed on the basis of Maxwell's equations and the T-formulation. The algorithm is applied to solve 3D electromagnetic nondestructive evaluation (ENDE) benchmark problems. The computing code is applied to study an Inconel 600 workpiece with holes or cracks. The impedance change due to the presence of the crack is evaluated and compared with the experimental data of benchmark problems No. 1 and No. 2. The results show a good agreement between calculated and measured data.

  15. Assessment of end-use electricity consumption and peak demand by Townsville's housing stock

    International Nuclear Information System (INIS)

    Ren, Zhengen; Paevere, Phillip; Grozev, George; Egan, Stephen; Anticev, Julia

    2013-01-01

    We have developed a comprehensive model to estimate the annual end-use electricity consumption and peak demand of housing stock, considering occupants' use of air conditioning systems and major appliances. The model was applied to analyse private dwellings in Townsville, Australia's largest tropical city. For the financial year (FY) 2010–11 the predicted results agreed with the actual electricity consumption with an error of less than 10% for cooling thermostat settings at the standard setting temperature of 26.5 °C and at 1.0 °C higher than the standard setting. The greatest difference in monthly electricity consumption in the summer season between the model and the actual data decreased from 21% to 2% when the thermostat setting was changed from 26.5 °C to 27.5 °C. Our findings also showed that installation of solar panels in Townsville houses could reduce electricity demand from the grid and would have a minor impact on the yearly peak demand. A key new feature of the model is that it can be used to predict the probability distribution of energy demand considering (a) that appliances may be used randomly and (b) the way people use thermostats. The peak demand for the FY estimated from the probability distribution tracked the actual peak demand at the 97% confidence level. - Highlights: • We developed a model to estimate housing stock energy consumption and peak demand. • Appliances used randomly and thermostat settings for space cooling were considered. • On-site installation of solar panels was also considered. • Its results agree well with the actual electricity consumption and peak demand. • It shows the model could provide the probability distribution of electricity demand
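
    The "appliances used randomly" idea lends itself to a simple Monte Carlo reading: draw, for many stochastic peak hours, which appliances are on across the stock, and report a percentile of the resulting demand distribution. The sketch below is illustrative only; the appliance ratings and on-probabilities are hypothetical, not the model's calibrated inputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # (rated power in kW, probability of being on during the peak hour)
    appliances = [(3.5, 0.60),   # air conditioner at a 26.5 C thermostat setting
                  (2.0, 0.15),   # cooktop
                  (0.5, 0.30),   # refrigerator duty cycle
                  (1.0, 0.10)]   # clothes dryer

    def simulate_peak_demand(n_households=10_000, n_draws=2_000):
        """Distribution of coincident stock demand (MW) over stochastic draws."""
        demand_mw = np.empty(n_draws)
        for k in range(n_draws):
            total_kw = sum(kw * rng.binomial(n_households, p_on)
                           for kw, p_on in appliances)
            demand_mw[k] = total_kw / 1000.0
        return demand_mw

    d = simulate_peak_demand()
    print(np.percentile(d, 97))  # a 97th-percentile peak-demand estimate
    ```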

  16. Public Health Benefits of End-Use Electrical Energy Efficiency in California: An Exploratory Study

    Energy Technology Data Exchange (ETDEWEB)

    McKone, Thomas E.; Lobscheid, A.B.

    2006-06-01

    This study assesses, for California, how increasing end-use electrical energy efficiency by installing residential insulation impacts exposures and disease burden from power-plant pollutant emissions. Installation of fiberglass attic insulation in the nearly 3 million electricity-heated homes throughout California is used as a case study. The pollutants nitrogen oxides (NOx), sulfur dioxide (SO2), fine particulate matter (PM2.5), benzo(a)pyrene, benzene, and naphthalene are selected for the assessment. Exposure is characterized separately for rural and urban environments using the CalTOX model, which is a key input to the US Environmental Protection Agency (EPA) Tool for the Reduction and Assessment of Chemicals and other environmental Impacts (TRACI). The output of CalTOX provides, for urban and rural populations, emissions-to-intake factors, which are expressed as an individual intake fraction (iFi). The typical iFi from power-plant emissions is on the order of 10^-13 (g intake per g emitted) in both urban and rural regions. The cumulative (rural and urban) product of emissions, population, and iFi is combined with toxic-effects factors to determine human damage factors (HDFs). HDFs are expressed as disability-adjusted life years (DALYs) per kilogram of pollutant emitted. The HDF approach is applied to the insulation case study. Upgrading existing residential insulation to US Department of Energy (DOE) recommended levels eliminates, over the assumed 50-year lifetime of the insulation, an estimated 1000 DALYs from power-plant emissions per million tonnes (Mt) of insulation installed, mostly from the elimination of PM2.5 emissions. In comparison, the estimated burden from the manufacture of this insulation, in DALYs per Mt, is roughly four orders of magnitude lower than that avoided.

  17. Significant ELCAP analysis results: Summary report. [End-use Load and Consumer Assessment Program

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, R.G.; Conner, C.C.; Drost, M.K.; Miller, N.E.; Cooke, B.A.; Halverson, M.A.; Lebaron, B.A.; Lucas, R.G.; Jo, J.; Richman, E.E.; Sandusky, W.F. (Pacific Northwest Lab., Richland, WA (USA)); Ritland, K.G. (Ritland Associates, Seattle, WA (USA)); Taylor, M.E. (USDOE Bonneville Power Administration, Portland, OR (USA)); Hauser, S.G. (Solar Energy Research Inst., Golden, CO (USA))

    1991-02-01

    The evolution of the End-Use Load and Consumer Assessment Program (ELCAP) since 1983 at Bonneville Power Administration (Bonneville) has been eventful and somewhat tortuous. The birth pangs of a data set so large and encompassing as this have been overwhelming at times. The early adolescent stage of data set development and use has now been reached and preliminary results of early analyses of the data are becoming well known. However, the full maturity of the data set and the corresponding wealth of analytic insights are not fully realized. This document is in some sense a milestone in the brief history of the program. It is a summary of the results of the first five years of the program, principally containing excerpts from a number of previous reports. It is meant to highlight significant accomplishments and analytical results, with a focus on the principal results. Many of the results have a broad application in the utility load research community in general, although the real breadth of the data set remains largely unexplored. The first section of the document introduces the data set: how the buildings were selected, how the metering equipment was installed, and how the data set has been prepared for analysis. Each of the sections that follow the introduction summarize a particular analytic result. A large majority of the analyses to date involve the residential samples, as these were installed first and had highest priority on the analytic agenda. Two exploratory analyses using commercial data are included as an introduction to the commercial analyses that are currently underway. Most of the sections reference more complete technical reports which the reader should refer to for details of the methodology and for more complete discussion of the results. Sections have been processed separately for inclusion on the data base.

  18. Techno-economic analysis for the evaluation of three UCG synthesis gas end use approaches

    Science.gov (United States)

    Nakaten, Natalie; Kempka, Thomas; Burchart-Korol, Dorota; Krawczyk, Piotr; Kapusta, Krzysztof; Stańczyk, Krzysztof

    2016-04-01

    Underground coal gasification (UCG) enables the utilization of coal reserves that are not economically exploitable because of complex geological boundary conditions. In the present study we investigate UCG as a potentially economic approach for converting deep-seated coals into a synthesis gas and for its application within three different utilization options. Depending on the geological boundary conditions and the chosen gasification agent, UCG synthesis gas is composed of varying amounts of methane, hydrogen, nitrogen, carbon monoxide and carbon dioxide. According to its calorific value, the processed UCG synthesis gas can be utilized in different ways, e.g., for electricity generation in a combined-cycle power plant or for feedstock production making use of its various chemical components. In the present study we analyze UCG synthesis gas utilization economics in the context of clean electricity generation with an integrated carbon capture and storage (CCS) process as well as synthetic fuel and fertilizer production (Kempka et al., 2010), based on a gas composition achieved during an in-situ UCG trial in the Wieczorek Mine. We also consider chemical feedstock production in order to mitigate CO2 emissions. Within a sensitivity analysis of UCG synthesis gas calorific value variations, we produce a range of capital and operational expenditure bandwidths that allow for an economic assessment of different synthesis gas end-use approaches. To carry out the integrated techno-economic assessment of the coupled systems and the sensitivity analysis, we adapted the techno-economic UCG-CCS model developed by Nakaten et al. (2014). Our techno-economic modeling results demonstrate that the calorific value has a high impact on the economics of UCG synthesis gas utilization. In the underlying study, the synthesis gas is not suitable for economically competitive electricity generation, due to the relatively low calorific value of 4.5 MJ/Nm³. To be a profitable option for electricity

  19. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood–Brain Barrier

    Directory of Open Access Journals (Sweden)

    Ali Kafash Hoshiar

    2017-12-01

    The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogrammed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed, and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme.

  20. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    In the travel demand modeling field, mode choice is the most important decision affecting the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria through calibrating a disaggregate, linear-in-parameters, utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes were introduced, including relative travel cost (cost divided by income), log(travel time), and the combination of the two transformations together. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of the 20 models estimated, only 2 are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistic values). The determination of the VoT also serves in model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, against the hourly salary of LE 10.9/hr. The two proposed models prove to be sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
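
    A minimal sketch of the estimation idea follows: maximum-likelihood calibration of a linear-in-parameters binary logit, with the VoT recovered as the ratio of the time and cost coefficients. The tiny data set is hypothetical, and scipy stands in for the spreadsheet optimizer used in the paper:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(beta, dt, dc, y):
        """Binary logit: utility difference V = b_time*dt + b_cost*dc, where
        dt, dc are mode-1-minus-mode-2 differences in time (min) and cost (LE),
        and y = 1 if mode 1 was chosen."""
        v = beta[0] * dt + beta[1] * dc
        p = 1.0 / (1.0 + np.exp(-v))
        eps = 1e-12  # guard against log(0)
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Hypothetical mode-choice observations:
    dt = np.array([-15.0, -5.0, 10.0, -20.0, 8.0, -12.0, -3.0, 5.0])
    dc = np.array([2.0, 1.0, -1.5, 3.0, -0.5, 1.5, 0.5, -1.0])
    y = np.array([1, 1, 0, 1, 0, 1, 0, 1])

    res = minimize(neg_log_likelihood, x0=np.zeros(2), args=(dt, dc, y))
    b_time, b_cost = res.x
    vot_per_hour = 60.0 * b_time / b_cost  # LE/hr, ratio of marginal utilities
    ```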

  1. Analysis of Fuel Cell Markets in Japan and the US: Experience Curve Development and Cost Reduction Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Max [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Smith, Sarah J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-15

    Fuel cells are both a longstanding and emerging technology for stationary and transportation applications, and their future use will likely be critical for the deep decarbonization of global energy systems. As we look into future applications, a key challenge for policy-makers and technology market forecasters who seek to track and/or accelerate their market adoption is the ability to forecast market costs of the fuel cells as technology innovations are incorporated into market products. Specifically, there is a need to estimate technology learning rates, which are rates of cost reduction versus production volume. Unfortunately, no literature exists for forecasting future learning rates for fuel cells. In this paper, we look retrospectively to estimate learning rates for two fuel cell deployment programs: (1) the micro-combined heat and power (CHP) program in Japan, and (2) the Self-Generation Incentive Program (SGIP) in California. These two examples have a relatively broad set of historical market data and thus provide an informative and international comparison of distinct fuel cell technologies and government deployment programs. We develop a generalized procedure for disaggregating experience-curve cost-reductions in order to disaggregate the Japanese fuel cell micro-CHP market into its constituent components, and we derive and present a range of learning rates that may explain observed market trends. Finally, we explore the differences in the technology development ecosystem and market conditions that may have contributed to the observed differences in cost reduction and draw policy observations for the market adoption of future fuel cell technologies. The scientific and policy contributions of this paper are the first comparative experience curve analysis of past fuel cell technologies in two distinct markets, and the first quantitative comparison of a detailed cost model of fuel cell systems with actual market data. The resulting approach is applicable to
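
    The experience-curve mechanics referred to above follow the standard single-factor form; a minimal sketch with hypothetical shipment/cost data (the fitted log-log slope converts to a learning rate, the fractional cost drop per doubling of cumulative production):

    ```python
    import numpy as np

    def fit_learning_rate(cum_production, unit_cost):
        """Fit C(x) = C0 * x**(-b) by least squares in log-log space and
        return the learning rate LR = 1 - 2**(-b)."""
        slope, _ = np.polyfit(np.log(cum_production), np.log(unit_cost), 1)
        return 1.0 - 2.0 ** slope  # slope = -b

    # Hypothetical cumulative units and unit costs for a micro-CHP product:
    x = np.array([1e3, 2e3, 4e3, 8e3, 16e3])
    c = np.array([30.0, 25.5, 21.7, 18.4, 15.6])  # k$/unit
    print(fit_learning_rate(x, c))  # about 0.15, i.e. a 15% learning rate
    ```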

  2. Specific effect of the linear charge density of the acid polysaccharide on thermal aggregation/ disaggregation processes in complex carrageenan/lysozyme systems

    NARCIS (Netherlands)

    Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.

    2017-01-01

    We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex

  3. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets.

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-12-01

    National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0-92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women's access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve an equitable

  4. Genotype, environment, seeding rate, and top-dressed nitrogen effects on end-use quality of modern Nebraska winter wheat.

    Science.gov (United States)

    Bhatta, Madhav; Regassa, Teshome; Rose, Devin J; Baenziger, P Stephen; Eskridge, Kent M; Santra, Dipak K; Poudel, Rachana

    2017-12-01

    Fine-tuning production inputs such as seeding rate, nitrogen (N), and genotype may improve the end-use quality of hard red winter wheat (Triticum aestivum L.) when growing conditions are unpredictable. Studies were conducted at the Agronomy Research Farm (ARF; Lincoln, NE, USA) and the High Plains Agricultural Laboratory (HPAL; Sidney, NE, USA) in 2014 and 2015 in Nebraska, USA, to determine the effects of genotype (6), environment (4), seeding rate (3), and flag-leaf top-dressed N (0 and 34 kg N ha^-1) on the end-use quality of winter wheat. End-use quality traits were influenced by environment, genotype, seeding rate, top-dressed N, and their interactions. Mixograph parameters had a strong correlation with grain volume weight and flour yield. Doubling the recommended seeding rate and N at the flag leaf stage increased grain protein content by 8.1% in 2014 and 1.5% in 2015 at ARF, and by 4.2% in 2014 and 8.4% in 2015 at HPAL. The key finding of this research is that increasing seeding rates up to double the current recommendations, with N at the flag leaf stage, improved most of the end-use quality traits. This will have a significant effect on the premium for protein a farmer could receive when marketing wheat. © 2017 Society of Chemical Industry.

  5. Development of Timber Property Classification Based on the End-Use with Reference to Twenty Sri Lankan Timber Species

    Directory of Open Access Journals (Sweden)

    ND Ruwanpathirana

    2014-06-01

    An investigation was carried out on 20 selected timber species of Sri Lanka to study different wood properties, i.e., wood density, modulus of rupture, modulus of elasticity, compression parallel to grain, shrinkage/movement, workability (sawing, nailing, sanding and finishing), treatability with preservative, timber durability, timber texture by vessel diameter and some gross properties, timber colour, and present timber uses. Based on the results, an attempt was made to classify the studied timber species into property levels. The final objective of this study was to develop relationships between the end-uses of timber and their property requirements and levels with reference to 20 Sri Lankan timber species. Timber selection for use in Sri Lanka is species-oriented and sometimes based on traditional use. Based on the wood properties of the 20 selected Sri Lankan timber species, an attempt was made to recognise the most important wood properties and their levels in order to develop a four-class end-use property classification. In general, the proposed end-use property classification in this study could be differentiated as (i) building construction, (ii) furniture and joinery, (iii) light construction, and (iv) miscellaneous uses. Among the selected timber species, Dipterocarpus zeylanicus is eminently suitable for under-water work. Eucalyptus microcorys is regarded as one of the best timbers for dancing floors. These specialty and causative factors of timber, however, must be explored and documented in order to prepare an end-use property classification for miscellaneous use.

  6. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  7. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  8. Analysis of PG&E's residential end-use metered data to improve electricity demand forecasts -- final report

    Energy Technology Data Exchange (ETDEWEB)

    Eto, J.H.; Moezzi, M.M.

    1993-12-01

    This report summarizes findings from a unique project to improve the end-use electricity load shape and peak demand forecasts made by the Pacific Gas and Electric Company (PG&E) and the California Energy Commission (CEC). First, the direct incorporation of end-use metered data into electricity demand forecasting models is a new approach that has only been made possible by recent end-use metering projects. Second, and perhaps more importantly, the joint-sponsorship of this analysis has led to the development of consistent sets of forecasting model inputs. That is, the ability to use a common data base and similar data treatment conventions for some of the forecasting inputs frees forecasters to concentrate on those differences (between their competing forecasts) that stem from real differences of opinion, rather than differences that can be readily resolved with better data. The focus of the analysis is residential space cooling, which represents a large and growing demand in the PG&E service territory. Using five years of end-use metered, central air conditioner data collected by PG&E from over 300 residences, we developed consistent sets of new inputs for both PG&E's and CEC's end-use load shape forecasting models. We compared the performance of the new inputs both to the inputs previously used by PG&E and CEC, and to a second set of new inputs developed to take advantage of a recently added modeling option to the forecasting model. The testing criteria included ability to forecast total daily energy use, daily peak demand, and demand at 4 P.M. (the most frequent hour of PG&E's system peak demand). We also tested the new inputs with the weather data used by PG&E and CEC in preparing their forecasts.

  9. Energy conservation: policy issues and end-use scenarios of savings potential. Part IV. Energy-efficient recreational travel

    Energy Technology Data Exchange (ETDEWEB)

    Benson, P.; Codina, R.; Cornwall, B.

    1978-09-01

    The guidelines laid out for the five subjects investigated in this series are to take a holistic view of energy conservation policies by describing the overall system in which they are implemented; provide analytical tools and sufficiently disaggregated data bases that can be adapted to answer a variety of questions by the users; identify and discuss some of the important issues behind successful energy conservation policy; and develop an energy conservation policy in depth. This report contains the design of a specific policy that addresses energy conservation in recreational travel. The policy is denoted as an "Information System for the National Park Service." This work is based on prior examination of the characteristics of the recreational trip and decision making for the recreational experience. The examination revealed which aspects of the recreational travel system needed to be addressed to encourage energy-efficient modal decisions for recreational travel. This policy is briefly described in Section 1, the "Summary of Initiative." A more detailed discussion of the policy follows. The material which led to the policy's formation is developed in Section 2: Importance and Impact of the Recreational Trip; Weekend Travel; The Flowchart: Decision Making for the Recreational Experience; Policy Development for Phase 1 "Planning the Trip;" and Objectives and Strategies for "Planning the Trip." (MCW)

  10. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be known a priori for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  11. An initial assessment of a SMAP soil moisture disaggregation scheme using TIR surface evaporation data over the continental United States

    Science.gov (United States)

    Mishra, Vikalp; Ellenburg, W. Lee; Griffin, Robert E.; Mecikalski, John R.; Cruise, James F.; Hain, Christopher R.; Anderson, Martha C.

    2018-06-01

    The Soil Moisture Active Passive (SMAP) mission is dedicated to global soil moisture mapping. Typically, an L-band microwave radiometer has a spatial resolution on the order of 36-40 km, which is too coarse for many specific hydro-meteorological and agricultural applications. With the failure of the SMAP active radar within three months of becoming operational, an intermediate (9-km) and finer (3-km) scale soil moisture product solely from the SMAP mission is no longer possible. Therefore, the focus of this study is a disaggregation of the 36-km resolution SMAP passive-only surface soil moisture (SSM) using the Soil Evaporative Efficiency (SEE) approach to spatial scales of 3 km and 9 km. The SEE was computed using thermal-infrared (TIR) estimates of surface evaporation over the Continental U.S. (CONUS). The disaggregation results were compared with the 3 months of SMAP-Active (SMAP-A) and Active/Passive (AP) products, while comparisons with SMAP-Enhanced (SMAP-E), SMAP-Passive (SMAP-P), as well as with more than 180 Soil Climate Analysis Network (SCAN) stations across CONUS, were performed for a 19-month period. At the 9-km spatial scale, the TIR-Downscaled data correlated strongly with the SMAP-E SSM both spatially (r = 0.90) and temporally (r = 0.87). In comparison with SCAN observations, overall correlations of 0.49 and 0.47, biases of -0.022 and -0.019, and unbiased RMSDs of 0.105 and 0.100 were found for SMAP-E and TIR-Downscaled SSM across the Continental U.S., respectively. At the 3-km scale, TIR-Downscaled and SMAP-A had a mean temporal correlation of only 0.27. In terms of gain statistics, the highest percentage of SCAN sites with positive gains (>55%) was observed with the TIR-Downscaled SSM at 9 km. Overall, the TIR-based downscaled SSM showed strong correspondence with SMAP-E; compared to SCAN, both SMAP-E and TIR-Downscaled performed similarly overall; however, gain statistics show that TIR-Downscaled SSM slightly outperformed SMAP-E.
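
    A minimal sketch of what a SEE-based disaggregation step can look like, in the spirit of DisPATCh-type schemes: the coarse SMAP retrieval is redistributed over its fine-scale pixels according to each pixel's departure from the cell-mean evaporative efficiency. The constant gain and the clipping bounds are hypothetical simplifications; in practice the sensitivity d(SSM)/d(SEE) would come from a SEE(SSM) model evaluated per cell:

    ```python
    import numpy as np

    def disaggregate_ssm(ssm_coarse, see_fine, gain=0.3):
        """Redistribute one 36-km SSM value (m3/m3) over its 3- or 9-km
        pixels using TIR-derived soil evaporative efficiency (SEE, 0..1)."""
        see = np.asarray(see_fine, dtype=float)
        ssm_fine = ssm_coarse + gain * (see - see.mean())
        return np.clip(ssm_fine, 0.0, 0.6)  # keep values physically plausible

    # Hypothetical 4x4 SEE field inside one coarse cell:
    see = np.array([[0.2, 0.3, 0.5, 0.6],
                    [0.2, 0.4, 0.5, 0.7],
                    [0.1, 0.3, 0.4, 0.6],
                    [0.2, 0.3, 0.5, 0.6]])
    print(disaggregate_ssm(0.25, see))
    ```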

  12. Empirical models for end-use properties prediction of LDPE: application in the flexible plastic packaging industry

    Directory of Open Access Journals (Sweden)

    Maria Carolina Burgos Costa

    2008-03-01

    Full Text Available The objective of this work is to develop empirical models to predict end-use properties of low-density polyethylene (LDPE) resins as functions of two intrinsic properties easily measured in the polymer industry. The most important properties for application in the flexible plastic packaging industry were evaluated experimentally for seven commercial polymer grades. Statistical correlation analysis was performed for all variables and used as the basis for proper choice of inputs to each model output. The intrinsic properties selected for resin characterization are the fluidity index (FI), which is essentially an indirect measurement of viscosity and of weight-average molecular weight (MW), and density. In general, the models developed are able to reproduce and predict experimental data within experimental accuracy and show that a significant number of end-use properties improve as the MW and density increase. Optical properties are mainly determined by the polymer morphology.
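
    A model of this kind reduces to a small regression. The sketch below fits one end-use property against log(FI) (a quantity inversely related to MW) and density by least squares; every number is a synthetic placeholder rather than data from the seven grades studied.

    ```python
    import numpy as np

    # Synthetic grade data: fluidity index [g/10 min], density [g/cm3], and one
    # end-use property (e.g. an impact-resistance proxy) -- all invented.
    fi = np.array([0.3, 0.7, 2.0, 4.0, 7.0, 20.0, 30.0])
    density = np.array([0.918, 0.920, 0.922, 0.923, 0.924, 0.925, 0.927])
    prop = np.array([55.0, 52.0, 48.0, 45.0, 42.0, 36.0, 33.0])

    # property = b0 + b1*log(FI) + b2*density, fit by ordinary least squares.
    X = np.column_stack([np.ones_like(fi), np.log(fi), density])
    coef, *_ = np.linalg.lstsq(X, prop, rcond=None)
    predicted = X @ coef
    print(coef, np.max(np.abs(predicted - prop)))
    ```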

  13. A housing stock model of non-heating end-use energy in England verified by aggregate energy use data

    International Nuclear Information System (INIS)

    Lorimer, Stephen

    2012-01-01

    This paper proposes a housing stock model of non-heating end-use energy for England that can be verified using aggregate energy use data available for small areas. These end-uses, commonly referred to as appliances and lighting, are a rapidly increasing part of residential energy demand. First, the model can be verified using aggregated data from electricity meters in small areas together with census data on housing. Second, any differences that open up between major housing surveys could potentially be resolved by using data from frequently updated expenditure surveys. For the year 2008, the model overestimated domestic non-heating energy use at the national scale by 1.5%. The model was then applied to the residential sector with various area classifications, which found that rural and suburban areas were generally underestimated by up to 3.3% and urban areas overestimated by up to 5.2%, with the notable exception of “professional city life” classifications. The model proposed in this paper has the potential to be a verifiable and adaptable model for non-heating end-use energy in households in England for the future. - Highlights: ► Housing stock energy model was developed for end-uses outside of heating for the UK context. ► This entailed changes to the building energy model that serves as the bottom of the stock model. ► The model is adaptable to reflect rapid changes in consumption between major housing surveys. ► Verification was done against aggregated consumption data and for the first time uses a measured size of the housing stock. ► The verification process revealed spatial variations in consumption patterns for future research.

  14. FIXED ASSETS WITH AN OPEN-ENDED USEFUL LIFE AS A NEW OBJECT OF ACCOUNTING IN THE SHIP REPAIR INDUSTRY

    Directory of Open Access Journals (Sweden)

    Olga Zharikova

    2015-09-01

    Full Text Available The industry-specific factors that justify recognizing a new object of accounting within the property, plant and equipment of ship repair organizations are identified. The specific features of salvaging operations for hydraulic structures, which affect the organization and methodology of accounting for these objects, are described. A definition of "fixed assets with an open-ended useful life" is proposed.

  15. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  16. Improving energy efficiency and smart grid program analysis with agent-based end-use forecasting models

    International Nuclear Information System (INIS)

    Jackson, Jerry

    2010-01-01

    Electric utilities and regulators face difficult challenges evaluating new energy efficiency and smart grid programs prompted, in large part, by recent state and federal mandates and financial incentives. It is increasingly difficult to separate the electricity use impacts of individual utility programs from the impacts of increasingly stringent appliance and building efficiency standards, increasing electricity prices, appliance manufacturer efficiency improvements, energy program interactions and other factors. This study reviews traditional approaches used to evaluate electric utility energy efficiency and smart-grid programs and presents an agent-based end-use modeling approach that resolves many of the shortcomings of traditional approaches. Data for a representative sample of utility customers in a Midwestern US utility are used to evaluate energy efficiency and smart grid program targets over a fifteen-year horizon. Model analysis indicates that a combination of the two least stringent efficiency and smart grid program scenarios provides peak hour reductions one-third greater than the most stringent smart grid program alone, suggesting that reductions in peak demand requirements are more feasible when both efficiency and smart grid programs are considered together. Suggestions on transitioning from traditional end-use models to agent-based end-use models are provided.
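
    The contrast with traditional end-use models can be made concrete with a stripped-down agent sketch: each household agent applies a payback rule when deciding whether to adopt an efficient appliance, and program scenarios change the price and incentive inputs. All parameters below are invented for illustration and are far simpler than the study's model.

    ```python
    import random

    random.seed(2)

    class Household:
        """One customer agent: adopts an efficient appliance on a payback rule."""
        def __init__(self):
            self.efficient = False
            self.base_kwh = random.uniform(8000, 14000)    # annual use, assumed range

        def step(self, price, incentive):
            if self.efficient:
                return
            annual_savings = 0.15 * self.base_kwh * price  # $/yr from a 15% savings
            payback_years = max(400.0 - incentive, 0.0) / annual_savings
            if payback_years < 3.0 and random.random() < 0.25:   # behavioral inertia
                self.efficient = True

        @property
        def kwh(self):
            return self.base_kwh * (0.85 if self.efficient else 1.0)

    agents = [Household() for _ in range(1000)]
    for year in range(15):                                 # fifteen-year horizon
        for h in agents:
            h.step(price=0.12, incentive=50.0)
    print(sum(h.kwh for h in agents) / 1e6, "GWh")
    ```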

  17. Changes in Food Intake in Australia: Comparing the 1995 and 2011 National Nutrition Survey Results Disaggregated into Basic Foods.

    Science.gov (United States)

    Ridoutt, Bradley; Baird, Danielle; Bastiaans, Kathryn; Hendrie, Gilly; Riley, Malcolm; Sanguansri, Peerasak; Syrette, Julie; Noakes, Manny

    2016-05-25

    As nations seek to address obesity and diet-related chronic disease, understanding shifts in food intake over time is an imperative. However, quantifying intake of basic foods is not straightforward because of the diversity of raw and cooked wholefoods, processed foods and mixed dishes actually consumed. In this study, data from the Australian national nutrition surveys of 1995 and 2011, each involving more than 12,000 individuals and covering more than 4500 separate foods, were coherently disaggregated into basic foods, with cooking and processing factors applied where necessary. Although Australians are generally not eating in a manner consistent with national dietary guidelines, there have been several positive changes. Australians are eating more whole fruit, a greater diversity of vegetables, more beans, peas and pulses, less refined sugar, and they have increased their preference for brown and wholegrain cereals. Adult Australians have also increased their intake of nuts and seeds. Fruit juice consumption markedly declined, especially for younger Australians. Cocoa consumption increased and shifts in dairy product intake were mixed, reflecting one of several important differences between age and gender cohorts. This study sets the context for more detailed research at the level of specific foods to understand individual and household differences.

  18. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlemann, Verena [Emory Univ., Atlanta, GA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Furthermore, computations with graphs of this type on parallel distributed-memory computers are challenging. This is due to the fact that scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication requirements during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are performed via a factored triple matrix-vector product involving the embedding graph. And even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
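
    To make the factored product concrete, the toy below splits the hub of a 4-vertex star graph into two copies. P maps original vertices to their copies and each original edge is assigned to exactly one copy, so the original adjacency product A·x is recovered as Pᵀ(B(P·x)). The construction is an illustrative assumption, not the partitioning used by the authors.

    ```python
    import numpy as np

    # Star graph: hub 0 connected to vertices 1, 2, 3.
    A = np.zeros((4, 4))
    for v in (1, 2, 3):
        A[0, v] = A[v, 0] = 1.0

    # Embedding: hub split into copies 0a, 0b (embedding vertices 0 and 1);
    # vertices 1, 2, 3 become embedding vertices 2, 3, 4.
    # P[c, u] = 1 if embedding vertex c is a copy of original vertex u.
    P = np.zeros((5, 4))
    P[0, 0] = P[1, 0] = 1.0
    P[2, 1] = P[3, 2] = P[4, 3] = 1.0

    # Each original edge is handled by exactly one copy of the hub.
    B = np.zeros((5, 5))
    B[0, 2] = B[2, 0] = 1.0    # edge (0,1) -> copy 0a
    B[0, 3] = B[3, 0] = 1.0    # edge (0,2) -> copy 0a
    B[1, 4] = B[4, 1] = 1.0    # edge (0,3) -> copy 0b

    x = np.array([1.0, 2.0, 3.0, 4.0])
    assert np.allclose(A @ x, P.T @ (B @ (P @ x)))   # factored triple product
    ```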

  19. Human papillomavirus vaccine initiation in Asian Indians and Asian subpopulations: a case for examining disaggregated data in public health research.

    Science.gov (United States)

    Budhwani, H; De, P

    2017-12-01

    Vaccine disparities research often focuses on differences between the five main racial and ethnic classifications, ignoring heterogeneity of subpopulations. Considering this knowledge gap, we examined human papillomavirus (HPV) vaccine initiation in Asian Indians and Asian subpopulations. National Health Interview Survey data (2008-2013), collected by the National Center for Health Statistics, were analyzed. Multiple logistic regression analysis was conducted on adults aged 18-26 years (n = 20,040). Asian Indians had high income, education, and health insurance coverage, all positive predictors of preventative health engagement and vaccine uptake. However, we find that Asian Indians had comparatively lower rates of HPV vaccine initiation (odds ratio = 0.41; 95% confidence interval = 0.207-0.832), and foreign-born Asian Indians had the lowest rate of HPV vaccination of all subpopulations (2.3%). Findings substantiate the need for research on disaggregated data rather than evaluating vaccination behaviors solely across standard racial and ethnic categories. We identified two populations that were initiating HPV vaccine at abysmal levels: foreign-born persons and Asian Indians. Development of culturally appropriate messaging has the potential to improve these initiation rates and improve population health.

  20. Changes in Food Intake in Australia: Comparing the 1995 and 2011 National Nutrition Survey Results Disaggregated into Basic Foods

    Directory of Open Access Journals (Sweden)

    Bradley Ridoutt

    2016-05-01

    Full Text Available As nations seek to address obesity and diet-related chronic disease, understanding shifts in food intake over time is an imperative. However, quantifying intake of basic foods is not straightforward because of the diversity of raw and cooked wholefoods, processed foods and mixed dishes actually consumed. In this study, data from the Australian national nutrition surveys of 1995 and 2011, each involving more than 12,000 individuals and covering more than 4500 separate foods, were coherently disaggregated into basic foods, with cooking and processing factors applied where necessary. Although Australians are generally not eating in a manner consistent with national dietary guidelines, there have been several positive changes. Australians are eating more whole fruit, a greater diversity of vegetables, more beans, peas and pulses, less refined sugar, and they have increased their preference for brown and wholegrain cereals. Adult Australians have also increased their intake of nuts and seeds. Fruit juice consumption markedly declined, especially for younger Australians. Cocoa consumption increased and shifts in dairy product intake were mixed, reflecting one of several important differences between age and gender cohorts. This study sets the context for more detailed research at the level of specific foods to understand individual and household differences.

  1. Hyperforin prevents beta-amyloid neurotoxicity and spatial memory impairments by disaggregation of Alzheimer's amyloid-beta-deposits.

    Science.gov (United States)

    Dinamarca, M C; Cerpa, W; Garrido, J; Hancke, J L; Inestrosa, N C

    2006-11-01

    The major protein constituent of amyloid deposits in Alzheimer's disease (AD) is the amyloid beta-peptide (Abeta). In the present work, we have determined the effect of hyperforin, an acylphloroglucinol compound isolated from Hypericum perforatum (St John's Wort), on Abeta-induced spatial memory impairments and on Abeta neurotoxicity. We report here that hyperforin: (1) decreases amyloid deposit formation in rats injected with amyloid fibrils in the hippocampus; (2) decreases the neuropathological changes and behavioral impairments in a rat model of amyloidosis; (3) prevents Abeta-induced neurotoxicity in hippocampal neurons from both amyloid fibrils and Abeta oligomers, avoiding the increase in reactive oxygen species associated with amyloid toxicity. Both effects could be explained by the capacity of hyperforin to disaggregate amyloid deposits in a dose- and time-dependent manner and to decrease Abeta aggregation and amyloid formation. Altogether this evidence suggests that hyperforin may be useful to decrease amyloid burden and toxicity in AD patients, and may be a putative therapeutic agent to fight the disease.

  2. Disaggregation of SMOS soil moisture over West Africa using the Temperature and Vegetation Dryness Index based on SEVIRI land surface parameters

    DEFF Research Database (Denmark)

    Tagesson, T.; Horion, S.; Nieto, H.

    2018-01-01

    …the Temperature and Vegetation Dryness Index (TVDI) that served as SM proxy within the disaggregation process. West Africa (3°N, 26°W; 28°N, 26°E) was selected as a case study as it presents both an important North-South climate gradient and a diverse range of ecosystem types. The main challenge was to set up… resolution of SMOS SM, with potential application for local drought/flood monitoring of importance for the livelihood of the population of West Africa.

  3. NIR-Red Spectra-Based Disaggregation of SMAP Soil Moisture to 250 m Resolution Based on SMAPEx-4/5 in Southeastern Australia

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-01-01

    Full Text Available To meet the demand of regional hydrological and agricultural applications, a new method named near-infrared-red (NIR-red) spectra-based disaggregation (NRSD) was proposed to perform a disaggregation of Soil Moisture Active Passive (SMAP) products from 36-km to 250-m resolution. The NRSD combined the proposed normalized soil moisture index (NSMI) with SMAP data to obtain 250-m resolution soil moisture mapping. The experiment was conducted in southeastern Australia during the SMAP Experiments (SMAPEx) 4/5 and validated with the in situ SMAPEx network. Results showed that NRSD performed a decent downscaling (root-mean-square error (RMSE) = 0.04 m³/m³ and 0.12 m³/m³ during SMAPEx-4 and SMAPEx-5, respectively). Based on the validation, it was found that the proposed NSMI is a new alternative indicator for denoting the heterogeneity of soil moisture at sub-kilometer scales. Attributed to the excellent performance of the NSMI, NRSD has a higher overall accuracy, finer spatial representation within SMAP pixels and wider applicable scope on usability tests for land cover, vegetation density and drought condition than the disaggregation based on physical and theoretical scale change (DISPATCH) has at 250-m resolution. This revealed that the NRSD method is expected to provide soil moisture mapping at 250-m resolution for large-scale hydrological and agricultural studies.

  4. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  5. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...

  6. Emissions from residential combustion considering end-uses and spatial constraints: Part I, methods and spatial distribution

    Science.gov (United States)

    Winijkul, Ekbordin; Fierce, Laura; Bond, Tami C.

    2016-01-01

    This study describes a framework to attribute national-level atmospheric emissions in the year 2010 from the residential sector, one of the largest energy-related sources of aerosol emissions. We place special emphasis on end-uses, dividing usage into cooking, heating, lighting, and others. This study covers regions where solid biomass fuel provides more than 50% of total residential energy: Latin America, Africa, and Asia (5.2 billion people in 2010). Using nightlight data and population density, we classify five land types: urban, electrified rural with forest access, electrified rural without forest access, non-electrified rural with forest access, and non-electrified rural without forest access. We then apportion national-level residential fuel consumption among all land-types and end-uses, and assign end-use technologies to each combination. The resulting calculation gives spatially-distributed emissions of particulate matter, black carbon, organic carbon, nitrogen oxides, methane, non-methane hydrocarbons, carbon monoxide, and carbon dioxide. Within this study region, about 13% of the energy is consumed in urban areas, and 45% in non-urban land near forests. About half the energy is consumed in land without access to electricity. Cooking accounts for 54% of the consumption, heating for 9%, and lighting only 2%, with unidentified uses making up the remainder. Because biofuel use is assumed to occur preferentially where wood is accessible and electricity is not, our method shifts emissions to land types without electrification, compared with previous methods. The framework developed here is an important first step in acknowledging the role of household needs and local constraints in choosing energy provision. Although data and relationships described here need further development, this structure offers a more physically-based understanding of residential energy choices and, ultimately, opportunities for emission reduction.

  7. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

    Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supply for urban areas. Therefore, efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Based on a previous Water-Energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model considering technical and behavioral decision variables to obtain the most economical strategies to minimize household water and water-related energy bills given both water and energy price shocks. Results show that adoption rates of less energy-intensive appliances increase significantly, resulting in an overall 20% growth in indoor water conservation if household dwellers include the energy cost of their water use. To analyze the consequences at the utility scale, we develop an hourly water-energy model based on data from East Bay Municipal Utility District (EBMUD) in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, but the 10% that is managed by the utility is worth over $12 million annually. Once the entire end-use + utility model was completed, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and the utility's energy cost by over $70,000/yr. Especially interesting from the utility perspective could be the "smoothing" of water use peaks by avoiding daytime irrigation, which among other benefits might reduce utility energy costs by 0.5% according to our

  8. Pathways to Carbon Neutral Industrial Sectors: Integrated Modelling Approach with High Level of Detail for End-use Processes

    DEFF Research Database (Denmark)

    Industry constitutes a substantial share of the energy and fuel consumption in energy systems. Types and patterns of usage within different industrial sectors are diverse. In this paper, we illustrate the energy and fuel use in Danish industry by 24 end-uses and 20 fuels and provide hourly profiles for electricity, space and process heating. The heat profiles are based on measured natural gas consumption. While seasonal patterns are predominant for space heating, process heating and electricity consumption are found to follow sector-related activities on a temporal scale. Building on this data analysis...

  9. Electronic nicotine delivery system (ENDS) use during smoking cessation: a qualitative study of 40 Oklahoma quitline callers.

    Science.gov (United States)

    Vickerman, Katrina A; Beebe, Laura A; Schauer, Gillian L; Magnusson, Brooke; King, Brian A

    2017-04-01

    Approximately 10% (40 000) of US quitline enrollees who smoke cigarettes report current use of electronic nicotine delivery systems (ENDS); however, little is known about callers' ENDS use. Our aim was to describe why and how quitline callers use ENDS, their beliefs about ENDS and the impact of ENDS use on callers' quit processes and use of FDA-approved cessation medications. Qualitative interviews conducted 1-month postregistration. Interviews were recorded, transcribed, double-coded and analysed to identify themes. Oklahoma Tobacco Helpline. 40 callers aged ≥18 who were seeking help to quit smoking were using ENDS at registration and completed ≥1 programme calls. At 1-month postregistration interview, 80% of callers had smoked cigarettes in the last 7 days, almost two-thirds were using ENDS, and half were using cessation medications. Nearly all believed ENDS helped them quit or cut down on smoking; however, participants were split on whether they would recommend cessation medications, ENDS or both together for quitting. Confusion and misinformation about potential harms of ENDS and cessation medications were reported. Participants reported using ENDS in potentially adaptive ways (eg, using ENDS to cut down and nicotine replacement therapy to quit, and stepping down nicotine in ENDS to wean off ENDS after quitting) and maladaptive ways (eg, frequent automatic ENDS use, using ENDS in situations they did not previously smoke, cutting down on smoking using ENDS without a schedule or plan to quit), which could impact the likelihood of quitting smoking or continuing ENDS use. These qualitative findings suggest quitline callers who use ENDS experience confusion and misinformation about ENDS and FDA-approved cessation medications. Callers also use ENDS in ways that may not facilitate quitting smoking. Opportunities exist for quitlines to educate ENDS users and help them create a coordinated plan most likely to result in completely quitting combustible tobacco

  10. A programmable Si-photonic node for SDN-enabled Bloom filter forwarding in disaggregated data centers

    Science.gov (United States)

    Moralis-Pegios, M.; Terzenidis, N.; Vagionas, C.; Pitris, S.; Chatzianagnostou, E.; Brimont, A.; Zanzi, A.; Sanchis, P.; Marti, J.; Kraft, J.; Rochracher, K.; Dorrestein, S.; Bogdan, M.; Tekin, T.; Syrivelis, D.; Tassiulas, L.; Miliou, A.; Pleros, N.; Vyrsokinos, K.

    2017-02-01

    Programmable switching nodes supporting Software-Defined Networking (SDN) over optical interconnecting technologies arise as a key enabling technology for future disaggregated Data Center (DC) environments. The SDN-enabling roadmap of intra-DC optical solutions is already a reality for rack-to-rack interconnects, with recent research reporting on interesting applications of programmable silicon photonic switching fabrics addressing board-to-board and even on-board applications. In this perspective, simplified information addressing schemes like Bloom filter (BF)-based labels emerge as a highly promising solution for ensuring rapid switch reconfiguration, following quickly the changes enforced in network size, network topology or even in content location. The benefits of BF-based forwarding have been so far successfully demonstrated in the Information-Centric Network (ICN) paradigm, while theoretical studies have also revealed the energy consumption and speed advantages when applied in DCs. In this paper we present for the first time a programmable 4x4 Silicon Photonic switch that supports SDN through the use of BF-labeled router ports. Our scheme significantly simplifies packet forwarding as it negates the need for large forwarding tables, allowing for its remote control through modifications in the assigned BF labels. We demonstrate 1x4 switch operation controlling the Si-Pho switch by a Stratix V FPGA module, which is responsible for processing the packet ID and correlating its destination with the appropriate BF-labeled outgoing port. DAC- and amplifier-less control of the carrier-injection Si-Pho switches is demonstrated, revealing successful switching of 10Gb/s data packets with BF-based forwarding information changes taking place at a time-scale that equals the duration of four consecutive packets.
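
    The forwarding rule itself is compact enough to sketch. In the usual BF scheme, each outgoing link hashes its identifier into a k-bit mask, the packet label is the bitwise OR of the masks along its path, and a node forwards on every port whose mask is contained in the label. The filter width, hash construction, and port names below are assumptions for illustration, not the demonstrated 4x4 switch logic.

    ```python
    import hashlib

    M, K = 64, 3  # filter width in bits and number of hashes (assumed values)

    def link_mask(link_id: str) -> int:
        """Map a link identifier to its K-bit Bloom filter mask."""
        mask = 0
        for i in range(K):
            digest = hashlib.sha256(f"{link_id}:{i}".encode()).digest()
            mask |= 1 << (int.from_bytes(digest, "big") % M)
        return mask

    def forward_ports(packet_bf: int, ports: dict) -> list:
        """Return every port whose link mask is contained in the packet's BF label."""
        return [p for p, mask in ports.items() if packet_bf & mask == mask]

    # A packet labelled with the OR of the masks along its path is forwarded on
    # exactly those ports, modulo the false positives inherent to Bloom filters.
    ports = {p: link_mask(p) for p in ("east", "west", "north", "south")}
    label = ports["east"] | ports["north"]
    print(forward_ports(label, ports))  # expected ['east', 'north'], barring false positives
    ```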

  11. The added value of stochastic spatial disaggregation for short-term rainfall forecasts currently available in Canada

    Science.gov (United States)

    Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René

    2017-11-01

    Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products, such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, the spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data is freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation and/or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10-km) within a forecast framework (6-h). For 30 6-h rainfall events occurring within a 40,000-km² area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated at 100-km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.

  12. A Biodiversity Indicators Dashboard: Addressing Challenges to Monitoring Progress towards the Aichi Biodiversity Targets Using Disaggregated Global Data

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L.; Young, Bruce E.; Brooks, Thomas M.; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H. M.; Larsen, Frank W.; Hamilton, Healy; Hansen, Matthew C.; Turner, Will R.

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's “Aichi Targets”. These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity “dashboard” – a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  13. Disaggregation of remotely sensed soil moisture under all sky condition using machine learning approach in Northeast Asia

    Science.gov (United States)

    Kim, S.; Kim, H.; Choi, M.; Kim, K.

    2016-12-01

    Estimating the spatiotemporal variation of soil moisture is crucial to hydrological applications such as flood, drought, and near-real-time climate forecasting. Recent advances in space-based passive microwave measurements allow frequent monitoring of surface soil moisture at a global scale, and downscaling approaches have been applied to improve the spatial resolution of passive microwave products for local-scale applications. However, most downscaling methods use optical and thermal datasets and are valid only in cloud-free conditions; thus a renewed downscaling method for all-sky conditions is necessary to establish the spatiotemporal continuity of datasets at fine resolution. In the present study, the Support Vector Machine (SVM) technique was utilized to downscale satellite-based soil moisture retrievals. The 0.1- and 0.25-degree resolution daily Land Parameter Retrieval Model (LPRM) L3 soil moisture datasets from the Advanced Microwave Scanning Radiometer 2 (AMSR2) were disaggregated over Northeast Asia in 2015. Optically derived estimates of surface temperature (LST), normalized difference vegetation index (NDVI), and cloud products were obtained from the MODerate Resolution Imaging Spectroradiometer (MODIS) for the purpose of downscaling soil moisture at finer resolution under all-sky conditions. Furthermore, a comparison analysis between in situ and downscaled soil moisture products was conducted to quantitatively assess accuracy. Results showed that downscaled soil moisture under all-sky conditions not only preserves the quality of AMSR2 LPRM soil moisture at 1-km resolution, but also attains higher spatial data coverage. From this research we expect that time-continuous monitoring of soil moisture at fine scale, regardless of weather conditions, will be available.
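
    A minimal sketch of the SVM downscaling logic, assuming the usual regression setup: train on coarse-scale predictor/target pairs, then apply the trained regressor to fine-scale predictors. The scikit-learn pipeline and all arrays below are synthetic placeholders, not the study's configuration.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Coarse-scale training data: LST/NDVI collocated with AMSR2 LPRM soil
    # moisture (synthetic stand-ins with an assumed relationship).
    lst_c = rng.uniform(280, 320, 200)
    ndvi_c = rng.uniform(0.1, 0.8, 200)
    sm_c = 0.45 - 0.001 * (lst_c - 280) + 0.2 * ndvi_c + rng.normal(0, 0.02, 200)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(np.column_stack([lst_c, ndvi_c]), sm_c)

    # Apply the regression on the fine grid; in the all-sky case the gap-free
    # LST/NDVI fields would come from cloud-corrected MODIS products.
    lst_f = rng.uniform(280, 320, 1000)
    ndvi_f = rng.uniform(0.1, 0.8, 1000)
    sm_fine = model.predict(np.column_stack([lst_f, ndvi_f]))
    ```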

  14. L-band brightness temperature disaggregation for use with S-band and C-band radiometer data for WCOM

    Science.gov (United States)

    Yao, P.; Shi, J.; Zhao, T.; Cosh, M. H.; Bindlish, R.

    2017-12-01

    There are two passive microwave sensors onboard the Water Cycle Observation Mission (WCOM): a synthetic aperture radiometer operating at L-, S- and C-bands and a scanning microwave radiometer operating from C- to W-bands. This provides a unique opportunity to disaggregate L-band brightness temperature (soil moisture) with S-band and C-band radiometer data. In this study, passive-only downscaling methodologies are developed and evaluated. Based on radiative transfer modeling, it was found that the TBs (brightness temperatures) at L-band and S-band exhibit a linear relationship, while there is an exponential relationship between L-band and C-band. We carried out the downscaling by two methods: (1) downscaling with L-S-C band passive measurements at the same incidence angle from the payload IMI; (2) downscaling with L-C band passive measurements at different incidence angles from the payloads IMI and PMI. The downscaling method with L-S bands at the same incidence angle was first evaluated using SMEX02 data; the RMSEs are 2.69 K and 1.52 K for H and V polarization, respectively. The downscaling method with L-C bands at different incidence angles was developed using SMEX03 data; the RMSEs are 2.97 K and 2.68 K for H and V polarization, respectively. These results showed that high-resolution L-band brightness temperature and soil moisture products could be generated from the future WCOM passive-only observations.
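
    The linear L-S relationship suggests a simple disaggregation recipe: fit the relation on collocated coarse-scale TBs, apply it to fine-scale S-band TBs, and re-center the result on the coarse L-band observation. The sketch below uses synthetic numbers and is only a schematic reading of the approach; the exponential L-C case would be fit analogously.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Collocated coarse-scale data with an assumed linear L-S relation plus noise.
    tb_s_coarse = rng.uniform(230, 270, 300)
    tb_l_coarse = 0.92 * tb_s_coarse + 18.0 + rng.normal(0, 1.0, 300)
    slope, intercept = np.polyfit(tb_s_coarse, tb_l_coarse, 1)

    # Apply the fit within one coarse cell (16 fine S-band pixels, synthetic),
    # then enforce consistency with that cell's observed L-band TB.
    tb_s_fine = rng.uniform(230, 270, 16)
    tb_l_fine = intercept + slope * tb_s_fine
    tb_l_fine += tb_l_coarse[0] - tb_l_fine.mean()   # preserve the coarse value
    ```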

  15. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L; Young, Bruce E; Brooks, Thomas M; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H M; Larsen, Frank W; Hamilton, Healy; Hansen, Matthew C; Turner, Will R

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of

  16. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Directory of Open Access Journals (Sweden)

    Xuemei Han

    Full Text Available Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  17. Disaggregation of collective dose-a worked example based on future discharges from the Sellafield nuclear fuel reprocessing site, UK

    International Nuclear Information System (INIS)

    Jones, S R; Lambers, B; Stevens, A

    2004-01-01

    Collective dose has long been advocated as an important measure of the detriment associated with practices that involve the use of radioactivity. Application of collective dose in the context of worker protection is relatively straightforward, whereas its application in the context of discharges to the environment can yield radically different conclusions depending upon the population groups and integration times that are considered. The computer program PC-CREAM98 has been used to provide an indicative disaggregation into individual dose bands of the collective dose due to potential future radioactive discharges from the nuclear fuel reprocessing site at Sellafield in the UK. Two alternative discharge scenarios are considered, which represent a 'stop reprocessing early, minimum discharge' scenario and a 'reprocessing beyond current contracts' scenario. For aerial discharges, collective dose at individual effective dose rates exceeding 0.015 μSv y⁻¹ is only incurred within the UK, and at effective dose rates exceeding 1.5 μSv y⁻¹ is only incurred within about 20 km of Sellafield. The geographical distribution of collective dose from liquid discharges is harder to assess, but it appears that collective dose incurred outside the UK is at levels of individual effective dose rate below 1.5 μSv y⁻¹, with the majority being incurred at rates of 0.002 μSv y⁻¹ or less. In multi-attribute utility analyses, the view taken on the radiological detriment to be attributed to the two discharge scenarios will depend critically on the weight or monetary value ascribed to collective doses incurred within the differing bands of individual dose rate.
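
    The banding exercise itself is straightforward to sketch: each exposed cohort contributes its per-caput dose rate times its population to the band containing that dose rate. The cohort numbers below are invented for illustration and do not reproduce the PC-CREAM98 assessment.

    ```python
    import numpy as np

    # Invented cohorts: per-caput effective dose rates (Sv/y) and populations.
    rates = np.array([0.001e-6, 0.01e-6, 0.5e-6, 3.0e-6])
    people = np.array([5e7, 1e7, 1e6, 2e4])

    # Band edges (Sv/y) matching the thresholds quoted in the abstract.
    bands = np.array([0.0, 0.002e-6, 0.015e-6, 1.5e-6, np.inf])

    idx = np.digitize(rates, bands) - 1          # band index of each cohort
    collective = rates * people                  # person-Sv per year per cohort
    per_band = np.array([collective[idx == i].sum() for i in range(len(bands) - 1)])
    print(per_band)                              # collective dose disaggregated by band
    ```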

  18. Not All Large Customers are Made Alike: Disaggregating Response to Default-Service Day-Ahead Market Pricing

    International Nuclear Information System (INIS)

    Hopper, Nicole; Goldman, Charles; Neenan, Bernie

    2006-01-01

    For decades, policymakers and program designers have gone on the assumption that large customers, particularly industrial facilities, are the best candidates for real-time pricing (RTP). This assumption is based partly on practical considerations (large customers can provide potentially large load reductions) but also on the premise that businesses focused on production cost minimization are most likely to participate and respond to opportunities for bill savings. Yet few studies have examined the actual price response of large industrial and commercial customers in a disaggregated fashion, nor have factors such as the impacts of demand response (DR) enabling technologies, simultaneous emergency DR program participation and price response barriers been fully elucidated. This second-phase case study of Niagara Mohawk Power Corporation (NMPC)'s large customer RTP tariff addresses these information needs. The results demonstrate the extreme diversity of large customers' response to hourly varying prices. While two-thirds exhibit some price response, about 20 percent of customers provide 75-80 percent of the aggregate load reductions. Manufacturing customers are most price-responsive as a group, followed by government/education customers, while other sectors are largely unresponsive. However, individual customer response varies widely. Currently, enabling technologies do not appear to enhance hourly price response; customers report using them for other purposes. The New York Independent System Operator (NYISO)'s emergency DR programs enhance price response, in part by signaling to customers that day-ahead prices are high. In sum, large customers do currently provide moderate price response, but there is significant room for improvement through targeted programs that help customers develop and implement automated load-response strategies.

  19. Seasonal fuel consumption, stoves, and end-uses in rural households of the far-western development region of Nepal

    Science.gov (United States)

    Lam, Nicholas L.; Upadhyay, Basudev; Maharjan, Shovana; Jagoe, Kirstie; Weyant, Cheryl L.; Thompson, Ryan; Uprety, Sital; Johnson, Michael A.; Bond, Tami C.

    2017-12-01

    Understanding how fuels and stoves are used to meet a diversity of household needs is an important step in addressing the factors leading to continued reliance on polluting devices, and thereby improving household energy programs. In Nepal and many other countries dependent on solid fuel, efforts to mitigate the impacts of residential solid fuel use have emphasized cooking while focusing less on other solid fuel dependent end-uses. We employed a four-season fuel assessment in a cohort of 110 households residing in two elevation regions of the Far-Western Development Region (Province 7) of Nepal. Household interviews and direct fuel weights were used to assess seasonality in fuel consumption and its association with stoves that met cooking and non-cooking needs. Per-capita fuel consumption in winter was twice that of other measured seasons, on average. This winter increase was attributed to greater prevalence of use and fuel consumption by supplemental stoves, not the main cooking stove. End-use profiles showed that fuel was used in supplemental stoves to meet the majority of non-meal needs in the home, notably water heating and preparation of animal food. This emphasis on fuels, stoves, and the satisfaction of energy needs—rather than just stoves or fuels—leads to a better understanding of the factors leading to device and fuel choice within households.

  20. Planning for the recreational end use of a future LLR waste mound in Canada - Leaving an honourable legacy

    International Nuclear Information System (INIS)

    Kleb, H.R.; Zelmer, R.L.

    2007-01-01

    The Low-Level Radioactive Waste Management Office was established in 1982 to carry out the federal government's responsibilities for low-level radioactive (LLR) waste management in Canada. In this capacity, the Office operates programs to characterize, delineate, decontaminate and consolidate historic LLR waste for interim and long-term storage. The Office is currently the proponent of the Port Hope Area Initiative, a program directed at the development and implementation of a safe, local long-term management solution for historic LLR waste in the Port Hope area. A legal agreement between the Government of Canada and the host community provides the framework for the implementation of the Port Hope Project. Specifically, the agreement requires that the surface of the long-term LLR waste management facility be 'conducive to passive and active recreational uses such as soccer fields and baseball diamonds'. However, there are currently no examples of licensed LLR waste management facilities in Canada that permit recreational use. Such an end use presents challenges with respect to engineering and design, health and safety and landscape planning. This paper presents the cover system design, the environmental effects assessment and the landscape planning processes that were undertaken in support of the recreational end use of the Port Hope long-term LLR waste management facility. (authors)

  1. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds an optimal solution which satisfies power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method.
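
    A minimal sketch of the two-stage idea, assuming quadratic generator costs: stage one solves the aggregated (classical) dispatch via the equal incremental cost condition, and stage two refines the result with a linear program that enforces the inequality constraints. Cost data and limits are invented; transmission and security rows would be appended to the LP in the same way.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Stage 1: classical economic dispatch for costs C_i(p) = a_i p^2 + b_i p.
    # Equal incremental cost: 2 a_i p_i + b_i = lam for all units, sum(p_i) = demand.
    a = np.array([0.010, 0.020, 0.015])     # $/MW^2 (invented)
    b = np.array([2.0, 1.5, 1.8])           # $/MW (invented)
    demand = 300.0                          # MW

    lam = (demand + np.sum(b / (2 * a))) / np.sum(1 / (2 * a))
    p0 = (lam - b) / (2 * a)                # aggregated-stage dispatch

    # Stage 2: LP refinement around p0 -- cost linearized at the stage-1 point,
    # power balance as an equality, generator limits as bounds.
    c = 2 * a * p0 + b                      # marginal costs at p0
    res = linprog(c,
                  A_eq=np.ones((1, 3)), b_eq=[demand],
                  bounds=[(50.0, 150.0)] * 3)
    print(p0, res.x)
    ```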

  2. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
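
    A toy analogue of the "function gas" can be written in a few lines: a fixed-size ensemble of unary functions in which two randomly drawn members interact by composition and the product replaces a random member. Fontana's model operates on lambda-calculus expressions with normal-form reduction; plain Python closures merely stand in for them here.

    ```python
    import random

    random.seed(0)

    def compose(f, g):
        """Interaction: the composition f(g(x)) is a new member of the gas."""
        return lambda x: f(g(x))

    primitives = [lambda x: x + 1, lambda x: 2 * x, lambda x: x % 7]
    gas = [random.choice(primitives) for _ in range(50)]   # fixed-size ensemble

    for _ in range(100):                                   # interaction steps
        f, g = random.sample(gas, 2)                       # two random "molecules"
        gas[random.randrange(len(gas))] = compose(f, g)    # product replaces a member

    print(gas[0](5))                                       # evaluate one evolved function
    ```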

  3. Data Warehousing: Beyond Disaggregation.

    Science.gov (United States)

    Rudner, Lawrence M.; Boston, Carol

    2003-01-01

    Discusses data warehousing, which provides information more fully responsive to local, state, and federal data needs. Such a system allows educators to generate reports and analyses that supply information, provide accountability, explore relationships among different kinds of data, and inform decision-makers. (Contains one figure and eight…

  4. Evolution of residential electricity demand by end-use in Quebec 1979-1989: A conditional demand analysis

    International Nuclear Information System (INIS)

    Lafrance, G.; Perron, D.

    1994-01-01

    Some of the main conclusions are presented from a temporal analysis of three large-scale electricity demand surveys (1979, 1984, and 1989) for the Quebec residential sector. A regression method called conditional demand analysis was used. The study supports a number of conclusions about electricity consumption trends by end-use from 1979 to 1989, by household type and by vintage category. For example, the results indicate that the decrease in electricity consumption between 1979 and 1984 for a typical dwelling equipped with electric space heating was mainly related to a large decline in net heating consumption. Overall, the results suggest that some permanent energy savings have been realized by a typical household equipped with an electric heating system due to improvements in standards and changes in customer behavior. These energy savings were partly offset by increased electricity consumption from the purchase of new appliances and an increase in the demand for hot water. 7 refs., 1 fig., 8 tabs
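
    Conditional demand analysis reduces, in its simplest form, to regressing total household consumption on appliance-ownership indicators, so each coefficient estimates the consumption of one end use. The sketch below uses synthetic data; a survey-based specification would include many more regressors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Ownership indicators for three end uses (invented): space heater,
    # water heater, electric range.
    own = rng.integers(0, 2, size=(n, 3))
    true_use = np.array([6000.0, 3500.0, 800.0])   # kWh/yr per end use (assumed)
    base = 2000.0                                  # baseload kWh/yr (assumed)
    kwh = base + own @ true_use + rng.normal(0, 300, n)

    # OLS: the intercept estimates baseload, the slopes estimate end-use kWh.
    X = np.column_stack([np.ones(n), own])
    beta, *_ = np.linalg.lstsq(X, kwh, rcond=None)
    print(beta)   # approximately [2000, 6000, 3500, 800]
    ```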

  5. Family Physicians' Perceived Prevalence, Safety, and Screening for Cigarettes, Marijuana, and Electronic-Nicotine Delivery Systems (ENDS) Use during Pregnancy.

    Science.gov (United States)

    Northrup, Thomas F; Klawans, Michelle R; Villarreal, Yolanda R; Abramovici, Adi; Suter, Melissa A; Mastrobattista, Joan M; Moreno, Carlos A; Aagaard, Kjersti M; Stotts, Angela L

    2017-01-01

    Assess perceptions of prevalence, safety, and screening practices for cigarettes and secondhand smoke exposure (SHSe), marijuana (and synthetic marijuana), electronic nicotine delivery systems (ENDS; eg, e-cigarettes), nicotine-replacement therapy (NRT), and smoking-cessation medications during pregnancy, among primary care physicians (PCPs) providing obstetric care. A web-based, cross-sectional survey was e-mailed to 3750 US physicians (belonging to organizations within the Council of Academic Family Medicine Educational Research Alliance). Several research groups' questions were included in the survey. Only physicians who reported providing "labor and delivery" obstetric care responded to questions related to the study objectives. A total of 1248 physicians (of 3750) responded (33.3%) and 417 reported providing labor and delivery obstetric care. Obstetric providers (N = 417) reported cigarette (54%), marijuana (49%), and ENDS use (24%) by "Some (6% to 25%)" pregnant women, with 37% endorsing that "Very Few (1% to 5%)" pregnant women used ENDS. Providers most often selected that very few pregnant women used NRT (45%), cessation medications (ie, bupropion or varenicline; 37%), and synthetic marijuana (23%). Significant proportions chose "Do not Know" for synthetic marijuana (58%) and ENDS (27%). Over 90% of the sample perceived that use of or exposure to cigarettes (99%), synthetic marijuana (99%), SHS (97%), marijuana (92%), or ENDS (91%) were unsafe during pregnancy, with the exception of NRT (44%). Providers most consistently screened for cigarette (85%) and marijuana use (63%), followed by SHSe in the home (48%), and ENDS (33%) and synthetic marijuana use (28%). Fewer than a quarter (18%) screened consistently for all substances and SHSe. One third (32%) reported laboratory testing for marijuana and 3% reported laboratory testing for smoking status. This sample of PCPs providing obstetric care within academic settings perceived cigarettes, marijuana, and ENDS

  6. Characterization of changes in commercial building structure, equipment, and occupants: End-Use Load and Consumer Assessment Program

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, R.G.; Taylor, Z.T.; Miller, N.E.; Pratt, R.G.

    1990-12-01

    Changes in commercial building structure, equipment, and occupants result in changes in building energy use. The frequency and magnitude of those changes have substantial implications for conservation programs and resource planning. For example, changes may shorten the useful lifetime of a conservation measure as well as impact the savings from that measure. This report summarizes the frequency of changes in a commercial building sample that was end-use metered under the End-Use Load and Consumer Assessment Program (ELCAP). The sample includes offices, dry good retails, groceries, restaurants, warehouses, schools, and hotels. Two years of metered data, site visit records, and audit data were examined for evidence of building changes. The observed changes were then classified into 12 categories, which included business type, equipment, remodel, vacancy, and operating schedule. The analysis characterized changes in terms of frequency of types of change; relationship to building vintage and floor area; and variation by building type. The analysis also examined the energy impacts of various changes. The analysis determined that the rate of change in commercial buildings is high--50% of the buildings experienced one type of change during the 2 years for which monitoring data were examined. Equipment changes were found to be most frequent in offices and retail stores. Larger, older office buildings tend to experience a wider variety of changes more frequently than the smaller, newer buildings. Key findings and observations are presented in Section 2. Section 3 provides the underlying motivation and objectives. In Section 4, the methodology used is documented, including the commercial building sample and the data sources used. Included are the definitions of change events and the overall approach taken. Results are analyzed in Section 5, with additional technical details in Appendixes. 2 refs., 46 figs., 22 tabs. (JF)

  7. Sustainability assessment of alternative end-uses for disused areas based on multi-criteria decision-making method.

    Science.gov (United States)

    De Feo, Giovanni; De Gisi, Sabino; De Vita, Sabato; Notarnicola, Michele

    2018-08-01

    The main aim of this study was to define and apply a multidisciplinary and multi-criteria approach to sustainability in evaluating alternative end-uses for disused areas. Taking into account the three pillars of sustainability (social, economic and environmental dimensions) as well as the need for stakeholders to have new practical instruments, the innovative approach consists of four modules: (i) sociological, (ii) economic, (iii) environmental and (iv) multi-criteria assessment. By means of a case study on a small Municipality in Southern Italy, three end-use alternatives, representing three essential services for citizens, were selected: Municipal gym; Market area; Municipal Solid Waste (MSW) separate collection centre. The sociological module was useful to select the most socially sound alternative by means of a consultative referendum, simulated with the use of a structured questionnaire administered to a sample of the population. The economic evaluation was conducted by defining the bill of quantities with regard to six main items (soil handling, landfill disposal tax, public services, structure and services, completion work, equipment and furnishings). The environmental evaluation was performed by applying the Delphi method with local technicians, who were involved in a qualitative-quantitative evaluation of the three alternatives with regard to eight possible environmental impacts (landscape impact, soil handling, odour, traffic, noise, atmospheric pollution, wastewater, waste). Finally, Simple Additive Weighting was used as the multi-criteria technique to rank the alternatives. The obtained results showed how multi-criteria analysis is a useful decision support tool able to identify transparently and efficiently the most sustainable solutions to a complex social problem.
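
    The Simple Additive Weighting step can be illustrated directly: normalize each criterion, apply the stakeholder weights, and rank the alternatives by weighted sum. The scores and weights below are invented for illustration, not the study's referendum, cost, or Delphi results.

    ```python
    import numpy as np

    alternatives = ["Municipal gym", "Market area", "MSW collection centre"]

    # Rows: criteria (social, economic, environmental); columns: alternatives.
    # All benefit-type scores, invented for illustration.
    scores = np.array([[0.7, 0.5, 0.9],
                       [0.4, 0.8, 0.6],
                       [0.6, 0.7, 0.5]])
    weights = np.array([0.4, 0.3, 0.3])            # must sum to 1

    norm = scores / scores.max(axis=1, keepdims=True)   # normalize each criterion
    totals = weights @ norm                             # weighted sums per alternative
    ranking = sorted(zip(totals, alternatives), reverse=True)
    print(ranking)
    ```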

  8. Improving and disaggregating N{sub 2}O emission factors for ruminant excreta on temperate pasture soils

    Energy Technology Data Exchange (ETDEWEB)

    Krol, D.J., E-mail: kroldj@tcd.ie [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Carolan, R. [Agri-Food and Biosciences Institute (AFBI), Belfast BT9 5PX (Ireland); Minet, E. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); McGeough, K.L.; Watson, C.J. [Agri-Food and Biosciences Institute (AFBI), Belfast BT9 5PX (Ireland); Forrestal, P.J. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Lanigan, G.J., E-mail: gary.lanigan@teagasc.ie [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Richards, K.G. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland)

    2016-10-15

    Cattle excreta deposited on grazed grasslands are a major source of the greenhouse gas (GHG) nitrous oxide (N{sub 2}O). Currently, many countries use the IPCC default emission factor (EF) of 2% to estimate excreta-derived N{sub 2}O emissions. However, emissions can vary greatly depending on the type of excreta (dung or urine), soil type and timing of application. Therefore, three experiments were conducted to quantify excreta-derived N{sub 2}O emissions and their associated EFs, and to assess the effect of soil type, season of application and type of excreta on the magnitude of losses. Cattle dung, urine and artificial urine treatments were applied in spring, summer and autumn to three temperate grassland sites with varying soil and weather conditions. Nitrous oxide emissions were measured from the three experiments over 12 months to generate annual N{sub 2}O emission factors. The EFs from urine-treated soil were greater (0.30–4.81% for real urine and 0.13–3.82% for synthetic urine) than from dung (− 0.02–1.48%) treatments. Nitrous oxide emissions were driven by environmental conditions and could be predicted by rainfall and temperature before application, and by soil moisture deficit after application; highlighting the potential for a decision support tool to reduce N{sub 2}O emissions by modifying grazing management based on these parameters. Emission factors varied seasonally, with the highest EFs in autumn, and were also dependent on soil type, with the lowest EFs observed from well-drained and the highest from imperfectly drained soil. The EFs averaged 0.31 and 1.18% for cattle dung and urine, respectively, both of which were considerably lower than the IPCC default value of 2%. These results support both lowering and disaggregating EFs by excreta type. - Highlights: • N{sub 2}O emissions were measured from cattle excreta applied to pasture. • N{sub 2}O was universally higher from urine compared with dung. • N{sub 2}O was driven by rainfall, temperature
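    For reference, annual emission factors of this kind are conventionally computed as the excreta-induced N{sub 2}O-N flux over the measurement year, net of an untreated control, expressed as a percentage of the nitrogen applied. A sketch of that standard definition, assumed here to be the one used (it is consistent with IPCC practice, though the paper's exact computation may differ in detail):

```latex
\mathrm{EF}\,(\%) \;=\; \frac{\text{N}_2\text{O-N}_{\text{excreta}} \,-\, \text{N}_2\text{O-N}_{\text{control}}}{\text{N}_{\text{applied}}} \times 100
```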

  9. Convergence of in-Country Prices for the Turkish Economy : A Panel Data Search for the PPP Hypothesis Using Sub-Regional Disaggregated Data

    Directory of Open Access Journals (Sweden)

    Mustafa METE

    2014-12-01

    Full Text Available This paper examines whether in-country prices in the Turkish economy can be characterized by a stationary relationship, lending support to the long-run purchasing power parity hypothesis of economic theory. For this purpose, a sub-regional categorization of the economy is considered over the investigation period 2005-2012, and, following Esaka (2003), the study uses a panel estimation framework consisting of 12 disaggregated consumer price indices to test whether the relative prices of goods between sub-regions of the Turkish economy exhibit stationary time-series properties.
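    The building block of such a test is checking a relative price series for a unit root. A minimal sketch with simulated data follows; the study itself works with a panel of 12 disaggregated CPI series, for which a panel unit-root test would be used, so the single-series `adfuller` from statsmodels stands in only as an illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Long-run PPP within a country: if the log price ratio between two
# sub-regions is stationary, relative prices revert to parity.
# The monthly series below is simulated, not the study's data.
rng = np.random.default_rng(0)
log_relative_price = 0.1 * np.cumsum(rng.normal(0, 0.01, 96))  # 8 years

result = adfuller(log_relative_price, regression="c")
print(f"ADF statistic: {result[0]:.3f}, p-value: {result[1]:.3f}")
# p < 0.05 rejects a unit root, i.e. supports stationary relative prices.
```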

  10. Financing end-use solar technologies in a restructured electricity industry: Comparing the cost of public policies

    International Nuclear Information System (INIS)

    Jones, E.; Eto, J.

    1997-09-01

    Renewable energy technologies are capital intensive. Successful public policies for promoting renewable energy must address the significant resources needed to finance them. Public policies to support financing for renewable energy technologies must pay special attention to interactions with federal, state, and local taxes. These interactions are important because they can dramatically increase or decrease the effectiveness of a policy, and they determine the total cost of a policy to society as a whole. This report describes a comparative analysis of the cost of public policies to support financing for two end-use solar technologies: residential solar domestic hot water heating (SDHW) and residential rooftop photovoltaic (PV) systems. The analysis focuses on the cost of the technologies under five different ownership and financing scenarios. Four scenarios involve leasing the technologies to homeowners in return for a payment that is determined by the financing requirements of each form of ownership. For each scenario, the authors examine nine public policies that might be used to lower the cost of these technologies: investment tax credits (federal and state), production tax credits (federal and state), production incentives, low-interest loans, grants (taxable and two types of nontaxable), direct customer payments, property and sales tax reductions, and accelerated depreciation.
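    The link between financing terms and the lease payment a homeowner faces can be sketched with the standard capital recovery factor; the system cost, rate, and term below are assumptions for illustration, not values from the report:

```python
# A minimal sketch: annualize a capital cost over the financing term.
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annual payment per dollar of capital, fully amortized."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

pv_system_cost = 12_000.0   # assumed installed cost, $
rate = 0.07                 # assumed cost of capital
term = 20                   # assumed financing term, years

annual_payment = pv_system_cost * capital_recovery_factor(rate, term)
print(f"annual lease payment: ${annual_payment:,.0f}")
# Policies such as tax credits or low-interest loans act by lowering the
# effective capital cost or the rate, and hence the required payment.
```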

  11. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: randomized algorithms that are guaranteed to run in expected polynomial time and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...

  12. 15 CFR Supplement No. 3 to Part 744 - Countries Not Subject to Certain Nuclear End-Use Restrictions in § 744.2(a)

    Science.gov (United States)

    2010-01-01

    ... COMMERCE EXPORT ADMINISTRATION REGULATIONS CONTROL POLICY: END-USER AND END-USE BASED Pt. 744, Supp. 3... Marino and Holy See) Japan Luxembourg Netherlands New Zealand Norway Portugal Spain Sweden Turkey United...

  13. Multi-criteria analysis towards the new end use of recycled water for household laundry: a case study in Sydney.

    Science.gov (United States)

    Chen, Z; Ngo, H H; Guo, W S; Listowski, A; O'Halloran, K; Thompson, M; Muthukaruppan, M

    2012-11-01

    This paper aims to put forward several management alternatives regarding the application of recycled water for household laundry in Sydney. Based on different recycled water treatment techniques, such as microfiltration (MF), granular activated carbon (GAC) or reverse osmosis (RO), and types of washing machines (WMs), five alternatives were proposed: (1) a do-nothing scenario; (2) MF + existing WMs; (3) MF + new WMs; (4) MF-GAC + existing WMs; and (5) MF-RO + existing WMs. A comprehensive quantitative assessment of the trade-offs among a variety of issues (e.g., engineering feasibility, initial cost, energy consumption, supply flexibility and water savings) was then performed over the alternatives. This was achieved by a computer-based multi-criteria analysis (MCA) using rank-order weight generation together with the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) outranking technique. In particular, generating 10,000 combinations of weights via Monte Carlo simulation significantly reduced the arbitrariness of a single fixed set of weights, owing to its objectivity and efficiency. To illustrate the methodology, a case study on the Rouse Hill Development Area (RHDA), Sydney, Australia was then carried out. The study concluded by highlighting the feasibility of using highly treated recycled water for existing and new washing machines. This could provide powerful guidance for sustainable water reuse management in the long term. However, more detailed field trials and investigations are still needed to effectively understand, predict and manage the impact of selected recycled water for new end-use alternatives. Copyright © 2012 Elsevier B.V. All rights reserved.
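    The weight-sensitivity idea is simple to sketch: instead of one fixed weight set, draw many random weight vectors, score the alternatives under each draw, and count how often each alternative wins. In the sketch below a plain weighted sum stands in for the full PROMETHEE outranking computation, and the performance matrix is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

performance = np.array([   # 5 alternatives x 5 criteria, higher is better
    [0.5, 0.9, 0.8, 0.3, 0.2],
    [0.7, 0.6, 0.6, 0.6, 0.7],
    [0.8, 0.4, 0.5, 0.7, 0.8],
    [0.6, 0.5, 0.5, 0.8, 0.6],
    [0.9, 0.2, 0.3, 0.9, 0.9],
])

n_draws = 10_000
wins = np.zeros(performance.shape[0], dtype=int)
for _ in range(n_draws):
    # Uniform draw on the simplex, sorted to respect an assumed
    # importance ranking of the criteria (rank-ordered weights).
    w = np.sort(rng.dirichlet(np.ones(5)))[::-1]
    wins[np.argmax(performance @ w)] += 1

print("win frequency per alternative:", wins / n_draws)
```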

  14. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  15. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  16. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  17. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    How do the ways in which people imagine and perceive algorithms affect their use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  18. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
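    The BR iteration itself is not available in standard numerical libraries, but the setting it targets is easy to reproduce for experimentation: reduce a matrix to upper Hessenberg form and compute its eigenvalues with a QR-based solver, which is the baseline BR is compared against. A sketch:

```python
import numpy as np
from scipy.linalg import hessenberg

# Reduce a random matrix to upper Hessenberg form; its eigenvalues match
# those of the original matrix (similarity transform). NumPy's eigvals
# uses QR-based LAPACK routines; BR itself is not implemented here.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

H = hessenberg(A)
eigs_H = np.sort_complex(np.linalg.eigvals(H))
eigs_A = np.sort_complex(np.linalg.eigvals(A))
print(np.allclose(eigs_H, eigs_A))   # True
```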

  19. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  20. Life cycle greenhouse gas emissions from U.S. liquefied natural gas exports: implications for end uses.

    Science.gov (United States)

    Abrahams, Leslie S; Samaras, Constantine; Griffin, W Michael; Matthews, H Scott

    2015-03-03

    This study analyzes how incremental U.S. liquefied natural gas (LNG) exports affect global greenhouse gas (GHG) emissions. We find that exported U.S. LNG has mean precombustion emissions of 37 g CO2-equiv/MJ when regasified in Europe and Asia. Shipping emissions of LNG exported from U.S. ports to Asian and European markets account for only 3.5-5.5% of precombustion life cycle emissions, hence shipping distance is not a major driver of GHGs. A scenario-based analysis addressing how potential end uses (electricity and industrial heating) and displacement of existing fuels (coal and Russian natural gas) affect GHG emissions shows the mean emissions for electricity generation using U.S. exported LNG were 655 g CO2-equiv/kWh (with a 90% confidence interval of 562-770), an 11% increase over U.S. natural gas electricity generation. Mean emissions from industrial heating were 104 g CO2-equiv/MJ (90% CI: 87-123). By displacing coal, LNG saves 550 g CO2-equiv per kWh of electricity and 20 g per MJ of heat. LNG saves GHGs under upstream fugitive emissions rates up to 9% and 5% for electricity and heating, respectively. GHG reductions were found if Russian pipeline natural gas was displaced for electricity and heating use regardless of GWP, as long as U.S. fugitive emission rates remain below the estimated 5-7% rate of Russian gas. However, from a country specific carbon accounting perspective, there is an imbalance in accrued social costs and benefits. Assuming a mean social cost of carbon of $49/metric ton, mean global savings from U.S. LNG displacement of coal for electricity generation are $1.50 per thousand cubic feet (Mcf) of gaseous natural gas exported as LNG ($0.028/kWh). Conversely, the U.S. carbon cost of exporting the LNG is $1.80/Mcf ($0.013/kWh), or $0.50-$5.50/Mcf across the range of potential discount rates. This spatial shift in embodied carbon emissions is important to consider in national interest estimates for LNG exports.
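    The per-kWh social-cost figures quoted above follow directly from the per-kWh emission savings; a quick check of that arithmetic, using only the mean values stated in the abstract:

```python
# Back-of-envelope reproduction of the quoted carbon-accounting figures.
scc = 49.0                   # mean social cost of carbon, $/metric ton CO2-equiv
savings_g_per_kwh = 550.0    # mean GHG savings when U.S. LNG displaces coal

dollars_per_kwh = savings_g_per_kwh / 1e6 * scc
print(f"${dollars_per_kwh:.3f}/kWh")
# ~$0.027/kWh, consistent with the ~$0.028/kWh ($1.50/Mcf) quoted above.
```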

  1. End use technology choice in the National Energy Modeling System (NEMS): An analysis of the residential and commercial building sectors

    International Nuclear Information System (INIS)

    Wilkerson, Jordan T.; Cullenward, Danny; Davidian, Danielle; Weyant, John P.

    2013-01-01

    The National Energy Modeling System (NEMS) is arguably the most influential energy model in the United States. The U.S. Energy Information Administration uses NEMS to generate the federal government's annual long-term forecast of national energy consumption and to evaluate prospective federal energy policies. NEMS is considered such a standard tool that other models are calibrated to its forecasts, in both government and academic practice. As a result, NEMS has a significant influence over expert opinions of plausible energy futures. NEMS is a massively detailed model whose inner workings, despite its prominence, receive relatively scant critical attention. This paper analyzes how NEMS projects energy demand in the residential and commercial sectors. In particular, we focus on the role of consumers' preferences and financial constraints, investigating how consumers choose appliances and other end-use technologies. We identify conceptual issues in the approach the model takes to this question in both sectors. Running the model with a range of consumer preferences, we estimate the extent to which this issue impacts projected consumption relative to the baseline model forecast for final energy demand in the year 2035. In the residential sector, the impact ranges from a decrease of 0.73 quads (− 6.0%) to an increase of 0.24 quads (+ 2.0%). In the commercial sector, the impact ranges from a decrease of 1.0 quads (− 9.0%) to an increase of 0.99 quads (+ 9.0%). - Highlights: • This paper examines the impact of consumer preferences on final energy in the commercial and residential sectors of the National Energy Modeling System (NEMS). • We describe the conceptual and empirical basis for modeling consumer technology choice in NEMS. • We offer a range of alternative parameters to show the sensitivity of energy demand to technology choice. • We show there are significant potential savings available in both building sectors. • Because the model uses its own
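    Although the paper's NEMS runs cannot be reproduced here, the style of consumer choice it examines is commonly represented by a logit over technology costs, where a sensitivity parameter stands in for consumer preferences. A generic, illustrative sketch (all numbers are assumptions, not NEMS inputs):

```python
import numpy as np

def logit_shares(annualized_costs: np.ndarray, beta: float) -> np.ndarray:
    """Market shares decreasing in cost; beta sets choice sensitivity."""
    u = -beta * annualized_costs
    expu = np.exp(u - u.max())          # subtract max for numerical safety
    return expu / expu.sum()

costs = np.array([100.0, 120.0, 150.0])  # e.g. efficient vs baseline units, $/yr
for beta in (0.02, 0.1):                 # weak vs strong cost sensitivity
    print(beta, logit_shares(costs, beta).round(3))
# A small beta spreads shares across technologies; a large beta
# concentrates them on the cheapest, shifting projected energy demand.
```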

  2. Assessing gendered roles in water decision-making in semi-arid regions through sex-disaggregated water data with UNESCO-WWAP gender toolkit

    Science.gov (United States)

    Miletto, Michela; Greco, Francesca; Belfiore, Elena

    2017-04-01

    Global climate change is expected to exacerbate current and future stresses on water resources from population growth and land use, and to increase the frequency and severity of droughts and floods. Women are more vulnerable to the effects of climate change than men, not only because they constitute the majority of the world's poor but also because they are more dependent for their livelihoods on natural resources that are threatened by climate change. In addition, social, economic and political barriers often limit their coping capacity. Women play a key role in the provision, management and safeguarding of water; nonetheless, gender inequality in water management frameworks persists around the globe. Accurate data are essential to inform decisions and support effective policies. Disaggregating water data by sex is crucial to analyse gendered roles in the water realm and to inform gender-sensitive water policies in light of the global commitments to gender equality of Agenda 2030. Against this background, WWAP has created an innovative toolkit for sex-disaggregated water data collection, the result of participatory work by more than 35 experts in the WWAP Working Group on Sex-Disaggregated Indicators (http://www.unesco.org/new/en/natural-sciences/environment/water/wwap/water-and-gender/un-wwap-working-group-on-gender-disaggregated-indicators/#c1430774). The WWAP toolkit contains four tools: the methodology (Seager J., WWAP UNESCO, 2015), a set of key indicators, the guideline (Pangare V., WWAP UNESCO, 2015) and a questionnaire for field surveys. The WWAP key gender-sensitive indicators address water resources management, aspects of water quality and agricultural uses, and water resources governance and management, and investigate unaccounted labour disaggregated by gender and age. Managing water resources is key for climate adaptation. Women are particularly sensitive to water quality and the health of water-dependent ecosystems, often a source of food and job opportunities

  3. AN ACTIVE-PASSIVE COMBINED ALGORITHM FOR HIGH SPATIAL RESOLUTION RETRIEVAL OF SOIL MOISTURE FROM SATELLITE SENSORS (Invited)

    Science.gov (United States)

    Lakshmi, V.; Mladenova, I. E.; Narayan, U.

    2009-12-01

    Soil moisture is known to be an essential factor controlling the partitioning of rainfall into surface runoff and infiltration, and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real time. However, at the present time the Advanced Microwave Scanning Radiometer (AMSR-E) on board NASA's AQUA platform is the only satellite sensor that supplies a soil moisture product. AMSR-E's coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability for small-scale studies. A very promising technique for spatial disaggregation, combining radar and radiometer observations, has been demonstrated by the authors using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at that resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm has been applied at several locations: aircraft-observed active and passive data over the Walnut Creek watershed in central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003, and the Murrumbidgee catchment in southeastern Australia in 2006. These locations have different soil and land cover conditions, providing a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks
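    A minimal sketch of the change-detection step described above, with NumPy arrays standing in for co-registered backscatter and radiometer fields; the real algorithm further disaggregates the sensitivity using vegetation and soil-texture data, which this sketch omits:

```python
import numpy as np

def disaggregate_sm_change(sigma0_t0, sigma0_t1,
                           sm_coarse_t0, sm_coarse_t1,
                           sigma0_coarse_t0, sigma0_coarse_t1):
    """High-resolution soil moisture change from fine-scale backscatter change.

    Radar sensitivity (dB per unit soil moisture) is estimated at the coarse
    radiometer scale and applied to the fine-scale backscatter change.
    """
    sensitivity = ((sigma0_coarse_t1 - sigma0_coarse_t0)
                   / (sm_coarse_t1 - sm_coarse_t0))
    return (sigma0_t1 - sigma0_t0) / sensitivity

# Illustrative 4x4 fine grid within one coarse radiometer footprint.
rng = np.random.default_rng(3)
s0_t0 = -12.0 + rng.normal(0, 0.5, (4, 4))      # backscatter, dB
s0_t1 = s0_t0 + rng.uniform(0.5, 2.0, (4, 4))   # wetting between dates
dsm = disaggregate_sm_change(s0_t0, s0_t1,
                             sm_coarse_t0=0.15, sm_coarse_t1=0.25,
                             sigma0_coarse_t0=s0_t0.mean(),
                             sigma0_coarse_t1=s0_t1.mean())
print(dsm.round(3))   # soil moisture change per fine pixel, m^3/m^3
```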

  4. Environmental impact study due to end use energy technologies; Estudio prospectivo del impacto ambiental debido a tecnologias de uso final de la energia

    Energy Technology Data Exchange (ETDEWEB)

    Manzini Poli, Fabio

    1997-11-01

    Two thirds of the domestic energy supply in Mexico is consumed by the end-use sectors through end-use technologies (TUF, from the Spanish 'tecnologias de uso final'). This work presents an integrated conceptual framework for evaluating the environmental impacts of end-use energy technologies, and then analyzes the long-term evolution (to the year 2025) of the technology-fuel-environment interactions according to three possible scenarios: business-as-usual, blocks, and sustainable.

  5. Analysis of the Syrian long-term energy and electricity demand projection using the end-use methodology

    International Nuclear Information System (INIS)

    Hainoun, A.; Seif-Eldin, M.K.; Almoustafa, S.

    2006-01-01

    A comprehensive analysis of the possible future long-term development of Syrian energy and electricity demand covering the period 1999-2030 is presented. The analysis was conducted using the IAEA's MAED model, which relies on the end-use approach. This model has been validated over the last two decades through successful application in many developing countries, including those with partially market-based economies and energy subsidies. Starting from the base-year final energy consumption, distributed by energy form and consumption sector, future energy and electricity demand has been projected according to three scenarios reflecting the possible demographic, socio-economic and technological development of the country. These scenarios are constructed to cover a plausible range within which the factors affecting future energy demand are expected to lie. The first is a high-economy scenario (HS) representing the reference case, characterized by a high gross domestic product (GDP) growth rate (about 6% per year on average) and moderately improved technologies in the various consumption sectors. The second is an energy efficiency scenario (ES), identical to HS in all main parameters except those relating to efficiency improvement and conservation measures. Here, substantial technology improvement and more effective conservation measures are assumed in all consumption sectors, and the role of solar energy in substituting fossil energy for heating purposes is considered explicitly. The third is a low-economy scenario (LS) with a low GDP growth rate (about 3.5% per year on average) and less technology improvement in the consumption sectors; consequently, the improvement in energy efficiency is low and the influence of conservation measures is less effective. Starting from about 10.5 mtoe of final energy in the base year, the analysis shows that the projected energy demand will grow annually at average rates of 5%, 4.5% and 3% for the HS, ES and LS
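    The scenario arithmetic can be sanity-checked by compounding the base-year demand at each scenario's average annual rate; a naive sketch (illustrative only, since the report's rates vary over time and by sector):

```python
# Compound the stated base-year demand at each scenario's average rate.
base_year, horizon, base_demand = 1999, 2030, 10.5   # mtoe

for name, rate in (("HS", 0.05), ("ES", 0.045), ("LS", 0.03)):
    demand = base_demand * (1 + rate) ** (horizon - base_year)
    print(f"{name}: {demand:.1f} mtoe in {horizon}")
```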

  6. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms and the basic quantum gates and their operation. The combination of superposition and interference, which makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
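    For the standard uniform-start case, the amplitude recursion has a simple closed form. A sketch of the textbook recursion (not material from the talk itself), assuming N items of which M are marked: one Grover iteration (oracle plus diffusion) maps the common amplitude a of marked items and b of unmarked items to a' = (1 − 2M/N)a + 2(N − M)/N·b and b' = −(2M/N)a + (1 − 2M/N)b, and the success probability follows sin²((2t+1)θ) with sin²θ = M/N:

```python
import numpy as np

N, M = 1024, 1
a = b = 1 / np.sqrt(N)                 # uniform initial superposition
theta = np.arcsin(np.sqrt(M / N))

for t in range(1, 26):
    a, b = ((1 - 2 * M / N) * a + 2 * (N - M) / N * b,
            -2 * M / N * a + (1 - 2 * M / N) * b)
    # Recursion agrees with the closed form at every step.
    assert np.isclose(M * a**2, np.sin((2 * t + 1) * theta) ** 2)

print(f"success probability after 25 iterations: {M * a**2:.4f}")  # ~0.999
```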

  7. Paraho environmental data. Part IV. Land reclamation and revegetation. Part V. Biological effects. Part VI. Occupational health and safety. Part VII. End use

    Energy Technology Data Exchange (ETDEWEB)

    Limbach, L.K.

    1982-06-01

    Characteristics of the environment and ecosystems at Anvil Points, reclamation of retorted shale, revegetation of retorted shale, and ecological effects of retorted shale are reported in the first section of this report. Methods used in screening shale oil and retort water for mutagens and carcinogens as well as toxicity studies are reported in the second section of this report. The third section contains information concerning the industrial hygiene and medical studies made at Anvil Points during Paraho research operations. The last section discusses the end uses of shale crude oil and possible health effects associated with end use. (DMC)

  8. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have recently been used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm

  9. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  10. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. It can thereby also be seen as an approximation to solving the shortest vector problem (SVP), which is an NP-hard problem,
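    For readers wanting to experiment, a compact textbook implementation of LLL follows (exact rational Gram-Schmidt, recomputed after each basis change, so it is simple rather than fast; this is not the verified Isabelle formalization the paper describes):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))

def gram_schmidt(b):
    """Exact Gram-Schmidt orthogonalization; returns (b*, mu)."""
    n = len(b)
    mu = [[Fraction(0)] * n for _ in range(n)]
    bstar = []
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
        bstar.append(v)
    return bstar, mu

def lll(b, delta=Fraction(3, 4)):
    """Textbook LLL reduction of integer basis vectors (rows of b)."""
    b = [list(v) for v in b]
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [bk - q * bj for bk, bj in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]     # swap and step back
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b

# Classic worked example: reduces to short, nearly orthogonal vectors.
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```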

  11. Biological durability of wood in relation to end-use - Part 1. Towards a European standard for laboratory testing of the biological durability of wood

    NARCIS (Netherlands)

    Acker, Van J.; Stevens, M.; Carey, J.; Sierra-Alvarez, R.; Militz, H.; Bayon, Le I.; Kleist, G.; Peek, R.D.

    2003-01-01

    The determination of biological durability of wood is an issue requiring sufficient reliability regarding end-use related prediction of performance. Five test institutes joined efforts to check standard test methods and to improve methodology and data interpretation for assessment of natural

  12. Hybrid life-cycle environmental and cost inventory of sewage sludge treatment and end-use scenarios: a case study from China.

    Science.gov (United States)

    Murray, Ashley; Horvath, Arpad; Nelson, Kara L

    2008-05-01

    Sewage sludge management poses environmental, economic, and political challenges for wastewater treatment plants and municipalities around the globe. To facilitate more informed and sustainable decision making, this study used life-cycle inventory (LCI) to expand upon previous process-based LCIs of sewage sludge treatment technologies. Additionally, the study evaluated an array of productive end-use options for treated sewage sludge, such as use as fertilizer and as an input to construction materials, to determine how the sustainability of traditional manufacturing processes changes with sludge as a replacement for other raw inputs. The inclusion of the life cycle of necessary inputs (such as lime) used in sludge treatment significantly impacts the sustainability profiles of different treatment and end-use schemes. Overall, anaerobic digestion is generally the optimal treatment technology, whereas incineration, particularly if coal-fired, is the most environmentally and economically costly. With respect to sludge end use, offsets are greatest for the use of sludge as fertilizer, but all of the productive uses of sludge can improve the sustainability of conventional manufacturing practices. The results are intended to help inform and guide decisions about sludge handling for existing wastewater treatment plants and those that are still in the planning phase in cities around the world. Although additional factors must be considered when selecting a sludge treatment and end-use scheme, this study highlights how a systems approach to planning can contribute significantly to improving overall environmental sustainability.

  13. The impact of natural gas/hydrogen mixtures on the performance of end-use equipment : Interchangeability analysis for domestic appliances

    NARCIS (Netherlands)

    de Vries, Harmen; Mokhov, Anatoli V.; Levinsky, Howard B.

    2017-01-01

    The addition of hydrogen derived from renewable power to the natural gas network is being promoted as a viable means of storing excess wind and solar energy. However, the changes in combustion properties of the natural gas upon hydrogen addition can impact the performance of the end-use equipment
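    Interchangeability screening of hydrogen-enriched natural gas typically starts from the Wobbe index, W = HHV / sqrt(relative density): appliances see roughly constant heat input when W is constant. A sketch with typical handbook property values (illustrative, not the paper's data):

```python
import math

HHV = {"CH4": 39.8, "H2": 12.7}      # higher heating value, MJ/m^3 (approx.)
RHO = {"CH4": 0.554, "H2": 0.0696}   # density relative to air (approx.)

def wobbe(x_h2: float) -> float:
    """Wobbe index of a methane/hydrogen blend with H2 mole fraction x_h2."""
    hhv = (1 - x_h2) * HHV["CH4"] + x_h2 * HHV["H2"]
    rho = (1 - x_h2) * RHO["CH4"] + x_h2 * RHO["H2"]
    return hhv / math.sqrt(rho)

for x in (0.0, 0.1, 0.2, 0.3):
    print(f"{x:.0%} H2: W = {wobbe(x):.1f} MJ/m^3")
# The Wobbe index falls as hydrogen is added, one reason blend limits
# are imposed for existing end-use equipment.
```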

  14. New durum wheat with soft kernel texture: milling performance and end-use quality analysis of the Hardness locus in Triticum turgidum ssp. durum

    Science.gov (United States)

    Wheat kernel texture dictates U.S. wheat market class. Durum wheat has limited demand and culinary end-uses compared to bread wheat because of its extremely hard kernel texture, which precludes conventional milling. ‘Soft Svevo’, a new durum cultivar with soft kernel texture comparable to a soft white...

  15. New durum wheat with soft kernel texture: end-use quality analysis of the Hardness locus in Triticum turgidum ssp. durum

    Science.gov (United States)

    Wheat kernel texture dictates U.S. wheat market class. Durum wheat has limited demand and culinary end-uses compared to bread wheat because of its extremely hard kernel texture which precludes conventional milling. ‘Soft Svevo’, a new durum cultivar with soft kernel texture comparable to a soft whit...

  16. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  17. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
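    As a small illustration of the "no reference standard" designs discussed in these reviews, a Bland-Altman-style agreement computation between two hypothetical algorithms on simulated data (all values made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.uniform(5, 50, 200)                     # latent lesion sizes, mm
algo_a = truth + rng.normal(0.5, 1.0, truth.size)   # algorithm A: bias +0.5 mm
algo_b = truth + rng.normal(0.0, 1.5, truth.size)   # algorithm B: noisier

diff = algo_a - algo_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                       # 95% limits of agreement
print(f"bias: {bias:.2f} mm, "
      f"limits of agreement: [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```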

  18. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  19. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input to the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
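    A minimal power-iteration sketch of the computation the thesis describes: repeatedly update the score vector until successive iterates differ by less than a tolerance. The link graph is made up for illustration, and dangling pages without outlinks are not handled:

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-9):
    """links: dict mapping each page index to the pages it links to."""
    n = len(links)
    ranks = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1 - d) / n)          # teleportation term
        for page, outlinks in links.items():
            for target in outlinks:            # distribute rank over outlinks
                new[target] += d * ranks[page] / len(outlinks)
        if np.abs(new - ranks).sum() < tol:    # stop when the change is small
            return new
        ranks = new

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # page -> pages it links to
print(pagerank(links).round(4))
```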

  20. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  1. Prevalence of electronic nicotine delivery systems (ENDS) use among youth globally: a systematic review and meta-analysis of country level data.

    Science.gov (United States)

    Yoong, Sze Lin; Stockings, Emily; Chai, Li Kheng; Tzelepis, Flora; Wiggers, John; Oldmeadow, Christopher; Paul, Christine; Peruga, Armando; Kingsland, Melanie; Attia, John; Wolfenden, Luke

    2018-03-12

    To describe the prevalence and change in prevalence of electronic nicotine delivery systems (ENDS) use in youth by country and combustible smoking status. Databases and the grey literature were systematically searched to December 2015. Studies describing the prevalence of ENDS use in the general population aged ≤20 years in a defined geographical region were included. Where multiple estimates were available within countries, prevalence estimates of ENDS use were pooled for each country separately. Data from 27 publications (36 surveys) from 13 countries were included. The prevalence of ENDS ever use in 2013-2015 among youth were highest in Poland (62.1%; 95%CI: 59.9-64.2%), and lowest in Italy (5.9%; 95%CI: 3.3-9.2%). Among non-smoking youth, the prevalence of ENDS ever use in 2013-2015 varied, ranging from 4.2% (95%CI: 3.8-4.6%) in the US to 14.0% in New Zealand (95%CI: 12.7-15.4%). The prevalence of ENDS ever use among current tobacco smoking youth was the highest in Canada (71.9%, 95%CI: 70.9-72.8%) and lowest in Italy (29.9%, 95%CI: 18.5-42.5%). Between 2008 and 2015, ENDS ever use among youth increased in Poland, Korea, New Zealand and the US; decreased in Italy and Canada; and remained stable in the UK. There is considerable heterogeneity in ENDS use among youth globally across countries and also between current smokers and non-smokers. Implications for public health: Population-level survey data on ENDS use is needed to inform public health policy and messaging globally. © 2018 The Authors.

  2. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
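    A sketch of the update contrast the abstract describes, in a toy system-identification setting: standard LMS uses the input x directly in the weight update, while the modified clipped variant replaces x with a three-level quantized version in {-1, 0, +1}, zeroing inputs below a threshold. All parameters, the threshold value, and the filter taps below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, mu, threshold = 4, 0.01, 0.5
w_true = np.array([1.0, -0.5, 0.25, 0.1])   # unknown system to identify
w = np.zeros(n_taps)

def quantize(x, t):
    """Three-level quantizer: sign(x) outside the dead zone, else 0."""
    return np.where(np.abs(x) > t, np.sign(x), 0.0)

x_buf = np.zeros(n_taps)
for _ in range(20_000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 0.01 * rng.standard_normal()   # desired signal
    e = d - w @ x_buf
    w += mu * e * quantize(x_buf, threshold)            # MCLMS-style update

print(np.round(w, 2))   # should approach w_true
```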

  3. Core-size regulated aggregation/disaggregation of citrate-coated gold nanoparticles (5-50 nm) and dissolved organic matter: Extinction, emission, and scattering evidence

    Science.gov (United States)

    Esfahani, Milad Rabbani; Pallem, Vasanta L.; Stretz, Holly A.; Wells, Martha J. M.

    2018-01-01

    Knowledge of the interactions between gold nanoparticles (GNPs) and dissolved organic matter (DOM) is significant for the development of detection devices for environmental sensing, studies of environmental fate and transport, and advances in antifouling water treatment membranes. The specific objective of this research was to spectroscopically investigate the fundamental interactions between citrate-stabilized gold nanoparticles (CT-GNPs) and DOM. Studies indicated that 30 and 50 nm diameter GNPs promoted disaggregation of the DOM. This result, the disaggregation of an environmentally important polyelectrolyte, should prove quite useful for antifouling in water treatment and for water-based sensing applications. Furthermore, resonance Rayleigh scattering results showed significant enhancement in the UV range, which can be useful for characterizing DOM and can be exploited as an analytical tool to better sense and understand nanomaterial interactions with environmental systems. CT-GNPs having core-size diameters of 5, 10, 30, and 50 nm were studied in the absence and presence of added DOM at 2 and 8 ppm, at low ionic strength and near-neutral pH (6.0-6.5), approximating surface water conditions. Interactions were monitored by cross-interpretation among ultraviolet (UV)-visible extinction spectroscopy, excitation-emission matrix (EEM) spectroscopy (emission and Rayleigh scattering), and dynamic light scattering (DLS). This comprehensive combination of spectroscopic analyses lends new insights into the antifouling behavior of GNPs. The CT-GNP-5 and -10 controls emitted light and aggregated. In contrast, the CT-GNP-30 and CT-GNP-50 controls scattered light intensely but did not aggregate and did not emit light. The presence of any CT-GNP did not affect the extinction spectra of DOM, and the presence of DOM did not affect the extinction spectra of the CT-GNPs. The emission spectra (visible range) differed only slightly between calculated and actual

  4. Spatial and temporal disaggregation of the on-road vehicle emission inventory in a medium-sized Andean city. Comparison of GIS-based top-down methodologies

    Science.gov (United States)

    Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.

    2018-04-01

    Emission data are an essential tool for understanding environmental problems associated with the sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. Little is known about the estimation of air pollutant emissions, and particularly their spatial and temporal distribution, in South America, mainly in medium-sized cities with populations below one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A previously reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. The results identified several emission hotspots in the downtown zone and in the residential and commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2, respectively. These indexes were higher than those obtained in the residential-commercial area, with values of 2%/km2 for both pollutants. The temporal distribution showed a strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles on CO emissions both downtown and in the residential-commercial area, and an impact of public transport on PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work also compares other simplified top-down methods for spatially assessing the on-road vehicle EI. Results suggested that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by heavy vehicle traffic. The comparison between simplified methods
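    The top-down allocation itself is a one-line computation once a spatial surrogate is chosen; a sketch with an illustrative 3 × 3 grid (real applications use road-network GIS layers and traffic counts per cell):

```python
import numpy as np

total_co_tonnes = 1000.0   # illustrative city-level total

# Surrogate per 1 km x 1 km cell, e.g. sum of road km weighted by mean
# traffic flow; the 6.0 cell plays the role of a downtown hotspot.
surrogate = np.array([
    [0.2, 1.5, 0.4],
    [0.8, 6.0, 2.1],
    [0.1, 1.2, 0.7],
])

# Allocate the total in proportion to each cell's surrogate value.
cell_emissions = total_co_tonnes * surrogate / surrogate.sum()
print(cell_emissions.round(1))
print(f"downtown share: {cell_emissions[1, 1] / total_co_tonnes:.0%}")
```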

  5. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
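    A sketch of the elementary polarization boost these algorithms build on, assuming the standard 3-spin compression map ε → (3ε − ε³)/2 from the algorithmic cooling literature (roughly a 1.5× boost for small ε). This shows boost levels only; the actual spin counts per level depend on the specific algorithm (PAC, SOPAC, ...), which the sketch does not model:

```python
def boost(eps: float) -> float:
    """Polarization after one 3-spin compression step."""
    return (3 * eps - eps**3) / 2

for start in (0.01, 0.10):
    eps, levels = start, 0
    while eps < 0.60:              # target: a mildly pure (60%) spin
        eps, levels = boost(eps), levels + 1
    print(f"from {start:.0%}: {levels} boost levels to reach {eps:.0%}")
```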

  6. Electricity of nuclear origin and primary and end-use energy consumption; Electricite nucleaire et consommation d'energie primaire et finale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-07-01

    In France, the electricity of nuclear origin corresponds to about 40% of the primary energy consumption, while electricity as a whole represents about 23% of the end-use energy. This apparent paradox can be explained by 2 methodological points: 1 - the primary energy consumption, in the case of electricity, includes only the energy of nuclear, hydraulic, wind, photovoltaic and geothermal origin. On the other hand, the end-use energy consumption includes all forms of electricity consumed, i.e. the electricity of both primary and secondary origin. 2 - By international convention, the coefficients used to convert MWth into tpe (ton of petroleum equivalent) can change according to two factors: the power generation source and the type of kWh considered, either produced or consumed. The coexistence of different concepts and definitions is justified by the different usages made with them. Therefore, calculations referring to different definitions or equivalence coefficients are not immediately comparable. (J.S.)

  7. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  8. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...

  9. Disaggregating Within- and Between-Person Effects of Social Identification on Subjective and Endocrinological Stress Reactions in a Real-Life Stress Situation.

    Science.gov (United States)

    Ketturat, Charlene; Frisch, Johanna U; Ullrich, Johannes; Häusser, Jan A; van Dick, Rolf; Mojzisch, Andreas

    2016-02-01

    Several experimental and cross-sectional studies have established the stress-buffering effect of social identification, yet few longitudinal studies have been conducted within this area of research. This study is the first to make use of a multilevel approach to disaggregate between- and within-person effects of social identification on subjective and endocrinological stress reactions. Specifically, we conducted a study with 85 prospective students during their 1-day aptitude test for a university sports program. Ad hoc groups were formed, in which students completed several tests in various disciplines together. At four points in time, salivary cortisol, subjective strain, and identification with their group were measured. Results of multilevel analyses show a significant within-person effect of social identification: The more students identified with their group, the less stress they experienced and the lower their cortisol response was. Between-person effects were not significant. Advantages of using multilevel approaches within this field of research are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.

  10. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    Energy Technology Data Exchange (ETDEWEB)

    Zeymer, Cathleen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de [Max Planck Institute for Medical Research, Jahnstrasse 29, 69120 Heidelberg (Germany)

    2014-02-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg{sup 2+} ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis.

  11. Estimation of future levels and changes in profitability: The effect of the relative position of the firm in its industry and the operating-financing disaggregation

    Directory of Open Access Journals (Sweden)

    Borja Amor-Tapia

    2014-01-01

    Full Text Available In this paper we examine how the relative position of a firm's Return on Equity (ROE) within its industry affects the predictability of next-year ROE levels and of year-to-year ROE changes. Using Nissim and Penman's breakdown into operating and financing drivers, the significant role of the industry factor is established, although changes in signs suggest subtle non-linear relations in the drivers. Our study avoids problems originating from negative signs by analyzing sorts and by running new regressions with second-order drivers disaggregated by sign. In this way, our results provide evidence of distinct patterns in the influence on future profitability of the first-level drivers of ROE (the operating factor and the financing factor) and the second-level drivers (profit margin, asset turnover, leverage and return spread), depending on the industry spread. The results on the role of contextual factors in improving the estimation of future profitability remain consistent for small and large firms, albeit with some nuances.
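    For reference, the Nissim-Penman disaggregation referred to above expresses ROE through operating and financing drivers. In its standard form (with RNOA the return on net operating assets, FLEV financial leverage, NBC the net borrowing cost, and RNOA − NBC the return spread):

```latex
\mathrm{ROE} = \mathrm{RNOA} + \mathrm{FLEV} \times (\mathrm{RNOA} - \mathrm{NBC}),
\qquad
\mathrm{RNOA} = \underbrace{\frac{\mathrm{OI}}{\mathrm{Sales}}}_{\text{profit margin}} \times \underbrace{\frac{\mathrm{Sales}}{\mathrm{NOA}}}_{\text{asset turnover}}
```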

  12. Short circuit: Disaggregation of adrenocorticotropic hormone and cortisol levels in HIV-positive, methamphetamine-using men who have sex with men.

    Science.gov (United States)

    Carrico, Adam W; Rodriguez, Violeta J; Jones, Deborah L; Kumar, Mahendra

    2018-01-01

    This study examined if methamphetamine use alone (METH + HIV-) and methamphetamine use in combination with HIV (METH + HIV+) were associated with hypothalamic-pituitary-adrenal (HPA) axis dysregulation as well as insulin resistance relative to a nonmethamphetamine-using, HIV-negative comparison group (METH-HIV-). Using an intact groups design, serum levels of HPA axis hormones in 46 METH + HIV- and 127 METH + HIV+ men who have sex with men (MSM) were compared to 136 METH-HIV- men. There were no group differences in prevailing adrenocorticotropic hormone (ACTH) or cortisol levels, but the association between ACTH and cortisol was moderated by METH + HIV+ group (β = -0.19, p < .05). Compared to METH-HIV- men, METH + HIV+ MSM displayed 10% higher log 10 cortisol levels per standard deviation lower ACTH. Both groups of methamphetamine-using MSM had lower insulin resistance and greater syndemic burden (i.e., sleep disturbance, severe depression, childhood trauma, and polysubstance use disorder) compared to METH-HIV- men. However, the disaggregated functional relationship between ACTH and cortisol in METH + HIV+ MSM was independent of these factors. Further research is needed to characterize the bio-behavioral pathways that explain dysregulated HPA axis functioning in HIV-positive, methamphetamine-using MSM. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    International Nuclear Information System (INIS)

    Zeymer, Cathleen; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen

    2014-01-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg{sup 2+} ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis

  14. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  15. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  16. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  17. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  18. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  19. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
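
    The portfolio effect is easy to see in simulation: if k independent copies of a stochastic solver run in parallel, completion time is the minimum of the individual times, which shrinks both the mean and the spread. A minimal Python sketch, assuming a hypothetical exponential run-time model that is not taken from the paper:

        # Monte Carlo illustration of the portfolio effect for stochastic
        # algorithms: running k independent copies and stopping at the first
        # success shortens the expected completion time and its spread.
        # The exponential run-time model is a simplifying assumption.
        import random
        import statistics

        def single_run_time(mean=100.0):
            # Hypothetical model: time-to-solution of one stochastic solver.
            return random.expovariate(1.0 / mean)

        def portfolio_time(k):
            # A portfolio of k independent copies finishes when the first does.
            return min(single_run_time() for _ in range(k))

        for k in (1, 2, 4, 8):
            samples = [portfolio_time(k) for _ in range(10000)]
            print(f"k={k}: mean={statistics.mean(samples):7.2f} "
                  f"stdev={statistics.stdev(samples):7.2f}")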

  20. Multiple pathways to gender-sensitive budget support in the education sector: Analysing the effectiveness of sex-disaggregated indicators in performance assessment frameworks and gender working groups in (education) budget support to Sub-Saharan Africa countries

    OpenAIRE

    Holvoet, Nathalie; Inberg, Liesbeth

    2013-01-01

    In order to correct for the initial gender blindness of the Paris Declaration and related aid modalities as general and sector budget support, it has been proposed to integrate a gender dimension into budget support entry points. This paper studies the effectiveness of (joint) gender working groups and the integration of sex-disaggregated indicators and targets in performance assessment frameworks in the context of education sector budget support delivered to a sample of 17 Sub-Saharan Africa...

  1. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
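
    The procedure itself is in ALGOL 60; for readers unfamiliar with it, a Python transcription of the same recursive two-way merge idea (not Bron's exact text) looks like this:

        def merge_sort(xs):
            """Recursive two-way merge sort; returns a new sorted list."""
            if len(xs) <= 1:
                return xs
            mid = len(xs) // 2
            left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            return merged + left[i:] + right[j:]

        print(merge_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]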

  2. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
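
    The scheme names above borrow differential-evolution-style notation. As an illustration of the difference-vector idea that notation suggests, here is a hedged Python sketch of a generic 'rand/1' offspring; the paper's exact donor construction and scale-factor handling may differ:

        # Sketch of the difference-vector idea behind a scheme such as
        # "DS/rand/1", written in the DE-style notation the names suggest.
        # This is an illustration, not the authors' implementation.
        import random

        def rand_1_offspring(population, scale=0.8):
            # Pick three distinct individuals and combine them:
            # offspring = x_r1 + scale * (x_r2 - x_r3)
            r1, r2, r3 = random.sample(population, 3)
            return [a + scale * (b - c) for a, b, c in zip(r1, r2, r3)]

        pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
        print(rand_1_offspring(pop))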

  3. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  4. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)
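
    For flavour, a Python sketch of polynomial (Neville) extrapolation of finite-lattice estimates f(L) to the limit L -> infinity, i.e. evaluating the interpolating polynomial in x = 1/L at x = 0; this is a simpler relative of the Bulirsch-Stoer rational scheme, not the algorithm itself:

        # Neville extrapolation of finite-lattice data to 1/L = 0.
        def extrapolate(Ls, fs):
            xs = [1.0 / L for L in Ls]
            t = list(fs)
            n = len(t)
            for m in range(1, n):
                for i in range(n - m):
                    # Neville recursion evaluated at x = 0
                    t[i] = (xs[i] * t[i + 1] - xs[i + m] * t[i]) / (xs[i] - xs[i + m])
            return t[0]

        # Hypothetical data converging like 1 + 1/L: the limit 1.0 is recovered.
        Ls = [4, 8, 16, 32]
        print(extrapolate(Ls, [1 + 1.0 / L for L in Ls]))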

  5. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  6. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  7. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms – Algorithm Design Techniques. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Series article in Resonance – Journal of Science Education, Volume 2, Issue 8.

  8. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  9. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  10. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
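
    One of the classic problems listed above, the planar convex hull, fits in a few lines; the following is the textbook monotone chain algorithm in Python, not the thesis's specific variant:

        # Planar convex hull via Andrew's monotone chain, O(n log n).
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

        def convex_hull(points):
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts
            lower, upper = [], []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]   # counter-clockwise hull

        print(convex_hull([(0,0), (1,1), (2,2), (2,0), (0,2), (1,0)]))
        # [(0, 0), (2, 0), (2, 2), (0, 2)]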

  11. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  12. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  13. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  14. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  15. Energy End-Use : Industry

    NARCIS (Netherlands)

    Banerjee, R.; Gong, Y; Gielen, D.J.; Januzzi, G.; Marechal, F.; McKane, A.T.; Rosen, M.A.; Es, D. van; Worrell, E.

    2012-01-01

    The industrial sector accounts for about 30% of global final energy use, about 115 EJ in 2005. Cement, iron and steel, chemicals, pulp and paper and aluminum are key energy intensive materials that account for more than half of global industrial use. There

  16. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
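
    The record presents its examples as SINGULAR commands; as a stand-in, the same kind of computation can be illustrated with SymPy's Groebner-basis routine (a different system, shown only to make the idea concrete):

        # Groebner basis of an ideal under the lexicographic ordering.
        from sympy import groebner, symbols

        x, y = symbols('x y')
        G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order='lex')
        print(G)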

  17. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  18. 1980 survey and evaluation of utility conservation, load management, and solar end-use projects. Volume 3: utility load management projects. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-01-01

    The results of the 1980 survey of electric utility-sponsored energy conservation, load management, and end-use solar energy conversion projects are described. The work is an expansion of a previous survey and evaluation and has been jointly sponsored by EPRI and DOE through the Oak Ridge National Laboratory. There are three volumes and a summary document. Each volume presents the results of an extensive survey to determine electric utility involvement in customer-side projects related to the particular technology (i.e., conservation, solar, or load management), selected descriptions of utility projects and results, and first-level technical and economic evaluations.

  19. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  20. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  1. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.

  2. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
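
    The construction behind such a diagram can be sketched in a few lines: tabulate, over a grid of problem parameters, which of several execution-time models is smallest. The cost models below are hypothetical placeholders, not those of Gannon and Van Rosendale:

        # Tabulate the fastest algorithm over a (n, m) parameter grid.
        import math

        models = {
            "A": lambda n, m: 5.0 * n * m,                  # hypothetical
            "B": lambda n, m: n * m * math.log2(m) + 50.0,  # hypothetical
            "C": lambda n, m: 0.5 * n * m ** 1.5,           # hypothetical
        }

        for n in (1, 8, 64):
            row = []
            for m in (16, 256, 4096):
                best = min(models, key=lambda k: models[k](n, m))
                row.append(f"n={n:3d},m={m:5d}:{best}")
            print("  ".join(row))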

  3. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)

  5. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  6. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  7. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  8. Effects of Low-Carbon Technologies and End-Use Electrification on Energy-Related Greenhouse Gases Mitigation in China by 2050

    Directory of Open Access Journals (Sweden)

    Zheng Guo

    2015-07-01

    Full Text Available Greenhouse gas emissions in China have been increasing in line with its energy consumption and economic growth. The major means for energy-related greenhouse gas mitigation in the foreseeable future are the transition to less carbon-intensive energy supplies and structural changes in energy consumption. In this paper, a bottom-up model is built to examine typical projected scenarios for energy supply and demand, with which trends of energy-related carbon dioxide emissions by 2050 can be analyzed. Results show that low-carbon technologies remain essential contributors to reducing emissions and altering emissions trends up to 2050. By pushing the limit of current practicality, emissions reduction can reach 20 to 28 percent and the advent of carbon peaking could shift from 2040 to 2030. In addition, the effect of electrification in end-use sectors is studied. Results show that electrifying transport could reduce emissions and bring the advent of carbon peaking forward, but the effect is less significant compared with low-carbon technologies. Moreover, this implies the importance of decarbonizing power supply before electrifying end-use sectors.

  9. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes ... The algorithms solve optimisation problems ... Reference: Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.

  10. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  11. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for positioning is a new direction in the eyes of various research institutions and scholars. RFID positioning technology has the advantages of system stability, small error and low cost, and its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
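
    A minimal sketch of the LANDMARC idea mentioned above: the tracking tag is located from the k reference tags whose signal-strength vectors are closest, each weighted by inverse squared signal distance. Positions and readings below are illustrative:

        # LANDMARC-style k-nearest-reference-tag positioning sketch.
        def landmarc(tag_rss, refs, k=3):
            # refs: list of (position, rss_vector) for reference tags at known spots
            def dist(rss):
                return sum((a - b) ** 2 for a, b in zip(tag_rss, rss)) ** 0.5
            nearest = sorted(refs, key=lambda r: dist(r[1]))[:k]
            eps = 1e-9
            weights = [1.0 / (dist(rss) ** 2 + eps) for _, rss in nearest]
            total = sum(weights)
            x = sum(w * p[0] for w, (p, _) in zip(weights, nearest)) / total
            y = sum(w * p[1] for w, (p, _) in zip(weights, nearest)) / total
            return (x, y)

        refs = [((0, 0), [-40, -70]), ((0, 5), [-55, -60]),
                ((5, 0), [-60, -75]), ((5, 5), [-70, -50])]
        print(landmarc([-50, -65], refs))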

  12. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one, finding better solutions in less CPU time.
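
    The modification described above can be sketched directly: sample several random directions for the brightest firefly and move only if the objective improves. Step size and the number of trial directions are illustrative choices, not the paper's settings:

        # Modified move of the brightest firefly: try random unit directions
        # and keep the position only if the objective improves.
        import random

        def move_brightest(best, objective, trials=10, step=0.1):
            candidate = best
            for _ in range(trials):
                direction = [random.gauss(0, 1) for _ in best]
                norm = sum(d * d for d in direction) ** 0.5
                trial = [b + step * d / norm for b, d in zip(best, direction)]
                if objective(trial) < objective(candidate):  # minimisation
                    candidate = trial
            return candidate  # unchanged if no direction improved

        sphere = lambda x: sum(v * v for v in x)
        print(move_brightest([1.0, -2.0], sphere))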

  13. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included
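
    The EEZ algorithm itself is not reproduced here, but the task it solves is easy to demonstrate with SymPy's factor routine, which factors multivariate polynomials over the integers:

        # Factoring a multivariate polynomial over the integers.
        from sympy import factor, symbols

        x, y, z = symbols('x y z')
        print(factor(x**2*y + x*z + x - x*y - z - 1))
        # (x - 1)*(x*y + z + 1)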

  14. Disaggregating Corporate Freedom of Religion

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2015-01-01

    The paper investigates arguments for the idea in recent American Supreme Court jurisprudence that freedom of religion should not simply be understood as an ordinary legal right within the framework of liberal constitutionalism but as an expression of deference by the state and its legal system...... to religion as a separate and independent jurisdiction with its own system of law over which religious groups are sovereign. I discuss the relationship between, on the one hand, ordinary rights of freedom of association and freedom of religion and, on the other hand, this idea of corporate freedom of religion...

  15. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  17. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium, following Luhmann’s form/medium distinction, where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An attempt is made to observe defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  18. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  20. Reducing out-of-pocket expenditures to reduce poverty: a disaggregated analysis at rural-urban and state level in India.

    Science.gov (United States)

    Garg, Charu C; Karan, Anup K

    2009-03-01

    Out-of-pocket (OOP) expenditure on health care has significant implications for poverty in many developing countries. This paper aims to assess the differential impact of OOP expenditure and its components, such as expenditure on inpatient care, outpatient care and on drugs, across different income quintiles, between developed and less developed regions in India. It also attempts to measure poverty at disaggregated rural-urban and state levels. Based on Consumer Expenditure Survey (CES) data from the National Sample Survey (NSS), conducted in 1999-2000, the share of households' expenditure on health services and drugs was calculated. The number of individuals below the state-specific rural and urban poverty line in 17 major states, with and without netting out OOP expenditure, was determined. This also enabled the calculation of the poverty gap or poverty deepening in each region. Estimates show that OOP expenditure is about 5% of total household expenditure (ranging from about 2% in Assam to almost 7% in Kerala) with a higher proportion being recorded in rural areas and affluent states. Purchase of drugs constitutes 70% of the total OOP expenditure. Approximately 32.5 million persons fell below the poverty line in 1999-2000 through OOP payments, implying that the overall poverty increase after accounting for OOP expenditure is 3.2% (as against a rise of 2.2% shown in earlier literature). Also, the poverty headcount increase and poverty deepening is much higher in poorer states and rural areas compared with affluent states and urban areas, except in the case of Maharashtra. High OOP payment share in total health expenditures did not always imply a high poverty headcount; state-specific economic and social factors played a role. The paper argues for better methods of capturing drugs expenditure in household surveys and recommends that special attention be paid to expenditures on drugs, in particular for the poor. Targeted policies in just five poor states to reduce

  1. Estimation of future levels and changes in profitability: The effect of the relative position of the firm in its industry and the operating-financing disaggregation

    Directory of Open Access Journals (Sweden)

    Borja Amor-Tapia

    2014-06-01

    Full Text Available In this paper we examine how the relative position of a firm's Return on Equity (ROE) in its industry affects the predictability of next-year ROE levels and of ROE changes from year to year. Using Nissim and Penman's breakdown into operating and financing drivers, the significant role of the industry factor is established, although changes in signs suggest subtle non-linear relations in the drivers. Our study avoids problems originating from negative signs by analyzing sorts and by running new regressions with second-order drivers disaggregated by sign. This way, our results provide evidence of different patterns in the influence of the first-level drivers of ROE (the operating factor and the financing factor) and the second-level drivers (profit margin, asset turnover, leverage and return spread) on future profitability, depending on the industry spread. The results on the role of contextual factors in improving the estimation of future profitability remain consistent for small and large firms, although with some nuances.
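
    The Nissim and Penman breakdown used above reduces to two standard identities: RNOA = profit margin x asset turnover, and ROE = RNOA + leverage x (RNOA - net borrowing cost), the last term in parentheses being the return spread. A worked instance with hypothetical numbers:

        # Nissim-Penman decomposition of ROE; input numbers are hypothetical.
        pm, ato = 0.08, 1.5          # profit margin, asset turnover
        flev, nbc = 0.6, 0.05        # financial leverage, net borrowing cost

        rnoa = pm * ato              # operating driver: 0.12
        spread = rnoa - nbc          # return spread:    0.07
        roe = rnoa + flev * spread   # 0.12 + 0.6*0.07 = 0.162
        print(f"RNOA={rnoa:.3f}  spread={spread:.3f}  ROE={roe:.3f}")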

  2. Substitution of efficient electro-technologies for thermal end-uses to traditional processes. Screening of possibilities and applications under study

    International Nuclear Information System (INIS)

    Menga, P.; Grattieri, W.; Korn, G.; Malinverni, R.

    1996-01-01

    ENEL's long-lasting commitment to rationalizing energy end-uses has led to an assessment of the potential for substituting efficient electro-technologies for traditional processes in the field of thermal uses. The evaluation took into account the advantages for the user, in terms of the reduction in operating costs (energy included), for the electricity industry (increase in kWh sales), and for society (savings in primary energy consumption). The analysis identified many applications for which primary energy savings are obtained jointly with significant extra-energy advantages for the end user. In order to validate the effectiveness of innovative electro-technologies, a demonstration activity by means of pilot plants is in progress. (author)

  3. Joint research report of Central Research Institute of Electric Power Industry and Japan Research Institute Ltd. Conceptual construction of Japanese type end-use model; Denryoku chuo kenkyusho Nihon Sogo Kenkyusho kyodo kenkyu hokokusho. Nippon gata end use model no gainen kochiku

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    The concept of a Japanese type demand analysis model (end-use model) was constructed for the efficient management of electric power companies and efficient power utilization. Diffusion and use conditions of domestic air conditioners differ considerably depending on local life style and climate. In order to design demand measures that consider the combination of appliances in every market segment, demand should be captured at the end-use level (end demand level, each appliance level). The basic structure of the model is composed of exogenous variables such as weather data, prices and customer trends; appliance data such as size, efficiency and energy consumption rate; and customer data such as appliance ownership rates and number of customers. The final energy demand is estimated by integrating the above variables. By systematizing the stored data of precise actual load conditions, construction of DSM (demand side management) strategies becomes possible using computer tools. 8 refs., 15 figs., 13 tabs.

  4. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
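
    A sketch in the spirit of the quote volatility ratio described above: count direction reversals of the best ask within a short window, relative to the number of quote updates. The exact definition in the paper may differ:

        # Fraction of quote moves that reverse direction within a window;
        # values near 1 suggest the rapid flickering typical of algorithmic
        # quoting. Sample quotes are illustrative.
        def oscillation_ratio(best_asks):
            moves = [b - a for a, b in zip(best_asks, best_asks[1:]) if b != a]
            reversals = sum(1 for a, b in zip(moves, moves[1:]) if a * b < 0)
            return reversals / max(len(moves) - 1, 1)

        asks = [10.01, 10.02, 10.01, 10.02, 10.01, 10.03, 10.02]
        print(oscillation_ratio(asks))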

  5. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, are analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  6. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  7. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  8. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In the tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm combining graph and machine learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine learning algorithms alone, due to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of a mockup based on the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup was slower than DBpedia Spotlight but showed higher accuracy, which indicates the prospects of this direction. The main directions of development are proposed in order to increase the accuracy and the performance of the system.

  9. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  10. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
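
    The MCL process itself is compact enough to sketch: alternate expansion (matrix squaring) with inflation (elementwise powering followed by column re-normalisation) until the column-stochastic matrix converges. A NumPy sketch on a small illustrative graph:

        # Minimal MCL sketch: expansion and inflation on a column-stochastic
        # matrix; rows retaining mass at convergence indicate clusters.
        import numpy as np

        def mcl(adj, inflation=2.0, iters=50, selfloops=True):
            M = adj.astype(float)
            if selfloops:
                M += np.eye(len(M))           # standard MCL trick
            M /= M.sum(axis=0, keepdims=True) # column-stochastic
            for _ in range(iters):
                M = M @ M                     # expansion
                M = M ** inflation            # inflation
                M /= M.sum(axis=0, keepdims=True)
            return M

        A = np.array([[0,1,1,0,0],[1,0,1,0,0],[1,1,0,1,0],
                      [0,0,1,0,1],[0,0,0,1,0]])
        print(np.round(mcl(A), 2))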

  11. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  12. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy the goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudocode and doing written exercises. Students cannot clearly see how each step actually works and might miss some steps because of their confusion. ...

  13. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  14. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  15. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: Its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  16. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
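
    For contrast with OLU, a minimal sketch of the classic Robinson-style unification that algorithms like MM, RG1 and RG2 refine is given below; the term representation (uppercase strings as variables, tuples as compound terms) is an assumption for illustration, and the occurs check is omitted for brevity:

    ```python
    def unify(x, y, subst=None):
        """Robinson-style unification over terms: variables are uppercase
        strings, compound terms are tuples (functor, arg1, ...)."""
        if subst is None:
            subst = {}
        if subst is False:
            return False
        if x == y:
            return subst
        if isinstance(x, str) and x[0].isupper():
            return unify_var(x, y, subst)
        if isinstance(y, str) and y[0].isupper():
            return unify_var(y, x, subst)
        if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            for xi, yi in zip(x, y):
                subst = unify(xi, yi, subst)
                if subst is False:
                    return False
            return subst
        return False

    def unify_var(var, term, subst):
        # Follow an existing binding; otherwise bind (no occurs check here).
        if var in subst:
            return unify(subst[var], term, subst)
        return {**subst, var: term}

    # e.g. unify(('f', 'X', 'b'), ('f', 'a', 'Y'))  ->  {'X': 'a', 'Y': 'b'}
    ```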

  17. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  18. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    textabstractWe present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  19. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  20. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems: given a road network on cities, we want to navigate between cities. The talk also covers computing connectivities between all pairs of vertices, with an algorithm good with respect to both space and time for computing the exact solution. ...

  1. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  2. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
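
    Viewing one de Casteljau step as an operator, as the abstract suggests, yields a very short Bezier evaluator; this minimal sketch (with numpy points) is illustrative rather than the paper's formulation:

    ```python
    import numpy as np

    def casteljau_step(points, t):
        # One de Casteljau step: linear interpolation of consecutive control points.
        return [(1 - t) * p + t * q for p, q in zip(points, points[1:])]

    def bezier_point(points, t):
        # Repeated application of the step operator collapses the control
        # polygon to the point on the Bezier curve at parameter t.
        while len(points) > 1:
            points = casteljau_step(points, t)
        return points[0]

    # e.g. bezier_point([np.array([0., 0.]), np.array([1., 2.]), np.array([3., 0.])], 0.5)
    ```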

  3. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  4. General Algorithm (High level)

    Indian Academy of Sciences (India)

    Iteratively: use the Tightness Property to remove points of P1,..,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; and use the Random Sampling Procedure to approximate ci+1 using the ...

  5. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  6. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
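
    For reference, the standard Metropolis acceptance probability for swapping configurations $x_i$ and $x_j$ between replicas at inverse temperatures $\beta_i$ and $\beta_j$ in the replica-exchange method mentioned above is

    $$
    p_{\mathrm{acc}} = \min\left\{1,\ \exp\left[(\beta_i - \beta_j)\,\bigl(E(x_i) - E(x_j)\bigr)\right]\right\}
    $$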

  7. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  8. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  9. 3-D Characterization of the Structure of Paper and Paperboard and Their Application to Optimize Drying and Water Removal Processes and End-Use Applications

    Energy Technology Data Exchange (ETDEWEB)

    Shri Ramaswamy, University of Minnesota; B.V. Ramarao, State University of New York

    2004-08-29

    The three dimensional structure of paper materials plays a critical role in the paper manufacturing process especially via its impact on the transport properties for fluids. Dewatering of the wet web, pressing and drying will benefit from knowledge of the relationships between the web structure and its transport coefficients. The structure of the pore space within a paper sheet is imaged in serial sections using x-ray micro computed tomography. The three dimensional structure is reconstructed from these sections using digital image processing techniques. The structure is then analyzed by measuring traditional descriptors for the pore space such as specific surface area and porosity. A sequence of microtomographs was imaged at approximately 2 μm intervals and the three-dimensional pore-fiber structure was reconstructed. The pore size distributions for both in-plane as well as transverse pores were measured. Significant differences in the in-plane (XY) and the transverse directions in pore characteristics are found and may help partly explain the different liquid and vapor transport properties in the in-plane and transverse directions. Results with varying sheet structures compare favorably with conventional mercury intrusion porosimetry data. Interestingly, the transverse pore structure appears to be more open with larger pore size distribution compared to the in plane pore structure. This may help explain the differences in liquid and vapor transport through the in plane and transverse structures during the paper manufacturing process and during end-use application. Comparison of Z-directional structural details of hand sheet and commercially made fine paper samples show a distinct difference in pore size distribution both in the in-plane and transverse direction. The method presented here may provide a useful tool to the papermaker to truly engineer the structure of paper and board tailored to specific end-use applications. The difference in surface structure between

  10. 3-D Characterization of the Structure of Paper and Paperboard and Their Application to Optimize Drying and Water Removal Processes and End-Use Applications

    International Nuclear Information System (INIS)

    Shri Ramaswamy, University of Minnesota; B.V. Ramarao

    2004-01-01

    The three dimensional structure of paper materials plays a critical role in the paper manufacturing process especially via its impact on the transport properties for fluids. Dewatering of the wet web, pressing and drying will benefit from knowledge of the relationships between the web structure and its transport coefficients. The structure of the pore space within a paper sheet is imaged in serial sections using x-ray micro computed tomography. The three dimensional structure is reconstructed from these sections using digital image processing techniques. The structure is then analyzed by measuring traditional descriptors for the pore space such as specific surface area and porosity. A sequence of microtomographs was imaged at approximately 2 μm intervals and the three-dimensional pore-fiber structure was reconstructed. The pore size distributions for both in-plane as well as transverse pores were measured. Significant differences in the in-plane (XY) and the transverse directions in pore characteristics are found and may help partly explain the different liquid and vapor transport properties in the in-plane and transverse directions. Results with varying sheet structures compare favorably with conventional mercury intrusion porosimetry data. Interestingly, the transverse pore structure appears to be more open with larger pore size distribution compared to the in plane pore structure. This may help explain the differences in liquid and vapor transport through the in plane and transverse structures during the paper manufacturing process and during end-use application. Comparison of Z-directional structural details of hand sheet and commercially made fine paper samples show a distinct difference in pore size distribution both in the in-plane and transverse direction. The method presented here may provide a useful tool to the papermaker to truly engineer the structure of paper and board tailored to specific end-use applications. The difference in surface structure between

  11. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
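
    As an illustration of metric (i), one common form of the centered root mean square error of a homogenized series $x_t$ against the true homogeneous series $h_t$ over $n$ time steps (the exact definition used by HOME may differ in detail) is

    $$
    \mathrm{CRMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\Bigl[(x_t - \bar{x}) - (h_t - \bar{h})\Bigr]^2}
    $$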

  12. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  13. Differential Targeting of Hsp70 Heat Shock Proteins HSPA6 and HSPA1A with Components of a Protein Disaggregation/Refolding Machine in Differentiated Human Neuronal Cells following Thermal Stress

    Directory of Open Access Journals (Sweden)

    Ian R. Brown

    2017-04-01

    Full Text Available Heat shock proteins (Hsps) co-operate in multi-protein machines that counter protein misfolding and aggregation and involve DNAJ (Hsp40), HSPA (Hsp70), and HSPH (Hsp105α). The HSPA family is a multigene family composed of inducible and constitutively expressed members. Inducible HSPA6 (Hsp70B') is found in the human genome but not in the genomes of mouse and rat. To advance knowledge of this little studied HSPA member, the targeting of HSPA6 to stress-sensitive neuronal sites with components of a disaggregation/refolding machine was investigated following thermal stress. HSPA6 targeted the periphery of nuclear speckles (perispeckles) that have been characterized as sites of transcription. However, HSPA6 did not co-localize at perispeckles with DNAJB1 (Hsp40-1) or HSPH1 (Hsp105α). At 3 h after heat shock, HSPA6 co-localized with these members of the disaggregation/refolding machine at the granular component (GC) of the nucleolus. Inducible HSPA1A (Hsp70-1) and constitutively expressed HSPA8 (Hsc70) co-localized at nuclear speckles with components of the machine immediately after heat shock, and at the GC layer of the nucleolus at 1 h with DNAJA1 and BAG-1. These results suggest that HSPA6 exhibits targeting features that are not apparent for HSPA1A and HSPA8.

  14. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete definition of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  15. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
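
    The bang-off-bang parameterization can be written as a piecewise acceleration profile; in this sketch of the idea (not the flight code), the switch times $t_1$, $t_2$ and the unit thrust direction $\hat{u}$ are the parameters searched over:

    $$
    a(t) = \begin{cases} a_{\max}\,\hat{u}, & 0 \le t < t_1 \\ 0, & t_1 \le t < t_2 \\ -a_{\max}\,\hat{u}, & t_2 \le t \le t_f \end{cases}
    $$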

  16. Electricity savings ``soon come'' to Jamaica -- Assessing the potential for air conditioning and refrigeration end-use DSM

    Energy Technology Data Exchange (ETDEWEB)

    Conlon, T.; Hamzawi, E.; Campbell, V.

    1998-07-01

    With the support of the Inter-American Development Bank, the Global Environment Facility of the World Bank, and the Rockefeller Foundation, the national electric utility in Jamaica (Jamaica Public Service Company) has begun an assessment of the technical, economic, and financial opportunities for achieving demand-side management (DSM) energy savings in the air conditioning and refrigeration end uses. The feasibility and cost effectiveness of specific measures is being assessed for both the residential and commercial segments. While structured as a traditional load-research-based market assessment, the project uses ethnographic data collection and analysis techniques and involves collaboration with local contractors. The skills of local experts are being tapped to identify and interview the key market players, and to develop an understanding of the barriers to and opportunities for energy efficiency present in the evolving equipment markets. The paper outlines methods and presents preliminary case study results for the air conditioning market. The authors identify major groups of market players and dominant types of equipment, and provide an overview of market dynamics. The volume of sales passing through both formal and informal distribution channels is estimated and market barriers are identified. Based on the findings of the study, recommendations will be made for future program and policy initiatives designed to mitigate selected barriers in each of the supply chains.

  17. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  18. Treatment Algorithm for Ameloblastoma

    Directory of Open Access Journals (Sweden)

    Madhumati Singh

    2014-01-01

    Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), which constitutes 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, high recurrence rate, and a malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with fibula graft, and radical resection and reconstruction with rib graft, and their recurrence rates are reviewed with a study of five cases.

  19. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity...... diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....

  20. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with DAL (Data Access Library) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. Algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...

  1. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  2. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  3. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  4. Quantum gate decomposition algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".

  5. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.

  6. Irregular Applications: Architectures & Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high performance applications which deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  7. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  8. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of the natural background in the measurement facility

  9. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  10. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  11. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on the method of detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  12. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  13. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  14. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  15. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  16. Measurement of electrical energy and typification of end uses in the domestic sector; Medicion de energia electrica y tipificacion de usos finales en el sector domestico

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Juarez, Francisco; Maqueda Zamora, Martin Roberto [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)

    2003-07-01

    In this work the advantages of using modern measuring equipment that allows the disaggregation of load curves of domestic users with a single metering device are presented, together with samples of the measurements obtained from diverse domestic users. Some other recently developed complementary technologies that help in the application of energy efficiency measures in the various sectors are also mentioned; the use of this equipment supports measuring the effectiveness of the energy saving and demand-side management programs to be implemented. The objective of this article is to demonstrate the use of modern measuring equipment for the typification and measurement of end uses in the domestic sector, and to present the advantages of using this equipment in energy efficiency studies.

  17. Residential energy consumption for end uses in Mexico (1984-1994); Consumo de energia residencial por usos finales en Mexico (1984 y 1994)

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Oscar; Sheinbaum, Claudia [Instituto de Ingenieria de la UNAM, Mexico, D. F. (Mexico)

    1998-12-31

    This paper analyses the changes in equipment in dwellings and the residential energy consumption for end uses in Mexico in the 1984-1994 decade. The study is based on data from the Instituto Nacional de Estadistica, Geografia e Informatica (INEGI)'s Income-Expense in Homes National Survey and on estimates of the unit consumption of household appliances. The most important results show that food cooking represents 64% of residential energy consumption, water heating 22%, lighting 4%, and electric appliances and other uses of LP gas and natural gas 10%. The devices with the greatest saturation in 1994 were the gas stove (87%), the iron (85%), the TV (85%) and the refrigerator (64%). An analysis of equipment by income level shows a serious inequity in the country: the number of dwellings that have the appliances required to supply energy services depends greatly on the income level of the household.

  18. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. In this paper, the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989, is reviewed. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  19. Honing process optimization algorithms

    Science.gov (United States)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  1. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    Full Text Available The opposite degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from the design of neural networks, genetic algorithms, and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely: the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.

  2. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
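
    For comparison, the naive envelope that the alpha-shape method accelerates can be approximated with off-the-shelf grey-scale morphology; the disk-shaped structuring element below is an illustrative construction, not the paper's algorithm:

    ```python
    import numpy as np
    from scipy import ndimage

    def ball_envelopes(profile, r):
        """Naive morphological opening/closing of a 1-D profile with a disk
        (ball) structuring element of radius r samples -- the baseline that
        the alpha-shape based fast algorithm is designed to outperform."""
        x = np.arange(-r, r + 1)
        ball = np.sqrt(r**2 - x**2) - r  # nonflat structuring element, apex at 0
        opening = ndimage.grey_opening(profile, structure=ball)
        closing = ndimage.grey_closing(profile, structure=ball)
        return opening, closing
    ```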

  3. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  4. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography. Cryptography is a method to secure a file by writing hidden code to cover the original file. Therefore, people who are not involved in the cryptography cannot decrypt the hidden code to read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting the file using the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, by using the TEA algorithm to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
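
    In a hybrid scheme of this kind, the symmetric cipher does the bulk encryption while the asymmetric cipher protects only the key. A compact sketch of the well-known TEA block-encryption round structure follows (the LUC half and TEA decryption are omitted; variable names are illustrative):

    ```python
    def tea_encrypt_block(v, key, rounds=32):
        """Encrypt one 64-bit block (two 32-bit words) with a 128-bit TEA key
        (four 32-bit words), using the standard published round function."""
        v0, v1 = v
        k0, k1, k2, k3 = key
        delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
        for _ in range(rounds):
            s = (s + delta) & mask
            v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
            v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
        return v0, v1

    # e.g. tea_encrypt_block((0x01234567, 0x89ABCDEF),
    #                        (0xA56BABCD, 0x00000000, 0xFFFFFFFF, 0xABCDEF01))
    ```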

  5. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
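
    A minimal sketch of strict-priority goal selection over a single shared resource conveys the selection rule; this illustrates the quality metric only, not AVA's incremental algorithm, and the goal representation is an assumption:

    ```python
    def select_goals(goals, capacity):
        """Greedy strict-priority selection: a higher-priority goal is never
        pre-empted by a lower-priority one. `goals` are (priority, demand, name)
        tuples; `capacity` is the shared resource budget."""
        chosen, remaining = [], capacity
        for priority, demand, name in sorted(goals, reverse=True):
            if demand <= remaining:      # take the goal only if it still fits
                chosen.append(name)
                remaining -= demand
        return chosen

    # e.g. select_goals([(3, 5, "downlink"), (2, 4, "image"), (1, 2, "cal")], 7)
    # -> ["downlink", "cal"]
    ```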

  6. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
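
    Since these quantities are incomputable, any real compressor can stand in for Kolmogorov complexity; the zlib-based sketch below is one possible reading of the compression-based approximation, not the authors' exact estimator:

    ```python
    import zlib

    def C(s: bytes) -> int:
        """Compressed length as a computable stand-in for complexity."""
        return len(zlib.compress(s, 9))

    def cross_complexity(x: bytes, y: bytes) -> int:
        """Approximate cost of describing x in terms of y: the extra bytes
        needed to compress x once y has already been seen."""
        return C(y + x) - C(y)

    def relative_complexity(x: bytes, y: bytes) -> int:
        """Compression power lost by describing x via y, compared to the
        shortest (stand-alone) representation of x."""
        return cross_complexity(x, y) - C(x)
    ```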

  7. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
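
    Residual-strength damage metrics are often written as a degradation law of the following general (here linear) form, a common textbook choice that is not necessarily the rule implemented in FADAS:

    $$
    \sigma_r(n) = \sigma_s - \bigl(\sigma_s - \sigma_{\max}\bigr)\,\frac{n}{N}
    $$

    where $\sigma_s$ is the static strength, $\sigma_{\max}$ the peak cyclic stress, and $N$ the fatigue life at that stress level.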

  8. Experimental and kinetics study for phytoremediation of sugar mill effluent using water lettuce (Pistia stratiotes L.) and its end use for biogas production.

    Science.gov (United States)

    Kumar, Vinod; Singh, Jogendra; Pathak, V V; Ahmad, Shamshad; Kothari, Richa

    2017-10-01

    In the present study, the performance of phytoremediation by Pistia stratiotes on sugar mill effluent (SME) and its end use for biogas production are investigated. The objectives of the study are to determine the nutrient and pollution reduction efficiency of P. stratiotes from SME and to evaluate its biomass as a feedstock for biogas production. Various concentrations of SME (25, 50, 75, and 100%) were remediated by Pistia stratiotes (initial weight 150 g) outdoors for 60 days under a batch-mode experimental setup. The results showed that P. stratiotes achieved a marked reduction in nutrient (TKN, 72.86%; TP, 71.49%) and pollutant load (EC, 25.69%; TDS, 57.26%; BOD, 69.40%; COD, 61.80%; Ca2+, 56.79%; Mg2+, 55.01%; Na+, 42.86%; K+, 54.38%; MPN, 78.13%; SPC, 60.13%) from 75% SME at the end of the experiment. The highest biomass (328.48 ± 2.04 g) and chlorophyll content (3.62 ± 3.04 mg/g) were also achieved with 75% SME. The dried biomass of P. stratiotes (from 75% SME) was inoculated with cow dung (10% w/v) and diluted with distilled water (1:10). The whole content was used as a substrate for biogas production with a hydraulic retention time (HRT) of 30 days at room temperature. Substrate parameters such as pH, TS (%), COD (mg/L), TKN (%), TOC (%), VS (%), and C/N ratio were reduced from 7.85 to 6.0, 66.65 to 28.65%, 12,900 to 2800 mg/L, 0.95 to 0.75%, 45.54 to 19.5%, 76.87 to 28.78%, and 47.94 to 26.00, respectively, over the 30 days of HRT. Cumulative biogas production of about 8478.6 mL was estimated by the modified Gompertz equation. Thus, the present investigation not only achieved efficient nutrient and pollution reduction from SME but also proved the potential of P. stratiotes for biogas production.
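
    The modified Gompertz model mentioned above is commonly written as B(t) = P*exp(-exp(Rmax*e/P*(lam - t) + 1)), with P the biogas production potential, Rmax the maximum production rate, and lam the lag phase. A fitting sketch with SciPy follows; the daily readings are synthetic stand-ins for illustration, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, P, Rmax, lam):
            # Modified Gompertz model for cumulative biogas production.
            return P * np.exp(-np.exp(Rmax * np.e / P * (lam - t) + 1.0))

        t = np.arange(0, 31)                            # 30-day HRT, daily readings
        b = 8478.6 * np.exp(-np.exp(1.5 - 0.2 * t))     # hypothetical stand-in data
        (P, Rmax, lam), _ = curve_fit(gompertz, t, b,
                                      p0=[8000.0, 400.0, 5.0], maxfev=10000)
        print(f"P={P:.0f} mL, Rmax={Rmax:.1f} mL/day, lag={lam:.1f} days")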

  9. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
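
    The uninformed-versus-informed comparison reported above is easy to reproduce in miniature: on the same grid maze, count the nodes expanded by depth-first search against A* with a Manhattan heuristic. The sketch below implements only these two standard baselines, not the fungal master algorithm itself.

        import heapq

        def neighbors(p, maze):
            r, c = p
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) and maze[nr][nc] == 0:
                    yield (nr, nc)

        def dfs_expansions(maze, start, goal):
            # Uninformed depth-first search; counts nodes expanded.
            stack, seen, expanded = [start], {start}, 0
            while stack:
                p = stack.pop()
                expanded += 1
                if p == goal:
                    return expanded
                for q in neighbors(p, maze):
                    if q not in seen:
                        seen.add(q)
                        stack.append(q)
            return expanded

        def astar_expansions(maze, start, goal):
            # Informed search with a Manhattan-distance heuristic.
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            frontier, g, expanded = [(h(start), 0, start)], {start: 0}, 0
            while frontier:
                _, cost, p = heapq.heappop(frontier)
                expanded += 1
                if p == goal:
                    return expanded
                for q in neighbors(p, maze):
                    if cost + 1 < g.get(q, float("inf")):
                        g[q] = cost + 1
                        heapq.heappush(frontier, (g[q] + h(q), g[q], q))
            return expanded

        maze = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 1, 0]]
        print(dfs_expansions(maze, (0, 0), (3, 3)),
              astar_expansions(maze, (0, 0), (3, 3)))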

  10. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support for Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  11. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  12. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  13. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  14. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  15. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  16. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  17. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images using the plain LSB algorithm is easily detected with high accuracy by χ² (chi-square) and RS steganalysis. We started by selecting the information-embedding locations and modifying the embedding method; combining this with a sub-affine transformation and the matrix coding method, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm resists χ² and RS steganalysis effectively.
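
    For reference, the plain sequential LSB baseline that χ² and RS steganalysis detect so reliably is sketched below on a synthetic grayscale array; the paper's countermeasures (sub-affine transformation and matrix coding) are deliberately not reproduced here.

        import numpy as np

        def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
            # Write payload bits into the least significant bit of each pixel.
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            if bits.size > pixels.size:
                raise ValueError("payload too large for cover image")
            out = pixels.copy().ravel()
            out[:bits.size] = (out[:bits.size] & 0xFE) | bits
            return out.reshape(pixels.shape)

        def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
            bits = (pixels.ravel()[:n_bytes * 8] & 1).astype(np.uint8)
            return np.packbits(bits).tobytes()

        cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
        stego = embed_lsb(cover, b"secret")
        assert extract_lsb(stego, 6) == b"secret"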

  18. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  19. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes, and user-defined highlighting, which form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph-based algorithms such as graph drawing, list manipulation or more traditional gra...

  20. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
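
    A straightforward recursive enumeration of all maximal chains, given the cover relation of a finite poset, might look as follows (an illustrative Python sketch, not the paper's Algol realization):

        def maximal_chains(covers, bottoms):
            # covers[x] lists the elements immediately above x; a chain is
            # maximal when it runs from a minimal to a maximal element.
            chains = []

            def extend(chain):
                ups = covers.get(chain[-1], [])
                if not ups:                    # reached a maximal element
                    chains.append(list(chain))
                    return
                for y in ups:
                    chain.append(y)
                    extend(chain)
                    chain.pop()

            for b in bottoms:
                extend([b])
            return chains

        # Divisors of 12 ordered by divisibility (cover relation only):
        covers = {1: [2, 3], 2: [4, 6], 3: [6], 4: [12], 6: [12]}
        print(maximal_chains(covers, [1]))
        # [[1, 2, 4, 12], [1, 2, 6, 12], [1, 3, 6, 12]]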

  1. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper surveys typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of routing algorithms are examined, clustering-based routing algorithms and other typical routing algorithms, and the advantages, disadvantages, and applicability of each are discussed.

  2. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  3. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement learning...

  4. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to (exact repeat, reverse repeat) fragments of the DNA sequence is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
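
    The baseline that any DNA-specific compressor must beat is the naive packing of the four bases at 2 bits/base, sketched below. DNABIT Compress improves on this by assigning dedicated bit codes to exact-repeat and reverse-repeat fragments; that repeat-coding layer is not reproduced in this illustration.

        ENC = {"A": 0, "C": 1, "G": 2, "T": 3}
        DEC = "ACGT"

        def pack(seq: str) -> bytes:
            # Four bases per byte, MSB first; final byte is zero-padded.
            out = bytearray()
            for i in range(0, len(seq), 4):
                chunk, b = seq[i:i + 4], 0
                for base in chunk:
                    b = (b << 2) | ENC[base]
                out.append(b << 2 * (4 - len(chunk)))
            return bytes(out)

        def unpack(data: bytes, n: int) -> str:
            bases = [DEC[(b >> s) & 3] for b in data for s in (6, 4, 2, 0)]
            return "".join(bases[:n])

        s = "ACGTACGTTG"
        assert unpack(pack(s), len(s)) == s   # exactly 2 bits per base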

  5. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used such a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
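
    A compact sketch of one DGEA generation: measure the distance-to-average-point diversity, then run a mutation-only exploration phase when diversity has collapsed and a selection-plus-recombination exploitation phase otherwise. The threshold, mutation scale, and selection scheme below are illustrative choices, not the published DGEA settings.

        import numpy as np

        def dgea_generation(pop, fitness, d_low=0.01, sigma=0.3):
            # Diversity: mean distance to the average point of the population.
            diversity = np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1))
            if diversity < d_low:
                # Exploration: mutation only, to push the population apart.
                return pop + np.random.normal(0.0, sigma, pop.shape)
            # Exploitation: truncation selection + arithmetic crossover.
            n = len(pop)
            winners = pop[np.argsort(fitness(pop))[: max(2, n // 2)]]
            a, b = winners[np.random.randint(0, len(winners), (2, n))]
            w = np.random.rand(n, 1)
            return w * a + (1.0 - w) * b

        fit = lambda P: np.sum(P**2, axis=1)        # sphere (minimization)
        pop = np.random.uniform(-5, 5, (30, 2))
        for _ in range(200):
            pop = dgea_generation(pop, fit)
        print(fit(pop).min())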

  6. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  7. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  8. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements. One of these elements is "marked" and our goal is to find it. We describe a quantum algorithm that solves this problem

  9. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  10. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...
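
    The speedup comes from replacing the direct O(T²) convolution with the expansion coefficients of (1-L)^d by an FFT-based convolution. A NumPy sketch (the coefficient recursion is standard; the padding and interface are implementation choices):

        import numpy as np

        def fracdiff(x, d):
            # Coefficients of (1-L)^d: b_0 = 1, b_k = b_{k-1} * (k-1-d) / k.
            T = len(x)
            b = np.empty(T)
            b[0] = 1.0
            for k in range(1, T):
                b[k] = b[k - 1] * (k - 1 - d) / k
            # Zero-pad to at least 2T-1 points to avoid circular wrap-around.
            n = 1 << (2 * T - 1).bit_length()
            return np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)[:T]

        x = np.random.randn(1000)
        assert np.allclose(fracdiff(x, 1.0)[1:], np.diff(x))  # d = 1: first difference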

  11. A Fast Fractional Difference Algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...

  12. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  13. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  14. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector and has been used to measure the performance of tau identification algorithms by ...

  15. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  16. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  17. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28

  18. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...

  19. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1).
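
    For readers unfamiliar with the second route, a self-contained Needleman-Wunsch implementation fits in a few dozen lines of Python. This is a generic textbook version with a linear gap penalty, not the authors' code:

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            # Fill the dynamic-programming score matrix F.
            n, m = len(a), len(b)
            F = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                F[i][0] = i * gap
            for j in range(1, m + 1):
                F[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    F[i][j] = max(F[i - 1][j - 1] + s,
                                  F[i - 1][j] + gap, F[i][j - 1] + gap)
            # Trace back from the bottom-right corner to recover the alignment.
            out_a, out_b, i, j = [], [], n, m
            while i > 0 or j > 0:
                s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
                if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
                    out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
                elif i > 0 and F[i][j] == F[i - 1][j] + gap:
                    out_a.append(a[i - 1]); out_b.append("-"); i -= 1
                else:
                    out_a.append("-"); out_b.append(b[j - 1]); j -= 1
            return "".join(reversed(out_a)), "".join(reversed(out_b)), F[n][m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))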

  20. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  1. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
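
    The alpha-trimmed mean filter used as the backbone of the first algorithm sorts the pixels in each window, discards a fraction of the extremes on both ends, and averages the rest, interpolating between mean and median filtering. A direct, unoptimized sketch (not the authors' full enhancement pipeline):

        import numpy as np

        def alpha_trimmed_mean(img, ksize=3, alpha=0.25):
            # Drop alpha/2 of each window's pixels at both extremes, average the rest.
            pad = ksize // 2
            padded = np.pad(img.astype(float), pad, mode="edge")
            trim = int(alpha * ksize * ksize / 2)
            out = np.empty(img.shape)
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    win = np.sort(padded[r:r + ksize, c:c + ksize].ravel())
                    out[r, c] = win[trim: win.size - trim].mean()
            return out

        noisy = np.random.randint(0, 256, (32, 32)).astype(np.uint8)
        print(alpha_trimmed_mean(noisy)[0, 0])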

  2. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  3. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.

  4. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  5. The Dropout Learning Algorithm

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
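
    In practice, the Bernoulli gating described above is usually implemented as "inverted" dropout, rescaling activations at training time so that the test-time forward pass needs no change. A minimal sketch (the retain probability and shapes are arbitrary illustrative choices):

        import numpy as np

        rng = np.random.default_rng(0)

        def dropout_forward(x, p_keep=0.5, train=True):
            # Bernoulli gating: keep each unit with probability p_keep and
            # rescale by 1/p_keep so the expected activation is unchanged.
            if not train:
                return x
            mask = (rng.random(x.shape) < p_keep) / p_keep
            return x * mask

        h = np.ones((4, 8))
        print(dropout_forward(h).mean())   # close to 1.0 in expectation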

  6. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  7. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., injecting simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  8. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  9. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  10. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
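
    A minimal generational GA showing the selection/crossover/mutation loop on bit strings (tournament selection, one-point crossover, bit-flip mutation; all parameter values are illustrative):

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, gens=100,
                              p_mut=0.01, p_cross=0.9):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            best = max(pop, key=fitness)
            for _ in range(gens):
                def tournament():
                    a, b = random.sample(pop, 2)
                    return a if fitness(a) >= fitness(b) else b
                nxt = []
                while len(nxt) < pop_size:
                    p1, p2 = tournament(), tournament()
                    if random.random() < p_cross:        # one-point crossover
                        cut = random.randrange(1, n_bits)
                        p1 = p1[:cut] + p2[cut:]
                    # Bit-flip mutation applied gene by gene.
                    nxt.append([bit ^ (random.random() < p_mut) for bit in p1])
                pop = nxt
                best = max(pop + [best], key=fitness)
            return best

        # One-max: maximize the number of 1 bits in the string.
        print(sum(genetic_algorithm(fitness=sum)))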

  11. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  12. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from

  13. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in computational power and operating principle, named a quantum computer, are discussed. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some of the effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.

  14. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  15. Optimally stopped variational quantum algorithms

    Science.gov (United States)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  16. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  17. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.

  18. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  19. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  20. Aggregation Algorithms in Heterogeneous Tables

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2006-01-01

    Full Text Available Heterogeneous tables are most often encountered in aggregation problems. One solution to this problem is to standardize these tables of figures. In this paper, we propose some methods of aggregation based on hierarchical algorithms.

  1. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available ‘Logical diagrams’, a representative example of a modular eLearning-platform application, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings that are called blocks and are connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  2. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.

  3. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  4. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as "deterministic annealing", and is reminiscent of the "deterministic Boltzmann machine". The algorithm is less time-consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  5. Algorithms and Public Service Media

    OpenAIRE

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond confronting the ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a pra...

  6. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives, we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.

  7. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem encountered when driving a fully programmable μ-processor-controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional (hardware-imposed) restrictions are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described.
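
    The flavor of the problem can be conveyed by a greedy sketch: from the current endpoint, binary-search for the farthest point such that a straight segment stays within a tolerance of the desired function, then repeat. The hardware-imposed restrictions handled by the actual algorithm are ignored here; this is an illustrative stand-in, not the published method.

        import numpy as np

        def piecewise_linear(f, x0, x1, tol, n_probe=64):
            # Greedily choose segment endpoints keeping |f - line| <= tol.
            xs = [x0]
            while xs[-1] < x1:
                a, lo, hi = xs[-1], xs[-1], x1
                for _ in range(40):          # binary search on the endpoint
                    mid = 0.5 * (lo + hi)
                    t = np.linspace(a, mid, n_probe)
                    line = f(a) + (f(mid) - f(a)) * (t - a) / max(mid - a, 1e-15)
                    if np.max(np.abs(f(t) - line)) <= tol:
                        lo = mid
                    else:
                        hi = mid
                xs.append(lo if lo > a else x1)   # guard against stalling
            return xs

        print(len(piecewise_linear(np.sin, 0.0, 6.283, tol=1e-3)) - 1, "segments")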

  8. Neutronic rebalance algorithms for SIMMER

    International Nuclear Information System (INIS)

    Soran, P.D.

    1976-05-01

    Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques

  9. Euclidean shortest paths exact or approximate algorithms

    CERN Document Server

    Li, Fajie

    2014-01-01

    This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.

  10. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well-known algorithms for progressive radiosity and Monte Carlo particle transport.

  11. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  12. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  13. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.

  14. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function from trigonometry. In the algorithm, random individuals are created, as many as the number of search agents, with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the regions expected to give good results are scanned, instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and provides faster convergence, increasing the importance of this new method.
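
    A simplified sketch of the sine-driven update follows. The golden-ratio coefficients x1 and x2 and the update form reflect the idea described above, but the full method also adapts x1 and x2 by golden-section search during the run, which is omitted here; treat this as an illustrative reading of the record, not a faithful reimplementation.

        import numpy as np

        def gold_sa(obj, dim=2, n_agents=20, iters=200, lb=-5.0, ub=5.0, seed=0):
            rng = np.random.default_rng(seed)
            tau = (np.sqrt(5.0) - 1.0) / 2.0        # golden-ratio conjugate
            x1 = -np.pi + (1 - tau) * 2 * np.pi     # golden-section coefficients
            x2 = -np.pi + tau * 2 * np.pi
            X = rng.uniform(lb, ub, (n_agents, dim))
            best = min(X, key=obj).copy()
            for _ in range(iters):
                r1 = rng.uniform(0, 2 * np.pi, (n_agents, 1))  # step magnitude
                r2 = rng.uniform(0, np.pi, (n_agents, 1))      # step direction
                X = X * np.abs(np.sin(r1)) \
                    - r2 * np.sin(r1) * np.abs(x1 * best - x2 * X)
                X = np.clip(X, lb, ub)
                cand = min(X, key=obj)
                if obj(cand) < obj(best):
                    best = cand.copy()
            return best, obj(best)

        print(gold_sa(lambda x: float(np.sum(x**2))))   # sphere test function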

  15. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  16. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  17. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used technology for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used to optimize these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a reliable medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was built. Using Matlab® software, a computational algorithm was developed that quantifies the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite). This was accomplished by applying a mask and a Gaussian histogram-separation technique. As a result, an average soft-tissue thickness of 18.97 mm and an average bone thickness of 6.15 mm were obtained, equivalent to 23.87 mm of acrylic and 1.07 mm of aluminum in the simulator materials. The results agree with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom
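
    Purely as an illustration of the quantification step (the paper's mask and Gaussian histogram-separation technique is not reproduced here), a Python sketch with assumed Hounsfield-unit windows:

      import numpy as np

      SOFT_TISSUE = (-100, 300)   # assumed HU window for soft tissue
      BONE = (300, 2000)          # assumed HU window for bone

      def mean_thickness(ct_slice, hu_window, voxel_mm):
          """Average tissue thickness along the beam (column) direction."""
          lo, hi = hu_window
          mask = (ct_slice >= lo) & (ct_slice < hi)
          per_column = mask.sum(axis=0) * voxel_mm   # voxel count * voxel size
          cols = per_column[per_column > 0]          # skip columns with no tissue
          return float(cols.mean()) if cols.size else 0.0

      slice_hu = np.random.randint(-1000, 1500, size=(256, 256))  # stand-in data
      print(mean_thickness(slice_hu, SOFT_TISSUE, voxel_mm=0.5))
      print(mean_thickness(slice_hu, BONE, voxel_mm=0.5))

    Converting the tissue thicknesses into equivalent acrylic and aluminum thicknesses would then require attenuation-matching data, which this sketch does not attempt.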

  18. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content---algorithmic randomness---present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can ''decide'' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite
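
    In symbols (our own shorthand, not quoted from the record), the proposed physical entropy of a system about which data d are available combines the two contributions:

      \[
        S_d \;=\; H(\rho_d) \;+\; K(d),
      \]

    where H(\rho_d) is the Shannon missing information about the microstate given the available data d, and K(d) is the algorithmic information content of that data record itself.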

  19. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))
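
    The HITA territory concept and the defence-node force calculation named in the record are not reproduced here; the following Python sketch shows only the generic 3-D bucket-sort step, hashing contact nodes into cubic cells so that candidate contact pairs are sought among neighbouring cells only. All names are illustrative.

      import math
      from collections import defaultdict

      def bucket_sort_nodes(nodes, cell):
          """Hash node indices into cubic cells of side `cell`."""
          buckets = defaultdict(list)
          for idx, (x, y, z) in enumerate(nodes):
              key = (math.floor(x / cell), math.floor(y / cell), math.floor(z / cell))
              buckets[key].append(idx)
          return buckets

      def candidate_pairs(nodes, cell):
          buckets = bucket_sort_nodes(nodes, cell)
          for (i, j, k), members in buckets.items():
              # gather nodes from this cell and the 26 neighbouring cells
              near = [n for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      for dk in (-1, 0, 1)
                      for n in buckets.get((i + di, j + dj, k + dk), [])]
              for a in members:
                  for b in near:
                      if a < b:
                          yield a, b          # candidate pair for contact check

      nodes = [(0.1, 0.2, 0.0), (0.15, 0.22, 0.05), (5.0, 5.0, 5.0)]
      print(list(candidate_pairs(nodes, cell=1.0)))   # only the close pair (0, 1)

    Sorting nodes into buckets makes the search cost roughly linear in the number of contact nodes, instead of quadratic all-pairs testing.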

  20. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin called the GRK algorithm, are also discussed.
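
    As a worked illustration of the quadratic speed-up (a classical NumPy simulation of the amplitudes, not a result from the record): for N = 64 the optimal number of Grover iterations is about (pi/4)*sqrt(N), roughly 6, after which the marked item is measured with probability close to 1. The database size and marked index below are arbitrary.

      import numpy as np

      N, marked = 64, 42                            # assumed database size/target
      state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
      iters = int(round(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) * sqrt(N) = 6

      for _ in range(iters):
          state[marked] *= -1                       # oracle flips the marked sign
          state = 2 * state.mean() - state          # diffusion: invert about mean

      # prints 6, the index 42, and a success probability near 0.997
      print(iters, int(np.argmax(state ** 2)), round(float(state[marked] ** 2), 3))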

  1. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  2. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  3. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like lines, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; and quantum walks on generic graphs, describing methods to calculate the limiting d...

  4. Gossip algorithms in quantum networks

    International Nuclear Information System (INIS)

    Siomau, Michael

    2017-01-01

    Gossip algorithms is a common term for protocols of unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows one to speed up quantum information dissemination, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.
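
    For a feel of the classical baseline against which such speed-ups are measured (a classical illustration, not the quantum protocol in the record): randomized push gossip, where each informed node calls one random peer per round, informs all n nodes in O(log n) rounds.

      import math, random

      def push_gossip_rounds(n, trials=100):
          """Average number of rounds until all n nodes are informed."""
          total = 0
          for _ in range(trials):
              informed, rounds = {0}, 0
              while len(informed) < n:
                  # each informed node pushes the rumour to one random peer
                  informed |= {random.randrange(n) for _ in informed}
                  rounds += 1
              total += rounds
          return total / trials

      for n in (10, 100, 1000):
          print(n, round(push_gossip_rounds(n), 1), "~log2(n) =", round(math.log2(n), 1))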

  5. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A timesharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive-job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data collected during timesharing. The algorithm includes an optimal procedure for swapping jobs out of memory when they are replaced. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed

  6. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such algorithms may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute of an optimal tree is NP-hard. © Springer-Verlag Berlin Heidelberg 2011.
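
    A toy Python sketch of the separation heuristic mentioned above, not taken from the cited papers: at each step, pick the binary attribute whose 0/1 split of the current object set is closest to half-and-half.

      def most_even_attribute(objects, attributes):
          """objects: list of dicts attr -> 0/1; returns the most even splitter."""
          def imbalance(attr):
              ones = sum(obj[attr] for obj in objects)
              return abs(2 * ones - len(objects))   # 0 means a perfect split
          return min(attributes, key=imbalance)

      objs = [{"a": 0, "b": 1}, {"a": 0, "b": 0}, {"a": 1, "b": 1}, {"a": 1, "b": 1}]
      print(most_even_attribute(objs, ["a", "b"]))  # "a": splits the set 2/2

    The NP-hardness results above explain why such greedy splitting is used in practice even though it carries no optimality guarantee.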

  7. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  8. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano....... However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower...... bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  9. Gossip algorithms in quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)

    2017-01-23

    Gossip algorithms is a common term for protocols of unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows one to speed up quantum information dissemination, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.

  10. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    The objective of this research project is to extend the range of application, improve the efficiency, and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared-memory environment. The project considered application to consolidation flows of major interest in high-throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  11. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

-trace. Here we present three different approaches for reconstruction of Cα-traces from predictable measures. In our first approach [63, 62], the Cα-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE......) is more robust than standard Monte Carlo search. In the second approach for reconstruction of Cα-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower...... bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm...

  12. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
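
    A minimal NumPy sketch of the linear case the record says these methods reduce to: a damped additive Schwarz iteration on a 1-D Poisson problem with two overlapping subdomains. The nonlinear and hybrid variants of the paper are not reproduced; sizes and damping are illustrative.

      import numpy as np

      n = 50
      A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1))             # 1-D Laplacian stencil
      b = np.ones(n)

      doms = [np.arange(0, 30), np.arange(20, 50)]    # two overlapping subdomains
      x = np.zeros(n)
      for _ in range(100):
          r = b - A @ x
          dx = np.zeros(n)
          for d in doms:                              # independent local solves
              dx[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
          x += 0.5 * dx                               # damped additive update
      print(np.linalg.norm(b - A @ x))                # residual shrinks steadily

    The local solves are independent, which is the source of the parallelism; the multiplicative variant applies them sequentially and converges faster at the cost of that parallelism.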

  13. A generalization of Takane's algorithm for DEDICOM

    NARCIS (Netherlands)

    Kiers, Henk A.L.; ten Berge, Jos M.F.; Takane, Yoshio; de Leeuw, Jan

    An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the
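
    For context (our shorthand; the truncated record does not state it), the DEDICOM model fitted by such algorithms is usually written

      \[
        \mathbf{X} \;\approx\; \mathbf{A}\,\mathbf{R}\,\mathbf{A}^{\top},
      \]

    where X is the asymmetric data matrix, A is a loading matrix, and the small square matrix R captures the asymmetric relationships among the dimensions; the damping parameter mentioned above moderates each iterative update of the estimates.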

  14. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  15. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and
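
    Since the record is truncated, here is only a schematic Python toy of the stochastic re-ranking step commonly attributed to the Gradual Learning Algorithm: constraints carry real-valued ranking values, evaluation adds Gaussian noise, and on an error the learner nudges the values by a small plasticity step. Constraint names and magnitudes below are assumptions.

      import random

      def gla_step(ranks, prefer_error, prefer_target, plasticity=0.1):
          """On a learning error, demote/promote the responsible constraints."""
          for c in prefer_error:
              ranks[c] -= plasticity    # demote constraints favouring the error
          for c in prefer_target:
              ranks[c] += plasticity    # promote constraints favouring the target
          return ranks

      def noisy_order(ranks, sigma=2.0):
          """Evaluation-time ranking: ranking value plus Gaussian noise."""
          return sorted(ranks, key=lambda c: ranks[c] + random.gauss(0, sigma),
                        reverse=True)

      ranks = {"Faith": 100.0, "Markedness": 100.0}
      ranks = gla_step(ranks, prefer_error={"Markedness"}, prefer_target={"Faith"})
      print(ranks, noisy_order(ranks))

    Because rankings are gradient rather than strict, repeated small steps let the learner model free variation, one of the properties the article assesses.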

  16. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  17. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
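
    The record breaks off, but the two MCL operators are well known: expansion (matrix powering, which spreads flow along the graph) and inflation (entrywise powering with column renormalisation, which strengthens strong flow). A minimal NumPy sketch with illustrative parameter choices:

      import numpy as np

      def mcl(adj, e=2, r=2.0, iters=50):
          M = adj + np.eye(len(adj))              # add self-loops
          M = M / M.sum(axis=0)                   # column-stochastic flow matrix
          for _ in range(iters):
              M = np.linalg.matrix_power(M, e)    # expansion: spread flow
              M = M ** r                          # inflation: favour strong flow
              M = M / M.sum(axis=0)
          # nonzero rows of the converged matrix define the clusters
          clusters = {tuple(np.nonzero(row > 1e-6)[0])
                      for row in M if row.max() > 1e-6}
          return sorted(clusters)

      A = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=float)
      print(mcl(A))   # the two bridged triangles separate into two clusters

    Raising the inflation parameter r makes the clustering finer; lowering it makes the clustering coarser.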

  18. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature convergence of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the results are satisfactory....
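
    The record's exact initialization scheme is lost behind the elision above, so the following is only a sketch of the standard chaotic-initialization idea it most likely refers to: seeding the population from a logistic map in its chaotic regime, which gives deterministic, well-spread values in (0, 1).

      def chaotic_population(n_agents, dim, x0=0.7, mu=4.0):
          """Generate a population from logistic-map iterates (illustrative)."""
          pop, x = [], x0
          for _ in range(n_agents):
              agent = []
              for _ in range(dim):
                  x = mu * x * (1 - x)        # logistic map, chaotic at mu = 4
                  agent.append(x)             # iterates densely cover (0, 1)
              pop.append(agent)
          return pop

      print(chaotic_population(3, 4))

    Compared with uniform random seeding, the chaotic sequence avoids clumping for small populations, which is the usual argument for its faster early convergence.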

  19. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  20. Gossip algorithms in quantum networks

    Science.gov (United States)

    Siomau, Michael

    2017-01-01

    Gossip algorithms is a common term for protocols of unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows one to speed up quantum information dissemination, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.