WorldWideScience

Sample records for disaggregate analysis

  1. A GIS-based disaggregate spatial watershed analysis using RADAR data

    International Nuclear Information System (INIS)

    Al-Hamdan, M.

    2002-01-01

Hydrology is the study of water in all its forms, origins, and destinations on the earth. This paper develops a novel modeling technique using a geographic information system (GIS) to facilitate watershed hydrological routing using RADAR data. The RADAR rainfall data, segmented into 4 km by 4 km blocks, divide the watershed into several sub-basins that are modeled independently. A case study of the GIS-based disaggregate spatial watershed analysis using RADAR data is provided for South Fork Cowikee Creek near Batesville, Alabama. All the data necessary to complete the analysis are maintained in the ArcView GIS software. This paper concludes that the GIS-based disaggregate spatial watershed analysis using RADAR data is a viable method to calculate hydrological routing for large watersheds. (author)

  2. POVERTY AND CALORIE DEPRIVATION ACROSS SOCIO-ECONOMIC GROUPS IN RURAL INDIA: A DISAGGREGATED ANALYSIS

    OpenAIRE

    Gupta, Abha; Mishra, Deepak K.

    2013-01-01

This paper examines the linkages between calorie deprivation and poverty in rural India at a disaggregated level. It aims to explore the trends and patterns in levels of nutrient intake across social and economic groups. A spatial analysis at the state and NSS-region level unravels the spatial distribution of calorie deprivation in rural India. The gap between the incidence of poverty and calorie deprivation has also been investigated. The paper also estimates the factors influencing calorie depri...

  3. Energy consumption, carbon emissions and economic growth in Saudi Arabia: An aggregate and disaggregate analysis

    International Nuclear Information System (INIS)

    Alkhathlan, Khalid; Javid, Muhammad

    2013-01-01

The objective of this study is to examine the relationship among economic growth, carbon emissions and energy consumption at the aggregate and disaggregate levels. For the aggregate energy consumption model, we use total energy consumption per capita and CO2 emissions per capita based on total energy consumption. For the disaggregate analysis, we use oil, gas and electricity consumption models along with their respective CO2 emissions. The long-term income elasticities of carbon emissions in three of the four models are positive and higher than their estimated short-term income elasticities. These results suggest that carbon emissions increase with the increase in per capita income, supporting the belief that there is a monotonically increasing relationship between per capita carbon emissions and per capita income for the aggregate model and for the oil and electricity consumption models. The long- and short-term income elasticities of carbon emissions are negative for the gas consumption model. This result indicates that if the Saudi Arabian economy switched from oil to gas consumption, an increase in per capita income would reduce carbon emissions. The results also suggest that electricity is less polluting than other sources of energy. - Highlights: • Carbon emissions increase with the increase in per capita income in Saudi Arabia. • The income elasticity of CO2 is negative for the gas consumption model. • The income elasticity of CO2 is positive for the oil consumption model. • The results suggest that electricity is less polluting than oil and gas.

  4. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

Energy consumption growth has been much higher than economic growth in Taiwan in recent years, worsening the country's energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test with asymmetric dynamic adjustment proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integration between GDP and disaggregated energy consumption is confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment of energy consumption toward equilibrium is highly persistent until an appropriate threshold is reached; once it is, mean-reverting behavior sets in, with aggregate and disaggregated energy consumption growing faster than GDP in Taiwan.
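The two-regime error-correction idea can be sketched numerically: simulate a pair of series whose equilibrium error is corrected only weakly inside a threshold band and strongly outside it, then recover the two adjustment speeds. This is a toy illustration with synthetic data, not the Hansen-Seo test itself (which also estimates the threshold and the cointegrating vector):

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta, gamma = 1000, 1.0, 0.5   # sample size, cointegrating slope, threshold

# Simulate a two-regime threshold error-correction process: the equilibrium
# error w_t = y_t - beta * x_t is barely corrected inside the threshold band
# but adjusts strongly once |w| exceeds the threshold.
x = np.cumsum(rng.normal(size=T))              # random-walk regressor (e.g. GDP)
w = np.zeros(T)
for t in range(1, T):
    speed = 0.5 if abs(w[t - 1]) > gamma else 0.02
    w[t] = (1 - speed) * w[t - 1] + rng.normal(scale=0.2)
y = beta * x + w                               # cointegrated pair (y, x)

# Estimate the adjustment speed separately in each regime: regress the change
# in w on its lag, splitting observations at the (here known) threshold.
dw, lag = np.diff(w), w[:-1]
outer = np.abs(lag) > gamma
rho_outer = np.polyfit(lag[outer], dw[outer], 1)[0]
rho_inner = np.polyfit(lag[~outer], dw[~outer], 1)[0]
print(f"adjustment outside band: {rho_outer:.2f}, inside band: {rho_inner:.2f}")
```

The recovered coefficient is strongly negative (mean-reverting) outside the band and near zero (persistent) inside it, mirroring the paper's finding.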

  5. An economic analysis of disaggregation of space assets: Application to GPS

    Science.gov (United States)

    Hastings, Daniel E.; La Tour, Paul A.

    2017-05-01

    New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.

  6. Carbon emissions, energy consumption and economic growth: An aggregate and disaggregate analysis of the Indian economy

    International Nuclear Information System (INIS)

    Ahmad, Ashfaq; Zhao, Yuhuan; Shahbaz, Muhammad; Bano, Sadia; Zhang, Zhonghua; Wang, Song; Liu, Ya

    2016-01-01

This study investigates the long- and short-run relationships among carbon emissions, energy consumption and economic growth in India at the aggregated and disaggregated levels during 1971–2014. The autoregressive distributed lag model is employed for the cointegration analyses and the vector error correction model is applied to determine the direction of causality between variables. Results show that a long-run cointegration relationship exists and that the environmental Kuznets curve is validated at the aggregated and disaggregated levels. Furthermore, energy (total energy, gas, oil, electricity and coal) consumption has a positive relationship with carbon emissions and a feedback effect exists between economic growth and carbon emissions. Thus, energy-efficient technologies should be used in domestic production to mitigate carbon emissions at the aggregated and disaggregated levels. The present study provides policy makers with new directions in drafting comprehensive policies with lasting impacts on the economy, energy consumption and environment towards sustainable development. - Highlights: • Relationships among carbon emissions, energy consumption and economic growth are investigated. • The EKC exists at aggregated and disaggregated levels for India. • All energy resources have positive effects on carbon emissions. • Gas energy consumption is less polluting than other energy sources in India.

  7. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-01-01

National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0–92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women’s access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve

  8. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important both for internal model dynamics (e.g. surface feedback effects on atmospheric fluxes, continental runoff production) and for simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required, in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment-scale application. The performance of grid-based hydrological models is evaluated for two large (10⁴ km²) UK catchments. Simple schemes, using sub-grid averages of individual land uses at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scales. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.
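The core precipitation disaggregation problem above can be illustrated with the simplest possible scheme: if rain covers only a fraction of a grid square, the point intensity over the wet area is the grid-average rate divided by that fraction. This is a hypothetical uniform-wet-area sketch; the paper's method additionally models the spatial distribution and memory of coverage:

```python
# If only a fraction f of a GCM grid square is actually raining, the point
# intensity over the wet area is the grid-average rate divided by f.
def point_intensity(grid_avg_mm_per_day: float, wet_fraction: float) -> float:
    if not 0 < wet_fraction <= 1:
        raise ValueError("wet fraction must lie in (0, 1]")
    return grid_avg_mm_per_day / wet_fraction

# A 2 mm/day grid average concentrated on 10% of the cell is a 20 mm/day event.
print(point_intensity(2.0, 0.10))   # → 20.0
```

This is why grid-average precipitation, used directly, badly underestimates local rainfall intensities and hence runoff.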

  9. A disaggregated analysis of the environmental Kuznets curve for industrial CO_2 emissions in China

    International Nuclear Information System (INIS)

    Wang, Yuan; Zhang, Chen; Lu, Aitong; Li, Li; He, Yanmin; ToJo, Junji; Zhu, Xiaodong

    2017-01-01

Highlights: • The existence of the EKC hypothesis for industrial carbon emissions is tested for China. • A semi-parametric panel regression is used along with the STIRPAT model. • The validity of the EKC hypothesis varies across industry sectors. • The EKC relation to income exists in the electricity and heat production sector. • The EKC relation to urbanization exists in the manufacturing sector. - Abstract: The present study concentrates on a Chinese context and explicitly examines the impacts of economic growth and urbanization on industrial carbon emissions by investigating the existence of an environmental Kuznets curve (EKC). Within the Stochastic Impacts by Regression on Population, Affluence and Technology framework, this is the first attempt to simultaneously explore the nexus between income/urbanization and disaggregated industrial carbon dioxide emissions, using panel data together with semi-parametric panel fixed-effects regression. Our dataset is a provincial panel of China spanning the period 2000–2013. With this information, we find evidence in support of an inverted U-shaped relationship between economic growth and carbon dioxide emissions in the electricity and heat production sector, and a similar relationship between urbanization and emissions only in the manufacturing sector. The heterogeneity in the EKC relationship across industry sectors implies an urgent need to design more specific carbon emission reduction policies for individual industry sectors. These findings also contribute to the emerging literature on the development-pollution nexus.

  10. Analysis of Fuel Cell Markets in Japan and the US: Experience Curve Development and Cost Reduction Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Max [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Smith, Sarah J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-15

    Fuel cells are both a longstanding and emerging technology for stationary and transportation applications, and their future use will likely be critical for the deep decarbonization of global energy systems. As we look into future applications, a key challenge for policy-makers and technology market forecasters who seek to track and/or accelerate their market adoption is the ability to forecast market costs of the fuel cells as technology innovations are incorporated into market products. Specifically, there is a need to estimate technology learning rates, which are rates of cost reduction versus production volume. Unfortunately, no literature exists for forecasting future learning rates for fuel cells. In this paper, we look retrospectively to estimate learning rates for two fuel cell deployment programs: (1) the micro-combined heat and power (CHP) program in Japan, and (2) the Self-Generation Incentive Program (SGIP) in California. These two examples have a relatively broad set of historical market data and thus provide an informative and international comparison of distinct fuel cell technologies and government deployment programs. We develop a generalized procedure for disaggregating experience-curve cost-reductions in order to disaggregate the Japanese fuel cell micro-CHP market into its constituent components, and we derive and present a range of learning rates that may explain observed market trends. Finally, we explore the differences in the technology development ecosystem and market conditions that may have contributed to the observed differences in cost reduction and draw policy observations for the market adoption of future fuel cell technologies. The scientific and policy contributions of this paper are the first comparative experience curve analysis of past fuel cell technologies in two distinct markets, and the first quantitative comparison of a detailed cost model of fuel cell systems with actual market data. 
The resulting approach is applicable to
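An experience (learning) curve relates unit cost to cumulative production, cost = C0 * q^(-b), and the learning rate is the fractional cost drop per doubling of cumulative output, 1 - 2^(-b). A sketch of how such a rate is estimated from market data; the shipment and cost figures below are invented, not the paper's:

```python
import numpy as np

# Hypothetical cumulative shipments (units) and unit costs (USD) for a
# fuel-cell product line; the numbers are illustrative only.
cum_units = np.array([100, 300, 1000, 3000, 10000, 30000])
unit_cost = np.array([30000, 24500, 19000, 15500, 12000, 9800])

# Experience curve: cost = C0 * q**(-b). Fitting in log-log space gives b;
# the learning rate is the cost reduction per doubling, LR = 1 - 2**(-b).
b = -np.polyfit(np.log(cum_units), np.log(unit_cost), 1)[0]
learning_rate = 1 - 2 ** (-b)
print(f"progress ratio: {2 ** (-b):.2f}, learning rate: {learning_rate:.1%}")
```

Disaggregating the observed cost reduction, as the paper does, means attributing portions of the fitted decline to separate drivers (scale, materials, design changes) rather than to a single aggregate curve.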

  12. Monetary Policy and Real Estate Prices: A Disaggregated Analysis for Switzerland

    OpenAIRE

    Berlemann, Michael; Freese, Julia

    2010-01-01

Most empirical studies found that monetary policy has a significant effect on house prices while stock markets remain unaffected by interest rate shocks. In this paper we conduct a more detailed analysis by studying various sub-segments of the real estate market. Employing a new dataset for Switzerland we estimate vector autoregressive models and find substitution effects between house and apartment prices on the one hand and rental prices on the other. Interestingly enough, commercial prope...

  13. Localization of SDGs through Disaggregation of KPIs

    Directory of Open Access Journals (Sweden)

    Manohar Patole

    2018-03-01

The United Nations' Agenda 2030 and Sustainable Development Goals (SDGs) pick up where the Millennium Development Goals (MDGs) left off. The SDGs set forth a formidable task for the global community and international sustainable development over the next 15 years. Learning from the successes and failures of the MDGs, government officials, development experts, and many other groups understood that localization is necessary to accomplish the SDGs, but how and what to localize remain open questions. The UN Inter-Agency and Expert Group on Sustainable Development Goals (UN IAEG-SDGs) sought to answer these questions through development of metadata behind the 17 goals, 169 associated targets and corresponding indicators of the SDGs. Data management is key to understanding how and what to localize, but to do it properly, the data and metadata need to be properly disaggregated. This paper reviews the utilization of disaggregation analysis for localization and demonstrates the process of identifying opportunities for subnational interventions to achieve multiple targets and indicators through the formation of new integrated key performance indicators. A case study on SDG 6: Clean Water and Sanitation is used to elucidate these points. The examples presented here are only illustrative; future research and the development of an analytical framework for localization and disaggregation of the SDGs would be a valuable tool for national and local governments, implementing partners and other interested parties.
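As a toy example of the kind of disaggregation analysis discussed here, a national SDG 6 indicator can be decomposed into subnational KPIs to reveal which regions lag the national average. Region names, populations and coverage rates below are invented for illustration:

```python
# Disaggregating a national SDG 6 indicator (share of population using safely
# managed drinking water) into subnational KPIs. All figures are hypothetical.
regions = {
    "North": {"population": 4_000_000, "coverage": 0.82},
    "South": {"population": 6_000_000, "coverage": 0.64},
    "Coast": {"population": 2_000_000, "coverage": 0.91},
}

total_pop = sum(r["population"] for r in regions.values())
# The national figure is the population-weighted mean of regional coverage.
national = sum(r["population"] * r["coverage"] for r in regions.values()) / total_pop
print(f"national coverage: {national:.1%}")

for name, r in regions.items():
    flag = "below" if r["coverage"] < national else "at/above"
    print(f"{name}: {r['coverage']:.0%} ({flag} national average)")
```

The national aggregate (74.5% here) hides that one region sits far below it, which is exactly the localization gap disaggregated KPIs are meant to expose.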

  14. Load Disaggregation Technologies: Real World and Laboratory Performance

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.; Butner, Ryan S.; Johnson, Erica M.

    2016-09-28

Improvements in low-cost interval metering and communication technology over the past ten years have enabled load disaggregation (or non-intrusive load monitoring) technologies to mature and to better estimate and report the energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility and customer facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate energy consumption of end-uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.
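One commonly used metric for scoring a load disaggregation (NILM) algorithm is estimated-energy accuracy: one minus the total absolute error across appliances and intervals, normalised by twice the total true energy (1.0 = perfect). A sketch with made-up readings; real test protocols also weigh event detection and time alignment:

```python
# Estimated-energy accuracy for NILM output. Appliance names and interval
# kWh values below are invented for illustration.
def estimation_accuracy(actual, estimated):
    """actual/estimated: dicts mapping appliance name -> list of interval kWh."""
    num = sum(abs(a - e)
              for app in actual
              for a, e in zip(actual[app], estimated[app]))
    den = 2 * sum(sum(vals) for vals in actual.values())
    return 1 - num / den

truth = {"fridge": [0.5, 0.5, 0.5], "heater": [2.0, 0.0, 2.0]}
guess = {"fridge": [0.4, 0.6, 0.5], "heater": [1.5, 0.5, 2.0]}
print(round(estimation_accuracy(truth, guess), 3))   # → 0.891
```

A metric like this can be computed identically in the field and in the laboratory, which is the comparability the paper argues common test methods should provide.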

  15. Aggregating and Disaggregating Flexibility Objects

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja

    2015-01-01

    In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts...... and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from...

  16. Disaggregated Futures and Options Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures and Options Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  17. Disaggregated Futures-Only Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures-Only Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  18. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than as a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space to represent the transformation dynamics; and (2) use of a local approximation (nearest-neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement in all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
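A minimal sketch of the phase-space / nearest-neighbor idea, using synthetic data: to split a 2-day total into daily values, embed the coarse series in an m-dimensional phase space, find the k most similar historical states, and reuse their observed day-one shares. The paper's procedure is analogous for each pair of successive resolutions:

```python
import numpy as np

rng = np.random.default_rng(1)
daily_hist = np.abs(rng.normal(10.0, 3.0, size=200))   # synthetic daily flows
coarse_hist = daily_hist.reshape(-1, 2).sum(axis=1)    # 100 two-day totals

def disaggregate(recent, k=5):
    """Split recent[-1] (a 2-day total) into two daily values.

    recent: the last m coarse values, treated as the current phase-space state.
    The k nearest historical states vote, via their observed day-one share,
    on how to split the current total.
    """
    m = len(recent)
    states = np.lib.stride_tricks.sliding_window_view(coarse_hist, m)
    dists = np.linalg.norm(states - np.asarray(recent), axis=1)
    nearest = np.argsort(dists)[:k]
    end = nearest + m - 1                   # index of each state's last coarse step
    frac = daily_hist[2 * end] / coarse_hist[end]   # neighbours' day-one shares
    share = frac.mean()
    return recent[-1] * share, recent[-1] * (1 - share)

d1, d2 = disaggregate(coarse_hist[-3:])
print(round(d1, 2), round(d2, 2))
```

By construction the two daily values sum exactly to the coarse total, and no stochastic model is fitted: the split is read off the local neighbourhood of the reconstructed dynamics.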

  19. Disaggregate energy consumption and industrial production in South Africa

    Energy Technology Data Exchange (ETDEWEB)

    Ziramba, Emmanuel [Department of Economics, University of South Africa, P.O Box 392, UNISA 0003 (South Africa)

    2009-06-15

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment. (author)

  1. Disaggregation of sectors in Social Accounting Matrices using a customized Wolsky method

    OpenAIRE

    BARRERA-LOZANO Margarita; MAINAR CAUSAPÉ ALFREDO; VALLÉS FERRER José

    2014-01-01

    The aim of this work is to enable the implementation of disaggregation processes for specific and homogeneous sectors in Social Accounting Matrices (SAMs), while taking into account the difficulties in data collection from these types of sectors. The method proposed is based on the Wolsky technique, customized for the disaggregation of Social Accounting Matrices, within the current-facilities framework. The Spanish Social Accounting Matrix for 2008 is used as a benchmark for the analysis, and...
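The core mechanical step in Wolsky-style SAM disaggregation is splitting one account's row and column into sub-accounts using fixed shares, so that re-aggregating the sub-accounts recovers the original matrix exactly. A minimal sketch with an illustrative 3x3 SAM and a 60/40 split (the paper's customized method additionally addresses data-scarce sectors):

```python
import numpy as np

# Illustrative 3x3 Social Accounting Matrix; entry [i, j] is the payment
# from account j to account i. Values and the 60/40 split are invented.
sam = np.array([[0., 5., 3.],
                [4., 0., 2.],
                [6., 1., 0.]])
w = np.array([0.6, 0.4])       # shares of the two sub-accounts of account 0

n = sam.shape[0]
big = np.zeros((n + 1, n + 1))
big[2:, 2:] = sam[1:, 1:]                    # untouched accounts
big[0:2, 2:] = np.outer(w, sam[0, 1:])       # split account 0's row by shares
big[2:, 0:2] = np.outer(sam[1:, 0], w)       # split account 0's column by shares
big[0:2, 0:2] = sam[0, 0] * np.outer(w, w)   # intra-account flows

# Consistency check: aggregating the two sub-accounts must recover the SAM.
agg = np.zeros_like(sam)
agg[0, 0] = big[0:2, 0:2].sum()
agg[0, 1:] = big[0:2, 2:].sum(axis=0)
agg[1:, 0] = big[2:, 0:2].sum(axis=1)
agg[1:, 1:] = big[2:, 2:]
print(np.allclose(agg, sam))   # → True
```

Because the shares sum to one, row and column totals are preserved, which is the accounting consistency a SAM disaggregation must maintain.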

  2. Reducing out-of-pocket expenditures to reduce poverty: a disaggregated analysis at rural-urban and state level in India.

    Science.gov (United States)

    Garg, Charu C; Karan, Anup K

    2009-03-01

    Out-of-pocket (OOP) expenditure on health care has significant implications for poverty in many developing countries. This paper aims to assess the differential impact of OOP expenditure and its components, such as expenditure on inpatient care, outpatient care and on drugs, across different income quintiles, between developed and less developed regions in India. It also attempts to measure poverty at disaggregated rural-urban and state levels. Based on Consumer Expenditure Survey (CES) data from the National Sample Survey (NSS), conducted in 1999-2000, the share of households' expenditure on health services and drugs was calculated. The number of individuals below the state-specific rural and urban poverty line in 17 major states, with and without netting out OOP expenditure, was determined. This also enabled the calculation of the poverty gap or poverty deepening in each region. Estimates show that OOP expenditure is about 5% of total household expenditure (ranging from about 2% in Assam to almost 7% in Kerala) with a higher proportion being recorded in rural areas and affluent states. Purchase of drugs constitutes 70% of the total OOP expenditure. Approximately 32.5 million persons fell below the poverty line in 1999-2000 through OOP payments, implying that the overall poverty increase after accounting for OOP expenditure is 3.2% (as against a rise of 2.2% shown in earlier literature). Also, the poverty headcount increase and poverty deepening is much higher in poorer states and rural areas compared with affluent states and urban areas, except in the case of Maharashtra. High OOP payment share in total health expenditures did not always imply a high poverty headcount; state-specific economic and social factors played a role. The paper argues for better methods of capturing drugs expenditure in household surveys and recommends that special attention be paid to expenditures on drugs, in particular for the poor. 
Targeted policies in just five poor states to reduce
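The headcount comparison in this record rests on simple arithmetic: count people below the poverty line using gross expenditure, then again after netting out OOP health spending. A sketch with invented per-capita monthly figures:

```python
# Poverty impact of out-of-pocket (OOP) health spending: headcount before and
# after netting OOP out of household expenditure. All values are hypothetical.
poverty_line = 350.0
households = [   # (per-capita expenditure, per-capita OOP health spending)
    (900.0, 40.0), (380.0, 50.0), (360.0, 5.0),
    (340.0, 0.0), (500.0, 200.0), (355.0, 20.0),
]

poor_gross = sum(exp < poverty_line for exp, _ in households)
poor_net = sum(exp - oop < poverty_line for exp, oop in households)
print(f"poor before netting OOP: {poor_gross}, after: {poor_net}")
print(f"impoverished by OOP payments: {poor_net - poor_gross}")
```

The difference between the two headcounts is the "impoverishing" effect of OOP payments; done at the state and rural/urban level with survey weights, this is the paper's 32.5 million estimate.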

  3. GIS aided spatial disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Orthofer, R.; Loibl, W.

    1995-10-01

    We have applied our method to produce detailed NMVOC and NOx emission density maps for Austria. While theoretical average emission densities for the whole country would be only 5 t NMVOC and 2.5 t NOx per km², the actual emission densities range from zero in the many uninhabited areas up to more than 3,000 t/km² along major highways. In Austria, small-scale disaggregation is necessary particularly because of the differentiated topography and population patterns in alpine valleys. (author)

  4. Effect of natural antioxidants on the aggregation and disaggregation ...

    African Journals Online (AJOL)

    Conclusion: High antioxidant activities were positively correlated with the inhibition of Aβ aggregation, although not with the disaggregation of pre-formed Aβ aggregates. Nevertheless, potent antioxidants may be helpful in treating Alzheimer's disease. Keywords: Alzheimer's disease, β-Amyloid, Aggregation, Disaggregation ...

  5. Modelling OAIS Compliance for Disaggregated Preservation Services

    Directory of Open Access Journals (Sweden)

    Gareth Knight

    2007-07-01

    Full Text Available The reference model for the Open Archival Information System (OAIS) is well established in the research community as a method of modelling the functions of a digital repository and as a basis for framing digital curation and preservation issues. With reference to the 5th anniversary review of the OAIS, it is timely to consider how it may be interpreted by an institutional repository. The paper examines methods of sharing essential functions and requirements of an OAIS between two or more institutions, outlining the practical considerations of outsourcing. It also details the approach taken by the SHERPA DP Project to introduce a disaggregated service model for institutional repositories that wish to implement preservation services.

  6. Multisite rainfall downscaling and disaggregation in a tropical urban area

    Science.gov (United States)

    Lu, Y.; Qin, X. S.

    2014-02-01

    A systematic downscaling-disaggregation study was conducted over Singapore Island, with the aim of generating high spatial and temporal resolution rainfall data under future climate-change conditions. The study consisted of two major components. The first part was an inter-comparison of various alternative downscaling and disaggregation methods based on observed data. This included (i) single-site generalized linear model (GLM) plus K-nearest neighbor (KNN) (S-G-K) vs. multisite GLM (M-G) for spatial downscaling, (ii) HYETOS vs. KNN for single-site disaggregation, and (iii) KNN vs. MuDRain (Multivariate Rainfall Disaggregation tool) for multisite disaggregation. The results revealed that, for multisite downscaling, M-G performs better than S-G-K in covering the observed data with a lower RMSE value; for single-site disaggregation, KNN could better preserve the basic statistics (i.e. standard deviation, lag-1 autocorrelation and probability of a wet hour) than HYETOS; for multisite disaggregation, MuDRain outperformed KNN in fitting interstation correlations. In the second part of the study, an integrated downscaling-disaggregation framework based on M-G, KNN, and MuDRain was used to generate hourly rainfall at multiple sites. The results indicated that the downscaled and disaggregated rainfall data, based on multiple ensembles from HadCM3 for the period 1980 to 2010, could well cover the observed mean rainfall amount and extreme data, and also reasonably preserve the spatial correlations at both daily and hourly timescales. The framework was also used to project future rainfall conditions under the HadCM3 SRES A2 and B2 scenarios. It was indicated that the annual rainfall amount could decrease by up to 5% by the end of this century, but that wet-season rainfall and extreme hourly rainfall could notably increase.
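K-nearest-neighbor disaggregation, one of the methods compared in this record, resamples observed fine-scale patterns from similar coarse-scale days. A minimal single-site sketch follows (invented data and a simplified similarity measure; actual KNN implementations use richer predictors and weighted sampling):

```python
import random

# Illustrative K-nearest-neighbour temporal disaggregation. All data invented.
random.seed(0)

# Observed days: (daily total, 24 hourly fractions summing to 1)
historical = [
    (10.0, [0.0] * 20 + [0.3, 0.4, 0.2, 0.1]),
    (22.0, [1 / 24] * 24),
    (15.0, [0.5, 0.5] + [0.0] * 22),
]

def knn_disaggregate(daily_total, k=2):
    # 1. Rank historical days by similarity of their daily totals.
    neighbours = sorted(historical, key=lambda d: abs(d[0] - daily_total))[:k]
    # 2. Sample one neighbour and borrow its observed hourly pattern.
    _, pattern = random.choice(neighbours)
    # 3. Rescale the pattern so the hourly values sum to the target total.
    return [daily_total * f for f in pattern]

hourly = knn_disaggregate(18.0)
```

The rescaling step guarantees additive consistency with the daily value, which is why KNN preserves basic statistics well in the comparison above.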

  7. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved.
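The population-based gridding described here is, at its core, a proportional allocation. A minimal sketch with invented numbers (not CDIAC's actual code):

```python
# Population-proportional disaggregation of a national CO2 total onto grid
# cells, as described in the record. All numbers are invented.
national_emissions = 1000.0  # e.g. thousand tonnes C

population_grid = [120, 30, 0, 450, 400]  # population per grid cell
total_pop = sum(population_grid)

# Each cell receives a share of the national total equal to its population share.
cell_emissions = [national_emissions * p / total_pop for p in population_grid]
```

Uninhabited cells receive zero emissions, which is exactly the limitation the record probes: coal production and power plants can place large emissions where few people live.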

  8. Characteristics and Performance of Existing Load Disaggregation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Butner, Ryan S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, He [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-04-10

    Non-intrusive load monitoring (NILM), or non-intrusive appliance load monitoring (NIALM), is an analytic approach to disaggregating building loads based on a single metering point. This advanced load monitoring and disaggregation technique has the potential to provide an alternative to high-priced traditional sub-metering and to enable innovative approaches for energy conservation, energy efficiency, and demand response. However, since the inception of the concept in the 1980s, evaluations of these technologies have focused on reporting performance accuracy without investigating sources of inaccuracies or fully understanding and articulating the meaning of the metrics used to quantify performance. As a result, both the market for, and advances in, these technologies have been slow to mature. To improve the market for NILM technologies, there must be confidence that deployment will lead to benefits. In reality, not every end-user and application that this technology may enable requires the highest levels of performance accuracy to produce benefits. There are also other important characteristics to consider, which may affect the appeal of NILM products to certain market targets (i.e., residential and commercial building consumers) and their suitability for particular applications. These characteristics include the following: 1) ease of use, the level of expertise/bandwidth required to properly use the product; 2) ease of installation, the level of expertise required to install the product along with hardware needs that impact product cost; and 3) ability to inform decisions and actions, whether the energy outputs received by end-users (e.g., third-party applications, residential users, building operators, etc.) empower decisions and actions to be taken at the time frames required for certain applications.
Therefore, stakeholders, researchers, and other interested parties should be kept abreast of the evolving capabilities, uses, and characteristics …

  9. Technological shape and size: A disaggregated perspective on sectoral innovation systems in renewable electrification pathways

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Gregersen, Cecilia; Lema, Rasmus

    2018-01-01

    The sectoral innovation system perspective has been developed as an analytical framework to analyse and understand innovation dynamics within and across various sectors. Most of the research conducted on sectoral innovation systems has focused on an aggregate-level analysis of entire sectors. This paper argues that a disaggregated (sub-sectoral) focus is more suited to policy-oriented work on the development and diffusion of renewable energy, particularly in countries with rapidly developing energy systems and open technology choices. It focuses on size, distinguishing between small-scale (mini… This has important analytical implications because the disaggregated perspective allows us to identify trajectories that cut across conventionally defined core technologies. This is important for ongoing discussions of electrification pathways in developing countries. We conclude the paper by distilling …

  10. Daily disaggregation of simulated monthly flows using different rainfall datasets in southern Africa

    Directory of Open Access Journals (Sweden)

    D.A. Hughes

    2015-09-01

    New hydrological insights for the region: There are substantial regional differences in the success of the monthly hydrological model, which inevitably affects the success of the daily disaggregation results. There are also regional differences in the success of using global rainfall data sets (Climatic Research Unit (CRU) data sets for monthly data, National Oceanic and Atmospheric Administration African Rainfall Climatology version 2 (ARC2) satellite data for daily data). The overall conclusion is that the disaggregation method presents a parsimonious approach to generating daily flow simulations from existing monthly simulations and that these daily flows are likely to be useful for some purposes (e.g. water quality modelling), but less so for others (e.g. peak flow analysis).
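A parsimonious pattern-based disaggregation of a monthly volume into daily values can be sketched as follows (invented data; the paper's method works from simulated monthly flows and regional daily rainfall patterns):

```python
# Spread a simulated monthly flow volume over days in proportion to an observed
# daily pattern (e.g. daily rainfall). Numbers are invented for illustration.
monthly_flow = 90.0                    # simulated monthly volume
daily_pattern = [0, 2, 5, 1, 0, 7, 3]  # daily rainfall for a 7-day "month"

weight = sum(daily_pattern)
daily_flow = [monthly_flow * d / weight for d in daily_pattern]
```

The daily series aggregates exactly to the monthly simulation, so the monthly model's water balance is preserved; only the within-month timing is approximated, which is why peak-flow analysis is the weakest use case.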

  11. Command Disaggregation Attack and Mitigation in Industrial Internet of Things

    Directory of Open Access Journals (Sweden)

    Peng Xun

    2017-10-01

    Full Text Available A cyber-physical attack in the industrial Internet of Things can cause severe damage to a physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We describe the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of the various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.
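The two attack modes and the idea of correlating central commands with actuator-level sub-commands can be illustrated with a toy consistency check (all names and commands are invented; the paper's framework mines correlations from traffic rather than using a fixed lookup table):

```python
# Toy two-tier consistency check for command disaggregation attacks.
# expected: central command -> ordered (actuator, sub_command) pairs (invented).
expected = {
    "OPEN_VALVES": [("valve_1", "open"), ("valve_2", "open"), ("pump_1", "start")],
}

def detect_attack(command, observed):
    """Return True if the observed sub-command sequence deviates from the
    expected disaggregation (wrong order or wrong actuator allocation)."""
    return observed != expected.get(command)

ok = [("valve_1", "open"), ("valve_2", "open"), ("pump_1", "start")]
disordered = [("pump_1", "start"), ("valve_1", "open"), ("valve_2", "open")]
misallocated = [("valve_1", "open"), ("pump_1", "open"), ("valve_2", "start")]
```

`disordered` corresponds to attack mode (1) and `misallocated` to mode (2); both deviate from the expected two-tier correlation even though every individual sub-command is legitimate, which is why single-command filters miss them.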

  12. Command Disaggregation Attack and Mitigation in Industrial Internet of Things.

    Science.gov (United States)

    Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan

    2017-10-21

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to a physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We describe the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of the various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  13. A disaggregate model to predict the intercity travel demand

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, S.

    1988-01-01

    This study was directed towards developing disaggregate models to predict the intercity travel demand in Canada. A conceptual framework for intercity travel behavior was proposed; under this framework, a nested multinomial model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and to determine the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to as the log sum of the denominator in the literature. Results of the study indicated that the structure used in this study could be applied for intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate the collection of useful data for intercity travel prediction and policy analysis.

  14. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
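The core of an "adjusting procedure" is to modify synthetic lower-level depths so that they aggregate exactly to the observed higher-level value. A minimal proportional-adjustment sketch (the actual HyetosMinute procedures are more elaborate; data invented):

```python
# Proportionally adjust synthetic 5-min depths so they sum to the observed
# hourly total. Twelve 5-min intervals make up one hour. Numbers invented.
observed_hourly_total = 12.0
synthetic_5min = [0.4, 1.1, 0.0, 2.0, 0.5, 0.9, 1.6, 0.0, 0.7, 1.3, 0.2, 1.8]

scale = observed_hourly_total / sum(synthetic_5min)
adjusted = [depth * scale for depth in synthetic_5min]
```

Proportional adjustment preserves the relative temporal structure of the synthetic event, and dry intervals remain dry, which matters for the dry-proportion statistics discussed above.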

  15. Cellular Handling of Protein Aggregates by Disaggregation Machines.

    Science.gov (United States)

    Mogk, Axel; Bukau, Bernd; Kampinga, Harm H

    2018-01-18

    Both acute proteotoxic stresses that unfold proteins and expression of disease-causing mutant proteins that expose aggregation-prone regions can promote protein aggregation. Protein aggregates can interfere with cellular processes and deplete factors crucial for protein homeostasis. To cope with these challenges, cells are equipped with diverse folding and degradation activities to rescue or eliminate aggregated proteins. Here, we review the different chaperone disaggregation machines and their mechanisms of action. In all these machines, the coating of protein aggregates by Hsp70 chaperones represents the conserved, initializing step. In bacteria, fungi, and plants, Hsp70 recruits and activates Hsp100 disaggregases to extract aggregated proteins. In the cytosol of metazoa, Hsp70 is empowered by a specific cast of J-protein and Hsp110 co-chaperones allowing for standalone disaggregation activity. Both types of disaggregation machines are supported by small Hsps that sequester misfolded proteins. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Load Disaggregation via Pattern Recognition: A Feasibility Study of a Novel Method in Residential Building

    Directory of Open Access Journals (Sweden)

    Younghoon Kwak

    2018-04-01

    Full Text Available In response to the need to improve energy-saving processes in older buildings, especially residential ones, this paper describes the potential of a novel method of disaggregating loads in light of the load patterns of household appliances determined in residential buildings. Experiments were designed to be applicable to general residential buildings, and four types of commonly used appliances were selected to verify the method. The method assumes that loads are measured by a single primary meter and then disaggregated. Following the metering of household appliances and an analysis of the usage patterns of each type, values of electric current were entered into a Hidden Markov Model (HMM) to formulate predictions. Thereafter, the HMM was run repeatedly to bring the predicted data close to the measured data, while errors between the predicted and measured data were evaluated to determine whether they met tolerance. When the method was examined for 4 days, the matching rates of the load disaggregation outcomes for the household appliances (i.e., laptop, refrigerator, TV, and microwave) were 0.994, 0.992, 0.982, and 0.988, respectively. The proposed method can provide insights into how and where energy is consumed within such buildings. As a result, effective and systematic energy-saving measures can be derived even in buildings in which monitoring sensors and measurement equipment are not installed.
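As a rough illustration of disaggregation from a single metering point, the sketch below infers appliance on/off states by matching an aggregate reading against combinations of rated loads. This brute-force matcher is a stand-in for the paper's HMM, and the appliance wattages are invented:

```python
from itertools import product

# Simplified NILM illustration: find the on/off combination whose summed rated
# loads best matches a single aggregate meter reading. Wattages invented.
rated_watts = {"laptop": 50, "refrigerator": 150, "tv": 100, "microwave": 1100}

def disaggregate(total_watts):
    appliances = list(rated_watts)
    best = min(
        product([0, 1], repeat=len(appliances)),
        key=lambda states: abs(total_watts - sum(
            s * rated_watts[a] for s, a in zip(states, appliances))),
    )
    return {a: bool(s) for a, s in zip(appliances, best)}

states = disaggregate(1300)  # best match: laptop + refrigerator + microwave
```

An HMM improves on this by also modelling state-transition probabilities over time, so that physically implausible switching sequences are penalised rather than matched greedily at each time step.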

  17. Photoinduced disaggregation of TiO₂ nanoparticles enables transdermal penetration.

    Directory of Open Access Journals (Sweden)

    Samuel W Bennett

    Full Text Available Under many aqueous conditions, metal oxide nanoparticles attract other nanoparticles and grow into fractal aggregates as the result of a balance between electrostatic and van der Waals interactions. Although particle coagulation has been studied for over a century, the effect of light on the state of aggregation is not well understood. Since nanoparticle mobility and toxicity have been shown to be a function of aggregate size, and generally increase as size decreases, photo-induced disaggregation may have significant effects. We show that ambient light and other light sources can partially disaggregate nanoparticles from the aggregates and increase the dermal transport of nanoparticles, such that small nanoparticle clusters can readily diffuse into and through the dermal profile, likely via the interstitial spaces. The discovery of photoinduced disaggregation presents a new phenomenon that has not been previously reported or considered in coagulation theory or transdermal toxicological paradigms. Our results show that after just a few minutes of light, the hydrodynamic diameter of TiO₂ aggregates is reduced from ∼280 nm to ∼230 nm. We exposed pigskin to the nanoparticle suspension and found 200 mg kg⁻¹ of TiO₂ for skin that was exposed to nanoparticles in the presence of natural sunlight and only 75 mg kg⁻¹ for skin exposed to dark conditions, indicating the influence of light on nanoparticle penetration. These results suggest that photoinduced disaggregation may have important health implications.

  18. Disaggregating Assessment to Close the Loop and Improve Student Learning

    Science.gov (United States)

    Rawls, Janita; Hammons, Stacy

    2015-01-01

    This study examined student learning outcomes for accelerated degree students as compared to conventional undergraduate students, disaggregated by class levels, to develop strategies for then closing the loop with assessment. Using the National Survey of Student Engagement, critical thinking and oral and written communication outcomes were…

  19. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10 min time step is often necessary. Such information is not always available. Rainfall disaggregation models thus have to be applied to produce that short-time-resolution information from coarser rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10 min rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10 min rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing order of complexity: constant model, linear model (Ben Haha, 2001), Ormsbee deterministic model (Ormsbee, 1989), artificial neural network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), multiplicative cascade based model (Olsson and Berndtsson, 1998), Ormsbee stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10 min time step) observed at the Pully meteorological station, Switzerland.
The models were also evaluated when applied to different rainfall classes depending on the season first and on the …
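The basic constant disaggregation model used as the benchmark above is trivial to state: each rainy hour is split into six equal 10 min depths.

```python
# The benchmark "constant disaggregation model": an hourly rainfall depth is
# divided into six equal 10-minute depths.
def constant_disaggregation(hourly_depth_mm):
    return [hourly_depth_mm / 6.0] * 6

ten_min = constant_disaggregation(9.0)
```

Every other model in the comparison must beat this baseline on variance and extremes, since constant splitting preserves the mean but flattens all within-hour variability.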

  20. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of an auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between the auxiliary indicator and the disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton of Bern, Switzerland, subject to flood risk. Thereby, the model is verified…
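The role of the Dirichlet distribution here is to draw uncertain proportions, concentrated around an auxiliary indicator, for splitting an aggregated exposure count. The sketch below uses a standard (not scaled) Dirichlet, drawn via normalised Gamma variates, with invented numbers:

```python
import random

# Probabilistic disaggregation sketch: distribute an aggregated exposure count
# over zones with Dirichlet-distributed proportions whose concentration follows
# an auxiliary indicator (e.g. built-up area share). Values are invented, and a
# standard Dirichlet stands in for the paper's scaled Dirichlet.
random.seed(1)

total_buildings = 1000
indicator = [5.0, 2.0, 1.0, 0.5]  # auxiliary indicator per zone (concentration)

gammas = [random.gammavariate(a, 1.0) for a in indicator]
total = sum(gammas)
proportions = [g / total for g in gammas]
zone_counts = [round(total_buildings * p) for p in proportions]
```

Repeating the draw yields a distribution of plausible zone counts rather than a single deterministic split, which is exactly the uncertainty a deterministic indicator-based disaggregation hides.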

  1. The Behaviour of Disaggregated Public Expenditures and Income in Malaysia

    OpenAIRE

    Tang, Chor-Foon; Lau, Evan

    2011-01-01

    The present study attempts to re-investigate the behaviour of disaggregated public expenditure data and national income for Malaysia. The study covers annual data for the sample period 1960 to 2007. The Bartlett-corrected trace tests proposed by Johansen (2002) were used to ascertain the presence of a long-run equilibrium relationship between public expenditures and national income. The results show one cointegrating vector for each specification of public expenditures. The relatively new…

  2. Savannah River Site disaggregated seismic spectra

    International Nuclear Information System (INIS)

    Stephenson, D.E.

    1993-02-01

    The objective of this technical note is to characterize seismic ground motion at the Savannah River Site (SRS) from postulated earthquakes that may impact facilities at the site. This task is accomplished by reviewing the deterministic and probabilistic assessments of the seismic hazard to establish the earthquakes that control the hazard at the site, and then evaluating the associated seismic ground motions in terms of response spectra. For engineering design criteria of earthquake-resistant structures, response spectra serve the function of characterizing ground motions as a function of period or frequency. These motions then provide the input parameters that are used in the analysis of structural response. Because they use the maximum response, response spectra are an inherently conservative design tool. Response spectra are described in terms of amplitude, duration, and frequency content, and these are related to source parameters, travel path, and site conditions. Studies by a number of investigators have shown by statistical analysis that, for different magnitudes, the response spectrum values differ at different periods. These facts support Jennings' position that using different shapes of design spectra for earthquakes of different magnitudes and travel paths is a better practice than employing a single, general-purpose shape. All seismic ground motion characterization results indicate that the PGA is controlled by a local event with Mw < 6 and R < 30 km. The results also show that lower frequencies are controlled by a larger, more distant event, typically the Charleston source. The PGA of 0.2 g, based originally on the Blume study, is consistent with LLNL report UCRL-15910 (1990) and with the DOE position on LLNL/EPRI…

  3. Spatial and temporal disaggregation of transport-related carbon dioxide emissions in Bogota - Colombia

    Science.gov (United States)

    Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Rojas, N. Y.

    2011-12-01

    …16% lower, mainly due to uncertainty in activity factors. With only 4% of Bogota's fleet, diesel use accounts for 42% of the CO2 emissions. The emissions are almost evenly shared between public (9% of the fleet) and private transport. Peak emissions occur at 8 a.m. and 6 p.m., with maximum values over a densely industrialized area in the northwest of Bogota. This investigation made it possible to estimate the relative contribution of fuel and vehicle categories to spatially and temporally resolved CO2 emissions. Fuel consumption time series indicate a near-stabilization trend in energy consumption for transportation, which is unexpected given the sustained economic and vehicle fleet growth in Bogota. The comparison of the disaggregation methodology with the IPCC methodology contributes to the analysis of possible error sources in activity factor estimations. This information is very useful for uncertainty estimation and adjustment of primary air pollutant emissions inventories.

  4. Disaggregating asthma: Big investigation versus big data.

    Science.gov (United States)

    Belgrave, Danielle; Henderson, John; Simpson, Angela; Buchan, Iain; Bishop, Christopher; Custovic, Adnan

    2017-02-01

    We are facing a major challenge in bridging the gap between identifying subtypes of asthma to understand causal mechanisms and translating this knowledge into personalized prevention and management strategies. In recent years, "big data" has been sold as a panacea for generating hypotheses and driving new frontiers of health care; the idea that the data must and will speak for themselves is fast becoming a new dogma. One of the dangers of ready accessibility of health care data and computational tools for data analysis is that the process of data mining can become uncoupled from the scientific process of clinical interpretation, understanding the provenance of the data, and external validation. Although advances in computational methods can be valuable for using unexpected structure in data to generate hypotheses, there remains a need for testing hypotheses and interpreting results with scientific rigor. We argue for combining data- and hypothesis-driven methods in a careful synergy, and the importance of carefully characterized birth and patient cohorts with genetic, phenotypic, biological, and molecular data in this process cannot be overemphasized. The main challenge on the road ahead is to harness bigger health care data in ways that produce meaningful clinical interpretation and to translate this into better diagnoses and properly personalized prevention and treatment plans. There is a pressing need for cross-disciplinary research with an integrative approach to data science, whereby basic scientists, clinicians, data analysts, and epidemiologists work together to understand the heterogeneity of asthma. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Navigating between Disaggregating Nation States and Entrenching Processes of Globalisation

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2007-01-01

    …on the international community for its economic survival. This dependency on the global has the consequence that it rolls back aspects of national sovereignty, thus opening up the national hinterland to further international influences. These developments initiate a process of disaggregating state and nation, meaning that a gradual disarticulation of the relationship between state and nation produces new societal spaces, which are contested by non-statist interest groups and transnational, more or less deterritorialised, ethnically affiliated groups and networks. The argument forwarded in this article is that the ethnic Chinese…

  6. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  7. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    Science.gov (United States)

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

    The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to grid-cell level. To do so, finding the highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. Accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
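The weight-based, top-down disaggregation step described above can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline; the way the three layers are combined and all numbers are assumptions.

```python
import numpy as np

def disaggregate_value(total_value, weights):
    """Distribute an administrative-unit total across grid cells in
    proportion to a combined ancillary weight per cell."""
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0:
        raise ValueError("all-zero weights: nothing to distribute on")
    return total_value * w / w.sum()

# Hypothetical unit with three grid cells; the three ancillary layers
# (nightlight, population, road density) carry invented values.
nightlight = np.array([10.0, 30.0, 60.0])
population = np.array([100.0, 200.0, 700.0])
roads = np.array([1.0, 1.0, 2.0])
weights = nightlight * population * roads  # one simple way to combine layers

cell_values = disaggregate_value(1_000_000.0, weights)
assert np.isclose(cell_values.sum(), 1_000_000.0)  # unit total is preserved
```

Whatever the weight construction, a disaggregation of this kind is mass-preserving by design: the cell values always sum back to the administrative-unit total.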

  8. Context-Based Energy Disaggregation in Smart Homes

    Directory of Open Access Journals (Sweden)

    Francesca Paradiso

    2016-01-01

    Full Text Available In this paper, we address the problem of energy conservation and optimization in residential environments by providing users with useful information to solicit a change in consumption behavior. Taking care to tightly limit the costs of installation and management, our work proposes a Non-Intrusive Load Monitoring (NILM) approach, which consists of disaggregating the whole-house power consumption into the individual portions associated with each device. State-of-the-art NILM algorithms need monitoring data sampled at high frequency, thus requiring high costs for data collection and management. In this paper, we propose an NILM approach that relaxes the requirements on monitoring data since it uses total active power measurements gathered at low frequency (about 1 Hz). The proposed approach is based on the use of Factorial Hidden Markov Models (FHMM) in conjunction with context information related to the user presence in the house and the hourly utilization of appliances. Through a set of tests, we investigated how the use of these additional context-awareness features could improve disaggregation results with respect to the basic FHMM algorithm. The tests have been performed by using Tracebase, an open dataset made of data gathered from real home environments.
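Full FHMM inference is beyond a snippet, but the core NILM idea, explaining an aggregate power reading as a combination of appliance states, can be shown with a simple combinatorial-optimization baseline. Appliance ratings and readings below are invented; this is not the paper's context-aware FHMM method.

```python
from itertools import product

def nilm_combinatorial(aggregate, ratings):
    """For each aggregate power sample, choose the on/off state vector
    whose summed appliance ratings best explain the measurement."""
    states = []
    for p in aggregate:
        best = min(product([0, 1], repeat=len(ratings)),
                   key=lambda s: abs(p - sum(r * on for r, on in zip(ratings, s))))
        states.append(best)
    return states

ratings = [2000, 700, 60]          # assumed: kettle, fridge compressor, lamp (W)
aggregate = [60, 760, 2060, 2760]  # synthetic total active power readings
assert nilm_combinatorial(aggregate, ratings) == [
    (0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
```

The exponential search over state vectors is exactly why FHMMs and the context features discussed above matter in practice: they constrain which state combinations are plausible at a given time.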

  9. Disaggregation of small, cohesive rubble pile asteroids due to YORP

    Science.gov (United States)

    Scheeres, D. J.

    2018-04-01

    The implication of small amounts of cohesion within relatively small rubble pile asteroids is investigated with regard to their evolution under the persistent presence of the YORP effect. We find that below a characteristic size, which is a function of cohesive strength, density and other properties, rubble pile asteroids can enter a "disaggregation phase" in which they are subject to repeated fissions after which the formation of a stabilizing binary system is not possible. Once this threshold is passed rubble pile asteroids may be disaggregated into their constituent components within a finite time span. These constituent components will have their own spin limits - albeit potentially at a much higher spin rate due to the greater strength of a monolithic body. The implications of this prediction are discussed and include modification of size distributions, prevalence of monolithic bodies among meteoroids and the lifetime of small rubble pile bodies in the solar system. The theory is then used to place constraints on the strength of binary asteroids characterized as a function of their type.

  10. An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern

    Directory of Open Access Journals (Sweden)

    Huijuan Wang

    2018-04-01

    Full Text Available Non-intrusive load monitoring (NILM), monitoring single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economical technology that benefits energy utilities and the development of energy demand management strategies. The hardware costs of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale NILM application, while low-frequency sampling data show poor performance in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach based on appliance consumption patterns (ILDACP). Our approach combines the Fuzzy C-means clustering algorithm, which provides an initial appliance operating status, with sub-sequence searching Dynamic Time Warping, which retrieves individual appliance consumption based on its typical power consumption pattern. Results show that the proposed approach is effective in accurately disaggregating power consumption and is suitable for situations where different appliances are operated simultaneously. The approach also has lower computational complexity than the Hidden Markov Model method and is easy to implement in the household without installing special equipment.
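The sub-sequence searching step can be illustrated with a minimal DTW implementation that slides a typical appliance pattern over an aggregate trace. This is a generic sketch with synthetic data, not the ILDACP algorithm itself.

```python
def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def best_subsequence(signal, pattern):
    """Slide a pattern-length window over the aggregate signal and return
    the start index whose window is closest to the pattern under DTW."""
    n = len(pattern)
    return min(range(len(signal) - n + 1),
               key=lambda s: dtw(signal[s:s + n], pattern))

signal = [0, 0, 5, 9, 5, 0, 0]   # synthetic aggregate power trace
pattern = [5, 9, 5]              # typical consumption pattern of one appliance
assert best_subsequence(signal, pattern) == 2
```

DTW tolerates small temporal stretching of the pattern, which is why it suits appliance signatures whose duration varies from run to run.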

  11. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    Science.gov (United States)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression

  12. Disaggregating Qualitative Data from Asian American College Students in Campus Racial Climate Research and Assessment

    Science.gov (United States)

    Museus, Samuel D.; Truong, Kimberly A.

    2009-01-01

    This article highlights the utility of disaggregating qualitative research and assessment data on Asian American college students. Given the complexity of and diversity within the Asian American population, scholars have begun to underscore the importance of disaggregating data in the empirical examination of Asian Americans, but most of those…

  13. Disaggregate energy consumption and industrial output in the United States

    International Nuclear Information System (INIS)

    Ewing, Bradley T.; Sari, Ramazan; Soytas, Ugur

    2007-01-01

    This paper investigates the effect of disaggregate energy consumption on industrial output in the United States. Most of the related research utilizes aggregate data which may not indicate the relative strength or explanatory power of various energy inputs on output. We use monthly data and employ the generalized variance decomposition approach to assess the relative impacts of energy and employment on real output. Our results suggest that unexpected shocks to coal, natural gas and fossil fuel energy sources have the highest impacts on the variation of output, while several renewable sources exhibit considerable explanatory power as well. However, none of the energy sources explain more of the forecast error variance of industrial output than employment

  14. Commercial demand for energy: a disaggregated approach. [Model validation for 1970-1975; forecasting to 2000

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, J.R.; Cohn, S.; Cope, J.; Johnson, W.S.

    1978-04-01

    This report describes the structure and forecasting accuracy of a disaggregated model of commercial energy use recently developed at Oak Ridge National Laboratory. The model forecasts annual commercial energy use by ten building types, five end uses, and four fuel types. Both economic (utilization rate, fuel choice, capital-energy substitution) and technological factors (equipment efficiency, thermal characteristics of buildings) are explicitly represented in the model. Model parameters are derived from engineering and econometric analysis. The model is then validated by simulating commercial energy use over the 1970--1975 time period. The model performs well both with respect to size of forecast error and ability to predict turning points. The model is then used to evaluate the energy-use implications of national commercial buildings standards based on the ASHRAE 90-75 recommendations. 10 figs., 12 tables, 14 refs.

  15. Development of a Disaggregation Framework toward the Estimation of Subdaily Reference Evapotranspiration: 2- Estimation of Subdaily Reference Evapotranspiration Using Disaggregated Weather Data

    Directory of Open Access Journals (Sweden)

    F. Parchami Araghi

    2016-09-01

    Full Text Available Introduction: Subdaily estimates of reference evapotranspiration (ETo) are needed in many applications such as dynamic agro-hydrological modeling. However, in many regions, the lack of subdaily weather data availability has hampered efforts to quantify subdaily ETo. In the first presented paper, a physically based framework was developed to disaggregate the daily weather data needed for estimation of subdaily ETo, including air temperature, wind speed, dew point, actual vapour pressure, relative humidity, and solar radiation. The main purpose of this study was to estimate subdaily ETo using disaggregated daily data derived from the disaggregation framework developed in the first presented paper. Materials and Methods: Subdaily ETo estimates were made using the ASCE and FAO-56 Penman–Monteith models (ASCE-PM and FAO56-PM, respectively) and subdaily weather data derived from the developed daily-to-subdaily weather data disaggregation framework. To this end, long-term daily weather data from the Abadan (59 years) and Ahvaz (50 years) synoptic weather stations were collected. Sensitivity analysis of the Penman–Monteith model to the different meteorological variables (including daily air temperature, wind speed at 2 m height, actual vapor pressure, and solar radiation) was carried out using partial derivatives of the Penman–Monteith equation. The capability of the two models to retrieve daily ETo was evaluated using the root mean square error RMSE (mm), the mean error ME (mm), the mean absolute error MAE (mm), the Pearson correlation coefficient r (-), and the Nash–Sutcliffe model efficiency coefficient EF (-). Different contributions to the overall error were decomposed using a regression-based method. Results and Discussion: The results of the sensitivity analysis showed that daily air temperature and actual vapor pressure are the meteorological variables that most affect ETo estimates. In contrast, low sensitivity
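Daily-to-subdaily disaggregation of air temperature is commonly done with a sinusoidal model between the daily extremes. The sketch below uses that standard idea with an assumed peak hour; the framework's actual variation functions are more elaborate.

```python
import math

def hourly_temperature(t_min, t_max, hour, peak_hour=15):
    """Sinusoidal daily temperature course with its maximum at peak_hour
    (assumed 15:00) and its minimum 12 h earlier."""
    mean = (t_min + t_max) / 2.0
    amplitude = (t_max - t_min) / 2.0
    return mean + amplitude * math.cos(2.0 * math.pi * (hour - peak_hour) / 24.0)

temps = [hourly_temperature(10.0, 30.0, h) for h in range(24)]
assert max(temps) == 30.0  # daily maximum reproduced at the peak hour
```

Hourly series of this kind for each weather variable are what a subdaily Penman–Monteith computation then consumes.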

  16. Dis-aggregation of airborne flux measurements using footprint analysis

    NARCIS (Netherlands)

    Hutjes, R.W.A.; Vellinga, O.S.; Gioli, B.; Miglietta, F.

    2010-01-01

    Aircraft measurements of turbulent fluxes are generally being made with the objective to obtain an estimate of regional exchanges between land surface and atmosphere, to investigate the spatial variability of these fluxes, but also to learn something about the fluxes from some or all of the land

  17. Causes of Corruption in Russia: A Disaggregated Analysis

    OpenAIRE

    Belousova, Veronika; Rajeev, K. Goel; Korhonen, Iikka

    2011-01-01

    This paper examines determinants of corruption across Russian regions. Key contributions include: (i) a formal study of economic corruption determinants across Russian regions; (ii) comparisons of determinants of perceived corruption versus those of actual corruption; and (iii) studying the influence of market competition and other factors on corruption. The results show that economic prosperity, population, market competition and urbanization are significant determinants of Russian corruptio...

  18. Political ideology and health in Japan: a disaggregated analysis.

    Science.gov (United States)

    Subramanian, S V; Hamano, Tsuyoshi; Perkins, Jessica M; Koyabu, Akio; Fujisawa, Yoshikazu

    2010-09-01

    Recent studies from the USA and Europe suggest an association between an individual's political ideology and their health status, with those claiming to be conservatives reporting better health. The presence of this association is examined in Japan. Individual-level data from the 2000-2003, 2005 and 2006 Japan General Social Survey were analysed. The outcomes of interest were self-rated poor health and smoking status. The independent variable of interest was reported political beliefs on a 5-point 'left'-to-'right' scale. Covariates included age, sex, education, income, occupational status and fixed effects for survey periods. Logistic regression models were estimated. There was an inverse association between political ideology (left to right) and self-rated poor health as well as between ideology and smoking status even after adjusting for age, sex, socioeconomic status and fixed effects for survey periods. Compared with those who identified as 'left', the OR for reporting poor health and smoking among those who identified as 'right' was 0.86 (95% CI 0.74 to 0.99) and 0.80 (95% CI 0.70 to 0.91), respectively. Health differences by political ideology have typically been interpreted as reflecting socioeconomic differences. The results from Japan corroborate the previous findings from the USA and Europe that socioeconomic differences do not account for health differences by political ideologies. Political ideology is likely to be a marker of several latent values and attitudes (eg, religiosity, individual responsibility and/or community participation) that might be beneficial for health at the individual level.

  19. The use of continuous functions for a top-down temporal disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Kalchmayr, M.; Orthofer, R.

    1997-11-01

    This report is a documentation of a presentation at the International Speciality Conference 'The Emission Inventory: Planning for the Future', October 28-30, 1997 in Research Triangle Park, North Carolina, USA. The Conference was organized by the Air and Waste Management Association (AWMA) and the U.S. Environmental Protection Agency. Emission data with high temporal resolution are necessary to analyze the relationship between emissions and their impacts. In many countries, however, emission inventories refer only to the annual countrywide emission sums, because underlying data (traffic, energy, industry statistics) are available for statistically relevant territorial units and for longer time periods only. This paper describes a method for the temporal disaggregation of yearly emission sums through application of continuous functions which simulate emission-generating activities. The temporal patterns of the activities are derived through overlay of annual, weekly and diurnal variation functions which are based on statistical data of the relevant activities. If applied to annual emission data, these combined functions describe the dynamic patterns of emissions over the year. The main advantage of the continuous-functions method is that temporal emission patterns can be smoothed throughout one year, thus eliminating some of the major drawbacks of the traditional standardized fixed quota system. For handling in models, the continuous functions and their parameters can be included directly and the emission quota calculated for a given hour of the year. The usefulness of the method is demonstrated with NMVOC emission data for Austria. Temporally disaggregated emission data can be used as input for ozone models as well as for visualization and animation of the emission dynamics. The analysis of the temporal dynamics of emission source strengths, e.g. during critical hours for ozone generation in summer, allows the implementation of efficient emission reduction
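The overlay of annual, weekly and diurnal variation functions amounts to multiplying three normalized profiles and rescaling so the year sums to the annual emission. All functional shapes below are invented for illustration, not the report's fitted curves.

```python
import math

def activity_share(day_of_year, weekday, hour):
    """Relative emission-generating activity as the product of an annual,
    a weekly and a diurnal variation function (illustrative shapes)."""
    annual = 1.0 + 0.3 * math.cos(2.0 * math.pi * (day_of_year - 200) / 365.0)  # summer peak
    weekly = 0.7 if weekday >= 5 else 1.0                                       # weekend dip
    diurnal = 1.0 + 0.8 * math.sin(math.pi * hour / 24.0)                       # daytime peak
    return annual * weekly * diurnal

# Normalizing over every hour of the year turns shares into hourly quotas.
_total = sum(activity_share(d, (d - 1) % 7, h)
             for d in range(1, 366) for h in range(24))

def hourly_emission(annual_emission, day_of_year, weekday, hour):
    """Disaggregate an annual emission sum to one hour of the year."""
    return annual_emission * activity_share(day_of_year, weekday, hour) / _total
```

Because the quota is evaluated from continuous functions, a model can request the emission for any single hour directly, which is the advantage over fixed quota tables noted above.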

  20. A Peltier-based freeze-thaw device for meteorite disaggregation

    Science.gov (United States)

    Ogliore, R. C.

    2018-02-01

    A Peltier-based freeze-thaw device for the disaggregation of meteorite or other rock samples is described. Meteorite samples are kept in six water-filled cavities inside a thin-walled Al block. This block is held between two Peltier coolers that are automatically cycled between cooling and warming. One cycle takes approximately 20 min. The device can run unattended for months, allowing for ~10 000 freeze-thaw cycles that will disaggregate even meteorites with relatively low porosity. This device was used to disaggregate ordinary and carbonaceous chondrite regolith breccia meteorites to search for micrometeoroid impact craters.

  1. Silicon Photonics towards Disaggregation of Resources in Data Centers

    Directory of Open Access Journals (Sweden)

    Miltiadis Moralis-Pegios

    2018-01-01

    Full Text Available In this paper, we demonstrate two subsystems based on Silicon Photonics, towards meeting the network requirements imposed by disaggregation of resources in Data Centers. The first one utilizes a 4 × 4 Silicon photonics switching matrix, employing Mach Zehnder Interferometers (MZIs) with Electro-Optical phase shifters, directly controlled by a high speed Field Programmable Gate Array (FPGA) board for the successful implementation of a Bloom-Filter (BF) label forwarding scheme. The FPGA is responsible for extracting the BF-label from the incoming optical packets, carrying out the BF-based forwarding function, determining the appropriate switching state and generating the corresponding control signals towards conveying incoming packets to the desired output port of the matrix. The BF-label based packet forwarding scheme allows rapid reconfiguration of the optical switch, while at the same time reduces the memory requirements of the node’s lookup table. Successful operation for 10 Gb/s data packets is reported for a 1 × 4 routing layout. The second subsystem utilizes three integrated spiral waveguides, with record-high 2.6 ns/mm² delay-versus-footprint efficiency, along with two Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) wavelength converters, to construct a variable optical buffer and a Time Slot Interchange module. Error-free on-chip variable delay buffering from 6.5 ns up to 17.2 ns and successful timeslot interchanging for 10 Gb/s optical packets are presented.
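The BF-label scheme rests on the standard Bloom-filter membership test: a packet label encodes the set of ports it should traverse, and each node checks its own ports against the label. A minimal software sketch (not the paper's FPGA implementation; sizes and hashing are assumptions):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array,
    stored as a Python integer bit mask."""
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k positions by salting a SHA-256 digest with the index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# A label encoding the output ports a hypothetical packet should traverse.
label = BloomFilter()
label.add("port-2")
assert "port-2" in label  # Bloom filters never give false negatives
```

The trade-off noted in the abstract follows directly: the label replaces per-destination lookup-table entries, at the cost of a small, tunable false-positive rate that can forward a packet to an extra port.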

  2. A Replication of "Using self-esteem to disaggregate psychopathy, narcissism, and aggression" (2013)

    Directory of Open Access Journals (Sweden)

    Durand, Guillaume

    2016-09-01

    Full Text Available The present study is a replication of Falkenbach, Howe, and Falki (2013). Using self-esteem to disaggregate psychopathy, narcissism, and aggression. Personality and Individual Differences, 54(7), 815-820.

  3. The importance of disaggregated freight flow forecasts to inform transport infrastructure investments

    Directory of Open Access Journals (Sweden)

    Jan H. Havenga

    2013-09-01

    Full Text Available This article presents the results of a comprehensive disaggregated commodity flow model for South Africa. The wealth of data available enables a segmented analysis of future freight transportation demand in order to assist with the prioritisation of transportation investments, the development of transport policy and the growth of the logistics service provider industry. In 2011, economic demand for commodities in South Africa’s competitive surface-freight transport market amounted to 622 million tons and is predicted to increase to 1834 million tons by 2041, which is a compound annual growth rate of 3.67%. Fifty percent of corridor freight constitutes break bulk; intermodal solutions are therefore critical in South Africa. Scenario analysis indicates that 80% of corridor break-bulk tons can be serviced by four intermodal facilities – in Gauteng, Durban, Cape Town and Port Elizabeth. This would allow for the development of an investment planning hierarchy, enable industry targeting (through commodity visibility), ensure capacity development ahead of demand and lower the cost of logistics in South Africa.

  4. How sex- and age-disaggregated data and gender and generational analyses can improve humanitarian response.

    Science.gov (United States)

    Mazurana, Dyan; Benelli, Prisca; Walker, Peter

    2013-07-01

    Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crisis-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  5. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    Science.gov (United States)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-Daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation Instrument in combination with reported crop area statistics were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI-images were used to stratify the study area in 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI-profile showing changes in vegetation greenness over time which is assumed to relate to the types of land cover and land use present. Secondly, the areas of NDVI-units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI-profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated by using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segment data. Relatively low values were caused mainly by variability within the delineated NDVI-units; the units are internally heterogeneous. Variability between units is properly captured. The maps must accordingly be considered "small scale maps". These maps can be used to monitor crop performance of
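Disaggregating reported crop areas over NDVI units amounts to a regression of municipal totals on within-municipality unit areas: each unit gets an estimated crop fraction, which then spatializes the statistics. A toy version with invented numbers (two units, three municipalities), not the study's actual fit:

```python
import numpy as np

# A[i, u] = area (ha) of NDVI unit u inside municipality i (invented numbers).
A = np.array([[100.0,  20.0],
              [ 50.0,  80.0],
              [ 10.0, 120.0]])
wheat_reported = np.array([84.0, 56.0, 32.0])  # reported wheat area per municipality

# Least-squares estimate of the wheat fraction of each NDVI unit.
density, *_ = np.linalg.lstsq(A, wheat_reported, rcond=None)
wheat_map = A * density  # disaggregated wheat area per municipality and unit
assert np.allclose(density, [0.8, 0.2])
```

The adjusted R-squares quoted in the abstract measure how well such unit-level fractions reproduce the reported municipal totals; in practice the fractions would also be constrained to [0, 1].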

  6. Erosion of atmospherically deposited radionuclides as affected by soil disaggregation mechanisms

    International Nuclear Information System (INIS)

    Claval, D.; Garcia-Sanchez, L.; Real, J.; Rouxel, R.; Mauger, S.; Sellier, L.

    2004-01-01

    The interactions of soil disaggregation with radionuclide erosion were studied under controlled conditions in the laboratory on samples from a loamy silty-sandy soil. The fate of 134Cs and 85Sr was monitored on soil aggregates and on small plots, with time resolution ranging from minutes to hours after contamination. Analytical experiments reproducing disaggregation mechanisms on aggregates showed that disaggregation controls both erosion and sorption. Compared to differential swelling, air explosion mobilized the most by producing finer particles and increasing sorption five-fold. For all the mechanisms studied, a significant part of the contamination was still unsorbed on the aggregates after an hour. Global experiments on contaminated sloping plots submitted to artificial rainfalls showed radionuclide erosion fluctuations and their origin. Wet radionuclide deposition increased short-term erosion by 50% compared to dry deposition. A developed soil crust, when contaminated, decreased radionuclide erosion by a factor of 2 compared to other initial soil states. These erosion fluctuations were more significant for 134Cs than 85Sr, known to have better affinity to the soil matrix. These findings confirm the role of disaggregation on radionuclide erosion. Our data support a conceptual model of radionuclide erosion at the small plot scale in two steps: (1) radionuclide non-equilibrium sorption on mobile particles, resulting from simultaneous sorption and disaggregation during wet deposition and (2) later radionuclide transport by runoff with suspended matter

  7. Integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Tsoumas, Chris

    2011-01-01

    This paper investigates the integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S. The analysis is performed for the 1989-2009 period and covers all sectors which use these types of energy, i.e., transportation, residential, industrial, electric power and commercial. The results suggest that there are differences in the order of integration depending on both the type of energy and the sector involved. Moreover, the inclusion of structural breaks traced from the regulatory changes for these energy types seems to affect the order of integration for each series. - Highlights: → Increasing importance of renewable energy sources. → Integration properties of solar, geothermal and biomass energy consumption in the U.S. → The results show differences in the order of integration depending on the type of energy. → Structural breaks traced for these energy types affect the order of integration. → The order of integration is less than 1, so energy conservation policies are transitory.

  8. Probabilistic disaggregation of a spatial portfolio of exposure for natural hazard risk assessment

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2018-01-01

    In natural hazard risk assessment situations are encountered where information on the portfolio of exposure is only available in a spatially aggregated form, hindering a precise risk assessment. Recourse might be found in the spatial disaggregation of the portfolio of exposure to the resolution...... of a portfolio of buildings in two communes in Switzerland and the results are compared to sample observations. The relevance of probabilistic disaggregation uncertainty in natural hazard risk assessment is illustrated with the example of a simple flood risk assessment....
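A probabilistic disaggregation of an aggregated exposure count can be sketched as a multinomial draw whose cell probabilities follow an auxiliary indicator (e.g. built-up land cover). This is a generic illustration of the idea, not the authors' model, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def disaggregate_count(total, indicator):
    """Randomly allocate an aggregated count of exposures (e.g. buildings)
    to cells, with probabilities proportional to an auxiliary indicator."""
    p = np.asarray(indicator, dtype=float)
    p = p / p.sum()  # indicator values become allocation probabilities
    return rng.multinomial(total, p)

cells = disaggregate_count(500, [1.0, 3.0, 6.0])
assert cells.sum() == 500  # the aggregated total is always preserved
```

Repeating the draw yields an ensemble of plausible exposure maps, which is how disaggregation uncertainty can be propagated into the risk assessment rather than hidden behind a single deterministic allocation.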

  9. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  10. Statistical Models for Disaggregation and Reaggregation of Natural Gas Consumption Data

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Malý, Marek; Kasanický, Ivan; Pelikán, Emil

    2015-01-01

    Roč. 42, č. 5 (2015), s. 921-937 ISSN 0266-4763 Institutional support: RVO:67985807 Keywords: natural gas consumption * semiparametric model * standardized load profiles * aggregation * disaggregation * 62P30 Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.419, year: 2015

  11. The Economic Impact of Higher Education Institutions in Ireland: Evidence from Disaggregated Input-Output Tables

    Science.gov (United States)

    Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.

    2017-01-01

    While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…

  12. Is disaggregation the holy grail of energy efficiency? The case of electricity

    International Nuclear Information System (INIS)

    Carrie Armel, K.; Gupta, Abhay; Shrimali, Gireesh; Albert, Adrian

    2013-01-01

    This paper aims to address two timely energy problems. First, significant low-cost energy reductions can be made in the residential and commercial sectors, but these savings have not been achievable to date. Second, billions of dollars are being spent to install smart meters, yet the energy saving and financial benefits of this infrastructure – without careful consideration of the human element – will not reach its full potential. We believe that we can address these problems by strategically marrying them, using disaggregation. Disaggregation refers to a set of statistical approaches for extracting end-use and/or appliance level data from an aggregate, or whole-building, energy signal. In this paper, we explain how appliance level data affords numerous benefits, and why using the algorithms in conjunction with smart meters is the most cost-effective and scalable solution for getting this data. We review disaggregation algorithms and their requirements, and evaluate the extent to which smart meters can meet those requirements. Research, technology, and policy recommendations are also outlined. - Highlights: ► Appliance energy use data can produce many consumer, industry, and policy benefits. ► Disaggregating smart meter data is the most cost-effective and scalable solution. ► We review algorithm requirements, and ability of smart meters to meet those. ► Current technology identifies ∼10 appliances; minor upgrades could identify more. ► Research, technology, and policy recommendations for moving forward are outlined.
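As a toy illustration of what a disaggregation algorithm does with an aggregate signal, here is a combinatorial-optimization sketch, one of the simplest families of NILM approaches; the appliance signatures and tolerance are hypothetical, and real algorithms work on time series rather than a single reading:

```python
from itertools import combinations

# Hypothetical steady-state power signatures (watts) for a few appliances.
SIGNATURES = {"fridge": 120, "kettle": 1800, "tv": 90, "washer": 500}

def disaggregate(aggregate_watts, tolerance=30):
    """Return the appliance combination whose summed signature best
    matches the whole-building reading, or None if nothing fits."""
    best, best_err = (), float("inf")
    names = list(SIGNATURES)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            err = abs(aggregate_watts - sum(SIGNATURES[n] for n in combo))
            if err < best_err:
                best, best_err = combo, err
    return best if best_err <= tolerance else None

print(disaggregate(1920))  # fridge + kettle sum to exactly 1920 W
```

The exhaustive search is exponential in the number of appliances, which is one reason the algorithms surveyed in the paper rely on statistical models instead.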

  13. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    Science.gov (United States)

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…

  14. Evolution of an intricate J-protein network driving protein disaggregation in eukaryotes.

    Science.gov (United States)

    Nillegoda, Nadinath B; Stank, Antonia; Malinverni, Duccio; Alberts, Niels; Szlachcic, Anna; Barducci, Alessandro; De Los Rios, Paolo; Wade, Rebecca C; Bukau, Bernd

    2017-05-15

    Hsp70 participates in a broad spectrum of protein folding processes extending from nascent chain folding to protein disaggregation. This versatility in function is achieved through a diverse family of J-protein cochaperones that select substrates for Hsp70. Substrate selection is further tuned by transient complexation between different classes of J-proteins, which expands the range of protein aggregates targeted by metazoan Hsp70 for disaggregation. We assessed the prevalence and evolutionary conservation of J-protein complexation and cooperation in disaggregation. We find the emergence of a eukaryote-specific signature for interclass complexation of canonical J-proteins. Consistently, complexes exist in yeast and human cells, but not in bacteria, and correlate with cooperative action in disaggregation in vitro. Signature alterations exclude some J-proteins from networking, which ensures correct J-protein pairing, functional network integrity and J-protein specialization. This fundamental change in J-protein biology during the prokaryote-to-eukaryote transition allows for increased fine-tuning and broadening of Hsp70 function in eukaryotes.

  15. Determining the disaggregated economic value of irrigation water in the Musi sub-basin in India

    NARCIS (Netherlands)

    Hellegers, P.J.G.J.; Davidson, B.

    2010-01-01

    In this paper the residual method is used to determine the disaggregated economic value of irrigation water used in agriculture across crops, zones and seasons. This method relies on the belief that the value of a good (its price by its quantity) is equal to the summation of the quantity of each

  16. Spatial Disaggregation of Areal Rainfall Using Two Different Artificial Neural Networks Models

    Directory of Open Access Journals (Sweden)

    Sungwon Kim

    2015-06-01

    Full Text Available The objective of this study is to develop artificial neural network (ANN) models, including the multilayer perceptron (MLP) and the Kohonen self-organizing feature map (KSOFM), for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment in South Korea. A three-layer MLP model, using three training algorithms, was used to estimate areal rainfall. The Levenberg–Marquardt training algorithm was found to be more sensitive to the number of hidden nodes than the conjugate gradient and quickprop training algorithms. Results showed that network structures of 11-5-1 (conjugate gradient and quickprop) and 11-3-1 (Levenberg–Marquardt) were the best for estimating areal rainfall with the MLP model. The inverse network structures, 1-5-11 (conjugate gradient and quickprop) and 1-3-11 (Levenberg–Marquardt), were identified for spatial disaggregation of areal rainfall using the best MLP model. The KSOFM model was compared with the MLP model for spatial disaggregation of areal rainfall. Both the MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall with spatial concepts.
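The 1-3-11 inverse network can be sketched structurally: one areal rainfall input, a three-node tanh hidden layer, eleven point-rainfall outputs. The weights below are random and untrained, purely to show the shape of the mapping; a real application would fit them to catchment data:

```python
import math
import random

random.seed(0)

def mlp_forward(x, sizes):
    """Forward pass through a fully connected net with tanh units.
    sizes = (1, 3, 11) mirrors the inverse network above: one areal
    rainfall value disaggregated into eleven point-rainfall outputs."""
    layer = x
    for fan_in, fan_out in zip(sizes, sizes[1:]):
        weights = [[random.gauss(0, 1) for _ in range(fan_in)]
                   for _ in range(fan_out)]
        layer = [math.tanh(sum(w * v for w, v in zip(row, layer)))
                 for row in weights]
    return layer

points = mlp_forward([0.7], (1, 3, 11))
print(len(points))  # 11 point-rainfall estimates from one areal value
```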

  17. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    Full Text Available In the travel demand modeling field, mode choice is the most important decision affecting the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria through calibrating a disaggregate linear-in-parameters utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes, including relative travel cost (cost divided by income), log(travel time), and the combination of the two transformations together, were introduced. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of 20 models estimated, only 2 are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistics values). The determination of the VoT also serves in model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, of the hourly salary of LE 10.9/hr. The proposed two models prove to be sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
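In a calibrated binary logit model, the VoT is the ratio of the time coefficient to the cost coefficient. A minimal sketch follows; all coefficient values are hypothetical, chosen only so the implied VoT lands near the paper's LE 11.30/hr estimate:

```python
import math

# Hypothetical calibrated coefficients (time in minutes, cost in LE).
BETA = {"asc_car": 0.5, "time": -0.04, "cost": -0.212}

def utility(time_min, cost_le, is_car):
    return ((BETA["asc_car"] if is_car else 0.0)
            + BETA["time"] * time_min + BETA["cost"] * cost_le)

def prob_car(car, bus):
    """Binary logit probability of choosing car over bus."""
    u_car, u_bus = utility(*car, True), utility(*bus, False)
    return 1.0 / (1.0 + math.exp(u_bus - u_car))

# Value of time = (dU/dtime) / (dU/dcost), converted to LE per hour.
vot_per_hour = BETA["time"] / BETA["cost"] * 60
print(round(vot_per_hour, 2))  # 11.32
```

Because the VoT is a ratio of coefficients, it doubles as a sanity check on the calibration: implausible ratios flag a mis-specified utility even when individual t-statistics look acceptable.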

  18. New Insight into the Finance-Energy Nexus: Disaggregated Evidence from Turkish Sectors

    Directory of Open Access Journals (Sweden)

    Mert Topcu

    2017-01-01

    Full Text Available As the reshaped energy economics literature has adopted new variables in the energy demand function, the number of papers examining the relationship between financial development and energy consumption at the aggregate level has been increasing over the last few years. This paper, however, proposes a new framework using disaggregated data and investigates the nexus between financial development and sectoral energy consumption in Turkey. To this end, panel time series regression and causality techniques are adopted over the period 1989–2011. Empirical results confirm that financial development does have a significant impact on energy consumption, even with disaggregated data. It is also shown that the effect of financial development is larger in energy-intensive industries than in less energy-intensive ones.

  19. Disaggregated Energy Consumption and Sectoral Outputs in Thailand: ARDL Bound Testing Approach

    OpenAIRE

    Thurai Murugan Nathan; Venus Khim-Sen Liew; Wing-Keung Wong

    2016-01-01

    From an economic perspective, energy-output relationship studies have become increasingly popular in recent times, partly fuelled by a need to understand the effect of energy on production outputs rather than overall GDP. This study dealt with disaggregated energy consumption and the outputs of some major economic sectors in Thailand. The ARDL bound testing approach was employed to examine the co-integration relationship. The Granger causality test of the aforementioned ARDL framework was done to inv...

  20. Employment in Disequilibrium: a Disaggregated Approach on a Panel of French Firms

    OpenAIRE

    Brigitte Dormont

    1989-01-01

    The purpose of this paper is to understand disequilibrium phenomena at a disaggregated level. Using data on French firms, we estimate a labor demand model with two regimes, corresponding to the Keynesian and classical hypotheses. The results enable us to characterize classical firms as particularly good performers: they have more rapid growth, younger productive plant, and higher productivity gains and profitability. Classical firms stand out, with respect to their...

  1. The Long-Run Macroeconomic Effects of Aid and Disaggregated Aid in Ethiopia

    DEFF Research Database (Denmark)

    Gebregziabher, Fiseha Haile

    2014-01-01

    positively, whereas it is negatively associated with government consumption. Our results concerning the impacts of disaggregated aid stand in stark contrast to earlier work. Bilateral aid increases investment and GDP and is negatively associated with government consumption, whereas multilateral aid is only...... positively associated with imports. Grants contribute to GDP, investment and imports, whereas loans affect none of the variables. Finally, there is evidence to suggest that multilateral aid and loans have been disbursed in a procyclical fashion...

  2. Equity in health care financing in Palestine: the value-added of the disaggregate approach.

    Science.gov (United States)

    Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul

    2008-06-01

    This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution, using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, significantly reveals a progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance schemes to enhance its capacity in limiting inequalities in health care financing in the Occupied Palestinian Territory.
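The progressivity measures discussed above build on concentration indices. A minimal sketch of the aggregate summary-index building blocks follows: a concentration index and the Kakwani progressivity index derived from it. The paper's disaggregate dominance checks over parts of the income distribution go beyond this:

```python
def concentration_index(income, payment):
    """Concentration index of `payment` when individuals are ranked by
    `income`: 0 = proportional, > 0 = concentrated among the rich."""
    order = sorted(range(len(income)), key=lambda i: income[i])
    pay = [payment[i] for i in order]
    n = len(pay)
    mu = sum(pay) / n
    # Rank-covariance ("convenient") formula with ranks 1..n.
    return (2.0 / (n * n * mu) * sum((i + 1) * p for i, p in enumerate(pay))
            - (n + 1) / n)

def kakwani(income, payment):
    """Kakwani index: CI of payments minus the income Gini.
    Negative values indicate a regressive financing source."""
    return concentration_index(income, payment) - concentration_index(income, income)

# A flat payment levied on an unequal income distribution is regressive:
print(round(kakwani([1, 2, 3, 4], [1, 1, 1, 1]), 2))  # -0.25
```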

  3. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences...... to the choice of these models. The empirical analysis reveals that detailed road user characteristics, such as the crime history of drivers and the momentary activities of road users at the time of the accident, provide interesting insight into the injury severity analysis. Likewise, the alternative analytical...... specification of the models reveals that some of the conventionally employed fixed-parameters injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some...

  4. Disaggregating tree and grass phenology in tropical savannas

    Science.gov (United States)

    Zhou, Qiang

    method was better for extracting the green tree phenology, but the original decomposition method was better for retrieval of understory grass phenology. Both methods, however, were less accurate in the Cerrado than in Australia due to intermingling and intergrading of grass and small woody components. Since African savanna trees are predominantly deciduous, the frequency method was combined with the linear unmixing of fractional cover to attempt to separate the relatively similar phenology of deciduous trees and seasonal grasses. The results for Africa revealed limitations associated with both methods. There was spatial and seasonal variation in the spectral indices used to unmix fractional cover, resulting in poor validation for NPV in particular. The frequency analysis revealed significant phase variation indicative of different phenology, but these phases could not be clearly ascribed to separate grass and tree components. Overall findings indicate that, given site-specific variation in vegetation structure and composition, along with MODIS pixel resolution, the simple vegetation index approach used was not robust across the different savanna biomes. The approach showed generally better performance for estimating PV fraction and separating green phenology, but there were major inconsistencies, errors and biases in estimation of NPV and BS outside of the Australian savanna environment.

  5. An initial assessment of a SMAP soil moisture disaggregation scheme using TIR surface evaporation data over the continental United States

    Science.gov (United States)

    Mishra, Vikalp; Ellenburg, W. Lee; Griffin, Robert E.; Mecikalski, John R.; Cruise, James F.; Hain, Christopher R.; Anderson, Martha C.

    2018-06-01

    The Soil Moisture Active Passive (SMAP) mission is dedicated toward global soil moisture mapping. Typically, an L-band microwave radiometer has spatial resolution on the order of 36-40 km, which is too coarse for many specific hydro-meteorological and agricultural applications. With the failure of the SMAP active radar within three months of becoming operational, an intermediate (9-km) and finer (3-km) scale soil moisture product solely from the SMAP mission is no longer possible. Therefore, the focus of this study is a disaggregation of the 36-km resolution SMAP passive-only surface soil moisture (SSM) using the Soil Evaporative Efficiency (SEE) approach to spatial scales of 3-km and 9-km. The SEE was computed using thermal-infrared (TIR) estimation of surface evaporation over Continental U.S. (CONUS). The disaggregation results were compared with the 3 months of SMAP-Active (SMAP-A) and Active/Passive (AP) products, while comparisons with SMAP-Enhanced (SMAP-E), SMAP-Passive (SMAP-P), as well as with more than 180 Soil Climate Analysis Network (SCAN) stations across CONUS were performed for a 19 month period. At the 9-km spatial scale, the TIR-Downscaled data correlated strongly with the SMAP-E SSM both spatially (r = 0.90) and temporally (r = 0.87). In comparison with SCAN observations, overall correlations of 0.49 and 0.47; bias of -0.022 and -0.019 and unbiased RMSD of 0.105 and 0.100 were found for SMAP-E and TIR-Downscaled SSM across the Continental U.S., respectively. At 3-km scale, TIR-Downscaled and SMAP-A had a mean temporal correlation of only 0.27. In terms of gain statistics, the highest percentage of SCAN sites with positive gains (>55%) was observed with the TIR-Downscaled SSM at 9-km. Overall, the TIR-based downscaled SSM showed strong correspondence with SMAP-E; compared to SCAN, and overall both SMAP-E and TIR-Downscaled performed similarly, however, gain statistics show that TIR-Downscaled SSM slightly outperformed SMAP-E.
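The disaggregation step can be illustrated with a simplified linear scheme that redistributes the coarse retrieval according to fine-scale SEE anomalies. This is a sketch, not the paper's exact SEE formulation: the `sensitivity` slope is a hypothetical stand-in for the model-derived dSM/dSEE term:

```python
def disaggregate_sm(sm_coarse, see_fine, sensitivity=0.3):
    """Redistribute one coarse soil-moisture retrieval (m3/m3) across
    fine-scale pixels using soil evaporative efficiency (SEE) anomalies.
    The coarse-pixel mean is preserved by construction."""
    see_mean = sum(see_fine) / len(see_fine)
    return [sm_coarse + sensitivity * (s - see_mean) for s in see_fine]

fine = disaggregate_sm(0.25, [0.2, 0.4, 0.6, 0.8])
print([round(v, 3) for v in fine])  # [0.16, 0.22, 0.28, 0.34]
```

Wetter-evaporating pixels receive more of the coarse retrieval, drier ones less, while the 36-km mean is left unchanged — the basic contract of any downscaling scheme of this kind.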

  6. Human papillomavirus vaccine initiation in Asian Indians and Asian subpopulations: a case for examining disaggregated data in public health research.

    Science.gov (United States)

    Budhwani, H; De, P

    2017-12-01

    Vaccine disparities research often focuses on differences between the five main racial and ethnic classifications, ignoring the heterogeneity of subpopulations. Considering this knowledge gap, we examined human papillomavirus (HPV) vaccine initiation in Asian Indians and Asian subpopulations. National Health Interview Survey data (2008-2013), collected by the National Center for Health Statistics, were analyzed. Multiple logistic regression analysis was conducted on adults aged 18-26 years (n = 20,040). Asian Indians had high income, education, and health insurance coverage, all positive predictors of preventative health engagement and vaccine uptake. However, we find that Asian Indians had comparatively lower rates of HPV vaccine initiation (odds ratio = 0.41; 95% confidence interval = 0.207-0.832), and foreign-born Asian Indians had the lowest rate of HPV vaccination of all subpopulations (2.3%). Findings substantiate the need for research on disaggregated data rather than evaluating vaccination behaviors solely across standard racial and ethnic categories. We identified two populations that were initiating HPV vaccine at abysmal levels: foreign-born persons and Asian Indians. Development of culturally appropriate messaging has the potential to improve these initiation rates and improve population health.

  7. Amyloid formation and disaggregation of α-synuclein and its tandem repeat (α-TR)

    International Nuclear Information System (INIS)

    Bae, Song Yi; Kim, Seulgi; Hwang, Heejin; Kim, Hyun-Kyung; Yoon, Hyun C.; Kim, Jae Ho; Lee, SangYoon; Kim, T. Doohun

    2010-01-01

    Research highlights: → Formation of α-synuclein amyloid fibrils by [BIMbF3Im]. → Disaggregation of amyloid fibrils by epigallocatechin gallate (EGCG) and baicalein. → Amyloid formation of the α-synuclein tandem repeat (α-TR). -- Abstract: The aggregation of α-synuclein is clearly related to the pathogenesis of Parkinson's disease. Therefore, a detailed understanding of the mechanism of fibril formation is highly valuable for the development of clinical treatments and diagnostic tools. Here, we have investigated the interaction of α-synuclein with ionic liquids using several biochemical techniques, including Thioflavin T assays and transmission electron microscopy (TEM). Our data show that rapid formation of α-synuclein amyloid fibrils was stimulated by 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [BIMbF3Im], and that these fibrils could be disaggregated by polyphenols such as epigallocatechin gallate (EGCG) and baicalein. Furthermore, the effect of [BIMbF3Im] on the α-synuclein tandem repeat (α-TR) in the aggregation process was studied.

  8. Spatial and temporal disaggregation of anthropogenic CO2 emissions from the City of Cape Town

    Directory of Open Access Journals (Sweden)

    Alecia Nickless

    2015-11-01

    Full Text Available This paper describes the methodology used to spatially and temporally disaggregate carbon dioxide emission estimates for the City of Cape Town, to be used for a city-scale atmospheric inversion estimating carbon dioxide fluxes. Fossil fuel emissions were broken down into emissions from road transport, domestic emissions, industrial emissions, and airport and harbour emissions. Using spatially explicit information on vehicle counts, and an hourly scaling factor, vehicle emissions estimates were obtained for the city. Domestic emissions from fossil fuel burning were estimated from household fuel usage information and spatially disaggregated population data from the 2011 national census. Fuel usage data were used to derive industrial emissions from listed activities, which included emissions from power generation, and these were distributed spatially according to the source point locations. The emissions from the Cape Town harbour and the international airport were determined from vessel and aircraft count data, respectively. For each emission type, error estimates were determined through error propagation techniques. The total fossil fuel emission field for the city was obtained by summing the spatial layers for each emission type, accumulated for the period of interest. These results will be used in a city-scale inversion study, and this method implemented in the future for a national atmospheric inversion study.
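The proportional allocation underlying such spatial disaggregation can be sketched as follows. The weights stand in for a spatial proxy such as vehicle counts or census population, and the error handling is a deliberately simplified version of the propagation described above (it treats each cell's share of the total's error as fully correlated):

```python
def disaggregate_total(total, total_sd, weights):
    """Split an aggregate emission total across grid cells in proportion
    to a spatial proxy, carrying the total's uncertainty into each cell."""
    wsum = sum(weights)
    shares = [w / wsum for w in weights]
    cells = [total * s for s in shares]
    sds = [total_sd * s for s in shares]  # simplified: correlated error shares
    return cells, sds

cells, sds = disaggregate_total(1000.0, 50.0, [2, 3, 5])
print(cells)  # [200.0, 300.0, 500.0]
```

Summing the cell layers for each emission type, as the paper does, then reproduces the city total exactly because the shares sum to one.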

  9. The influence of energy consumption of China on its real GDP from aggregated and disaggregated viewpoints

    International Nuclear Information System (INIS)

    Zhang, Wei; Yang, Shuyun

    2013-01-01

    This paper investigated the causal relationship between energy consumption and gross domestic product (GDP) in China at both aggregated and disaggregated levels during the period 1978–2009, using a modified version of the Granger (1969) causality test proposed by Toda and Yamamoto (1995) within a multivariate framework. The empirical results suggested the existence of a negative bi-directional Granger causality between aggregated energy consumption and real GDP. At the disaggregated level of energy consumption, the results were more complicated. For coal, empirical findings suggested a negative bi-directional Granger causality between coal consumption and real GDP. However, for oil and gas, empirical findings suggested a positive bi-directional Granger causality between oil as well as gas consumption and real GDP. Though these results supported the feedback hypothesis, the negative relationship might be attributed to production in the growing economy shifting towards less energy-intensive sectors and to excessive energy consumption in relatively unproductive sectors. The results indicated that policies reducing aggregated energy consumption and promoting energy conservation may boost China's economic growth. - Highlights: ► A negative bi-directional Granger causality runs between energy consumption and real GDP. ► The same holds for coal consumption, but not for oil and gas. ► The results partly derive from excessive energy consumption in unproductive sectors. ► Reducing aggregated energy consumption probably promotes the development of China's economy

  10. Energy consumption and economic growth: Evidence from China at both aggregated and disaggregated levels

    International Nuclear Information System (INIS)

    Yuan Jiahai; Kang Jiangang; Zhao Changhong; Hu Zhaoguang

    2008-01-01

    Using a neo-classical aggregate production model where capital, labor and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and energy use in China at both the aggregated total energy level and the disaggregated levels of coal, oil and electricity consumption. Using the Johansen cointegration technique, the empirical findings indicate that long-run cointegration exists among output, labor, capital and energy use in China at both the aggregated and all three disaggregated levels. Then, using a VEC specification, the short-run dynamics of the variables of interest are tested, indicating Granger causality running from electricity and oil consumption to GDP, but not from coal and total energy consumption to GDP. In the other direction, short-run Granger causality runs from GDP to total energy, coal and oil consumption, but not from GDP to electricity consumption. We thus propose policy suggestions to solve the energy and sustainable development dilemma in China: enhancing energy supply security and guaranteeing energy supply, especially in the short run, by providing adequate electric power supply and setting up a national strategic oil reserve; enhancing energy efficiency to save energy; diversifying energy sources, energetically exploiting renewable energy and drawing up corresponding policies and measures; and finally, in the long run, transforming the development pattern and cutting reliance on resource- and energy-dependent industries

  11. A Sub-category Disaggregated Greenhouse Gas Emission Inventory for the Bogota Region, Colombia

    Science.gov (United States)

    Pulido-Guio, A. D.; Rojas, A. M.; Ossma, L. J.; Jimenez-Pizarro, R.

    2012-12-01

    Several international organizations, such as UNDP and UNEP, have recently recognized the importance of empowering sub-national decision levels on climatic governance according to the subsidiarity principle. Regional and municipal authorities are directly responsible for land use management and for regulating economic sectors that emit greenhouse gases (GHG) and are vulnerable to climate change. Sub-national authorities are also closer to the population, which makes them better suited for educating the public and for achieving commitment among stakeholders. This investigation was developed within the frame of the Regional Integrated Program on Climate Change for the Cundinamarca-Bogota Region (PRICC), an initiative aimed at incorporating the climate dimension into regional and local decision making. The region composed of Bogota and its nearest, semi-rural area of influence (Province of Cundinamarca) is the most important population and economic center of Colombia. Our investigation serves two purposes: a) to establish methodologies for estimating regional GHG emissions appropriate to the Colombian context, and b) to disaggregate GHG emissions by economic sector as a mitigation decision-making tool. GHG emissions were calculated using IPCC 1996 - Tier 1 methodologies, as there are no regional- or country-specific emission factors available for Colombia. Top-Down (TD) methodologies, based on national and regional energy use intensity, per capita consumption and fertilizer use, were developed and applied to estimate activities for the following categories: fuel use in industrial, commercial and residential sectors (excepting NG and LPG), use of ozone depleting substances (ODS) and substitutes, and fertilizer use (for total emissions of agricultural soils). The emissions from the remaining 22 categories were calculated using Bottom-Up (BU) methodologies given the availability of regional information. The total GHG emissions in the Cundinamarca-Bogota Region in 2008 are

  12. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
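The bias-correction step that BCSD applies at the GCM scale is an empirical quantile mapping: a model value is replaced by the observed value at the same rank in the respective climatologies. A minimal sketch with hypothetical climatologies (real implementations interpolate between quantiles and handle out-of-range values more carefully):

```python
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """Replace a model value by the observed value at the same rank,
    the empirical quantile-mapping step of BCSD."""
    rank = min(bisect.bisect_left(model_sorted, value), len(model_sorted) - 1)
    return obs_sorted[rank]

model = sorted([1.0, 2.0, 3.0, 4.0])  # hypothetical GCM-scale climatology
obs = sorted([0.5, 1.5, 3.5, 6.0])    # hypothetical observed climatology
print(quantile_map(3.0, model, obs))  # 3.5
```

The rank-based refinement proposed in the paper applies the same rank-space idea to the disaggregation anomalies themselves, rather than to the GCM totals only.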

  13. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)
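The confidence-interval logic above is straightforward: a normal-approximation 95% interval around each elasticity estimate, with the substitute/complement classification read off from whether the interval excludes zero. A sketch with hypothetical numbers (the paper's elasticities come from a five-factor translog system, not shown here):

```python
def ci95(estimate, std_err):
    """Normal-approximation 95% confidence interval around an elasticity
    estimate; excluding zero supports a substitute (positive) or
    complement (negative) classification at the 5% level."""
    half = 1.96 * std_err
    return estimate - half, estimate + half

# Hypothetical capital-energy Allen elasticity and its standard error.
lo, hi = ci95(0.42, 0.15)
print(lo < 0.0 < hi)  # False: zero excluded, so the inputs test as substitutes
```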

  14. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    International Nuclear Information System (INIS)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter

    2012-01-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
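The disaggregation rests on the observation that the observed variance of the results is approximately the sum of the population variability and the average measurement variance. A minimal moment-based sketch of that decomposition; the paper's full Bayesian treatment (per-individual posterior PDFs) goes well beyond this:

```python
def population_variance(results, meas_vars):
    """Estimate the variance of the true values (population variability)
    as the observed sample variance of the net results minus the mean
    measurement variance, floored at zero."""
    n = len(results)
    mean = sum(results) / n
    var_obs = sum((x - mean) ** 2 for x in results) / (n - 1)
    return max(var_obs - sum(meas_vars) / n, 0.0)

# Net results may legitimately be negative, as noted above.
print(population_variance([-0.2, 0.1, 0.4, 0.9], [0.05] * 4))  # ~0.17
```

When measurement uncertainty dominates, the subtraction can go negative, which is exactly the regime (e.g. the 239Pu data) where the paper reports the method breaking down.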

  15. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.
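
    For the simplest case of independent wet/dry occurrences, the moments of such a mixed-type process follow directly from the Bernoulli occurrence and the depth distribution. A minimal sketch with hypothetical parameters (gamma-distributed depths are an illustrative assumption, not necessarily the paper's choice):

```python
import random

random.seed(1)

p = 0.3                        # wet probability (1 - dry probability)
mu_z, var_z = 5.0, 4.0         # mean and variance of nonzero rainfall depths
shape, scale = mu_z**2 / var_z, var_z / mu_z   # gamma parameterisation

n = 200_000
x = [random.gammavariate(shape, scale) if random.random() < p else 0.0
     for _ in range(n)]

# Analytical moments of the mixed (discrete x continuous) process:
mean_x = p * mu_z                            # E[X] = p * E[Z]
var_x = p * var_z + p * (1 - p) * mu_z**2    # Var[X]

sample_mean = sum(x) / n
sample_var = sum((v - sample_mean) ** 2 for v in x) / n
```

    The Monte Carlo estimates agree with the analytical formulas, mirroring the consistency check reported in the abstract.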

  16. Disaggregated regulation in network sectors: The normative and positive theory; Disaggregierte Regulierung in Netzsektoren: Normative und positive Theorie

    Energy Technology Data Exchange (ETDEWEB)

    Knieps, G. [Inst. fuer Verkehrswissenschaft und Regionalpolitik, Albert-Ludwigs-Univ. Freiburg i.B. (Germany)]

    2007-09-15

    The article deals with the interaction of the normative and positive theory of regulation. The parts of the network that require regulation can be localised and regulated with the help of the normative theory of monopolistic bottlenecks. Using the positive theory, the basic elements of a regulatory mandate, in the sense of disaggregated regulatory economics, are derived.

  17. Long-run relationship between sectoral productivity and energy consumption in Malaysia: An aggregated and disaggregated viewpoint

    International Nuclear Information System (INIS)

    Rahman, Md Saifur; Junsheng, Ha; Shahari, Farihana; Aslam, Mohamed; Masud, Muhammad Mehedi; Banna, Hasanul; Liya, Ma

    2015-01-01

    This paper investigates the causal relationship between energy consumption and economic productivity in Malaysia at both aggregated and disaggregated levels. The investigation utilises total and sectoral (industrial and manufacturing) productivity growth during the 1971–2012 period using the modified Granger causality test proposed by Toda and Yamamoto [1] within a multivariate framework. The economy of Malaysia was found to be energy dependent at both the aggregated level of national growth and the disaggregated level of sectoral growth. At the disaggregated level, however, inefficient energy use is particularly evident in electricity and coal consumption patterns, which Granger-cause negative effects on GDP (Gross Domestic Product) and manufacturing growth. These findings suggest that policies should focus more on improving energy efficiency and energy saving. Furthermore, since emissions are found to be closely related to economic output at the national and sectoral levels, green technologies are of the highest necessity. - Highlights: • At the aggregate level, energy consumption significantly influences GDP (Gross Domestic Product). • At the disaggregate level, electricity and coal consumption does not help output growth. • Mineral and waste are found to positively Granger-cause GDP. • The results reveal strong interactions between emissions and economic growth

  18. Disaggregation of remotely sensed soil moisture under all sky condition using machine learning approach in Northeast Asia

    Science.gov (United States)

    Kim, S.; Kim, H.; Choi, M.; Kim, K.

    2016-12-01

    Estimating the spatiotemporal variation of soil moisture is crucial to hydrological applications such as flood and drought monitoring and near real-time climate forecasting. Recent advances in space-based passive microwave measurements allow frequent monitoring of surface soil moisture at a global scale, and downscaling approaches have been applied to improve the spatial resolution of passive microwave products for local-scale applications. However, most downscaling methods use optical and thermal datasets and are therefore valid only under cloud-free conditions; a downscaling method that works under all-sky conditions is thus necessary to establish spatiotemporal continuity of datasets at fine resolution. In the present study, a Support Vector Machine (SVM) technique was used to downscale satellite-based soil moisture retrievals. The 0.1- and 0.25-degree resolution daily Land Parameter Retrieval Model (LPRM) L3 soil moisture datasets from the Advanced Microwave Scanning Radiometer 2 (AMSR2) were disaggregated over Northeast Asia in 2015. Optically derived estimates of land surface temperature (LST), normalized difference vegetation index (NDVI), and cloud products were obtained from the MODerate Resolution Imaging Spectroradiometer (MODIS) for the purpose of downscaling soil moisture to finer resolution under all-sky conditions. Furthermore, a comparison analysis between in situ and downscaled soil moisture products was conducted to quantitatively assess accuracy. Results showed that downscaled soil moisture under all-sky conditions not only preserves the quality of AMSR2 LPRM soil moisture at 1 km resolution, but also attains higher spatial data coverage. From this research we expect that time-continuous monitoring of soil moisture at fine scale, regardless of weather conditions, will become available.

  19. Assessing gendered roles in water decision-making in semi-arid regions through sex-disaggregated water data with UNESCO-WWAP gender toolkit

    Science.gov (United States)

    Miletto, Michela; Greco, Francesca; Belfiore, Elena

    2017-04-01

    Global climate change is expected to exacerbate current and future stresses on water resources from population growth and land use, and to increase the frequency and severity of droughts and floods. Women are more vulnerable to the effects of climate change than men, not only because they constitute the majority of the world's poor but also because they are more dependent for their livelihood on natural resources that are threatened by climate change. In addition, social, economic and political barriers often limit their coping capacity. Women play a key role in the provision, management and safeguarding of water; nonetheless, gender inequality in water management frameworks persists around the globe. Reliable data are essential to inform decisions and support effective policies. Disaggregating water data by sex is crucial to analyse gendered roles in the water realm and to inform gender-sensitive water policies in light of the global commitments to gender equality of Agenda 2030. In view of this scenario, WWAP has created an innovative toolkit for sex-disaggregated water data collection, the result of participatory work by more than 35 experts in the WWAP Working Group on Sex-Disaggregated Indicators (http://www.unesco.org/new/en/natural-sciences/environment/water/wwap/water-and-gender/un-wwap-working-group-on-gender-disaggregated-indicators/#c1430774). The WWAP toolkit contains four tools: the methodology (Seager J., WWAP UNESCO, 2015), a set of key indicators, the guideline (Pangare V., WWAP UNESCO, 2015) and a questionnaire for field surveys. The WWAP key gender-sensitive indicators address water resources management, aspects of water quality and agricultural uses, water resources governance and management, and investigate unaccounted labour according to gender and age. Managing water resources is key for climate adaptation. Women are particularly sensitive to water quality and the health of water-dependent ecosystems, which are often a source of food and job opportunities.

  20. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). 
For the distributed models, additional
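
    The disaggregation idea described above — scale a spatial pattern derived from catchment characteristics so that relative differences are preserved while the calibrated lumped value is matched — can be sketched as follows (all numbers hypothetical):

```python
# Relative spatial pattern of a model parameter, derived (hypothetically)
# from catchment characteristics such as topography, land use and soil.
w = [0.8, 1.0, 1.4, 0.8]          # relative pattern per grid cell
areas = [10.0, 20.0, 5.0, 15.0]   # cell areas (km2)
P_lumped = 50.0                   # calibrated lumped parameter value

# A single scaling factor s so the area-weighted mean of s*w equals P_lumped;
# relative differences between cells are untouched.
s = P_lumped * sum(areas) / sum(a * wi for a, wi in zip(areas, w))
P_cells = [s * wi for wi in w]

area_weighted_mean = sum(a * p for a, p in zip(areas, P_cells)) / sum(areas)
```

    One scaling parameter per lumped parameter keeps the spatial model consistent with the calibrated lumped model at any resolution, which also limits the overparameterization the abstract warns about.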

  1. Prediction of kharif rice yield at Kharagpur using disaggregated extended range rainfall forecasts

    Science.gov (United States)

    Dhekale, B. S.; Nageswararao, M. M.; Nair, Archana; Mohanty, U. C.; Swain, D. K.; Singh, K. K.; Arunbabu, T.

    2017-08-01

    The Extended Range Forecast System (ERFS) has been generating monthly and seasonal forecasts on a real-time basis throughout the year over India since 2009. India is one of the major rice producers and consumers in South Asia; more than 50% of the Indian population depends on rice as a staple food. Rice is mainly grown in the kharif season, which contributes 84% of the country's total annual rice production. Rice cultivation in India is largely rainfed, so the reliability of rainfall forecasts plays a crucial role in planning the kharif rice crop. In the present study, an attempt has been made to test the reliability of seasonal and sub-seasonal ERFS summer monsoon (June to September) rainfall forecasts for kharif rice yield predictions at Kharagpur, West Bengal, using the CERES-Rice (DSSATv4.5) model. The ERFS forecasts are produced as monthly and seasonal mean values and are converted into daily sequences with stochastic weather generators for use with crop growth models. The daily sequences are generated from ERFS seasonal (June-September) and sub-seasonal (July-September, August-September, and September) summer monsoon rainfall forecasts, which are used as input to the CERES-Rice crop simulation model for yield prediction in hindcast (1985-2008) and real-time (2009-2015) modes. The yield simulated using India Meteorological Department (IMD) observed daily rainfall data is considered the baseline yield for evaluating the performance of yields predicted using the ERFS forecasts. The findings revealed that stochastic disaggregation can be used to disaggregate the monthly/seasonal ERFS forecasts into daily sequences.
The year-to-year variability in rice yield at Kharagpur is efficiently predicted by using the ERFS forecast products in hindcast as well as in real time, and significant enhancement in prediction skill is noticed as the season advances, due to the incorporation of observed weather data, which reduces the uncertainty of

  2. A Practical Methodology for Disaggregating the Drivers of Drug Costs Using Administrative Data.

    Science.gov (United States)

    Lungu, Elena R; Manti, Orlando J; Levine, Mitchell A H; Clark, Douglas A; Potashnik, Tanya M; McKinley, Carol I

    2017-09-01

    Prescription drug expenditures represent a significant component of health care costs in Canada, with estimates of $28.8 billion spent in 2014. Identifying the major cost drivers and the effect they have on prescription drug expenditures allows policy makers and researchers to interpret current cost pressures and anticipate future expenditure levels. To identify the major drivers of prescription drug costs and to develop a methodology to disaggregate the impact of each of the individual drivers. The methodology proposed in this study uses the Laspeyres approach for cost decomposition. This approach isolates the effect of the change in a specific factor (e.g., price) by holding the other factor(s) (e.g., quantity) constant at the base-period value. The Laspeyres approach is expanded to a multi-factorial framework to isolate and quantify several factors that drive prescription drug cost. Three broad categories of effects are considered: volume, price and drug-mix effects. For each category, important sub-effects are quantified. This study presents a new and comprehensive methodology for decomposing the change in prescription drug costs over time including step-by-step demonstrations of how the formulas were derived. This methodology has practical applications for health policy decision makers and can aid researchers in conducting cost driver analyses. The methodology can be adjusted depending on the purpose and analytical depth of the research and data availability. © 2017 Journal of Population Therapeutics and Clinical Pharmacology. All rights reserved.
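
    The Laspeyres logic can be made concrete with a hypothetical two-drug example (the paper's multi-factorial framework splits these effects further into sub-effects): the price effect holds quantities at base-period values, the volume effect holds prices at base-period values, and a cross (interaction) term closes the gap.

```python
# Hypothetical base-period (0) and current-period (1) prices and quantities.
p0 = {"A": 10.0, "B": 20.0}
q0 = {"A": 100, "B": 50}
p1 = {"A": 12.0, "B": 18.0}
q1 = {"A": 120, "B": 60}

cost0 = sum(p0[d] * q0[d] for d in p0)
cost1 = sum(p1[d] * q1[d] for d in p1)

price_effect = sum((p1[d] - p0[d]) * q0[d] for d in p0)    # quantities held at base
volume_effect = sum(p0[d] * (q1[d] - q0[d]) for d in p0)   # prices held at base
cross_effect = sum((p1[d] - p0[d]) * (q1[d] - q0[d]) for d in p0)

decomposed = price_effect + volume_effect + cross_effect   # equals cost1 - cost0
```

    The three effects sum exactly to the total cost change, which is what makes the decomposition usable for cost-driver reporting.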

  3. Modeling of photochemical air pollution in the Barcelona area with highly disaggregated anthropogenic and biogenic emissions

    International Nuclear Information System (INIS)

    Toll, I.; Baldasano, J.M.

    2000-01-01

    The city of Barcelona and its surrounding area, located in the western Mediterranean basin, can reach high levels of O3 in spring and summertime. To study the origin of this photochemical pollution, a numerical modeling approach was adopted and the episode that took place between 3 and 5 August 1990 was chosen. The main meteorological mesoscale flows were reproduced with the non-hydrostatic mesoscale meteorological model MEMO for 5 August 1990, when weak-pressure synoptic conditions took place. The emissions inventory was calculated with the EIM-LEM model, giving highly disaggregated anthropogenic and biogenic emissions in the zone studied, an 80 x 80 km2 area around the city of Barcelona. The major sources of VOC were road traffic (51%) and vegetation (34%), while NOx was mostly emitted by road traffic (88%). However, emissions from some industrial stacks can be locally important and higher than those from road traffic. Photochemical simulation with the MARS model revealed that the combination of mesoscale wind flows and the above-mentioned local emissions is crucial in the production and transport of O3 in the area. On the other hand, the geostrophic wind also played an important role in advecting the air masses away from the places where O3 had been generated. The model simulations were also evaluated by comparing meteorological measurements from nine surface stations and concentration measurements from five surface stations, and the results proved fairly satisfactory. (author)

  4. Musings on privacy issues in health research involving disaggregate geographic data about individuals

    Directory of Open Access Journals (Sweden)

    AbdelMalik Philip

    2009-07-01

    This paper offers a state-of-the-art overview of the intertwined privacy, confidentiality, and security issues that are commonly encountered in health research involving disaggregate geographic data about individuals. Key definitions are provided, along with some examples of actual and potential security and confidentiality breaches and related incidents that captured mainstream media and public interest in recent months and years. The paper then goes on to present a brief survey of the research literature on location privacy/confidentiality concerns and on privacy-preserving solutions in conventional health research and beyond, touching on the emerging privacy issues associated with online consumer geoinformatics and location-based services. The 'missing ring' (in many treatments of the topic) of data security is also discussed. Personal information and privacy legislations in two countries, Canada and the UK, are covered, as well as some examples of recent research projects and events about the subject. Select highlights from a June 2009 URISA (Urban and Regional Information Systems Association) workshop entitled 'Protecting Privacy and Confidentiality of Geographic Data in Health Research' are then presented. The paper concludes by briefly charting the complexity of the domain and the many challenges associated with it, and proposing a novel, 'one stop shop' case-based reasoning framework to streamline the provision of clear and individualised guidance for the design and approval of new research projects (involving geographical identifiers about individuals), including crisp recommendations on which specific privacy-preserving solutions and approaches would be suitable in each case.

  5. Household energy consumption in the UK: A highly geographically and socio-economically disaggregated model

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Devising policies for a low carbon society requires a careful understanding of energy consumption in different types of households. In this paper, we explore patterns of UK household energy use and associated carbon emissions at national level and also at high levels of socio-economic and geographical disaggregation. In particular, we examine specific neighbourhoods with contrasting levels of deprivation, and typical 'types' (segments) of UK households based on socio-economic characteristics. Results support the hypothesis that different segments have widely differing patterns of consumption. We show that household energy use and associated carbon emissions are both strongly, but not solely, related to income levels. Other factors, such as the type of dwelling, tenure, household composition and rural/urban location are also extremely important. The methodology described in this paper can be used in various ways to inform policy-making. For example, results can help in targeting energy efficiency measures; trends from time series results will form a useful basis for scenario building; and the methodology may be used to model expected outcomes of possible policy options, such as personal carbon trading or a progressive tax regime on household energy consumption

  6. A DISAGGREGATED MEASURES APPROACH OF POVERTY STATUS OF FARMING HOUSEHOLDS IN KWARA STATE, NIGERIA

    Directory of Open Access Journals (Sweden)

    Grace Oluwabukunmi Akinsola

    2016-12-01

    In a bid to strengthen the agricultural sector in Nigeria, the Kwara State Government invited thirteen Zimbabwean farmers to participate in agricultural production in Kwara State in 2004. The main objective of this study was therefore to examine the effect of the activities of these foreign farmers on local farmers' poverty status. A questionnaire was administered to the heads of farming households. A total of 240 respondents were used for the study, comprising 120 contact and 120 non-contact heads of farming households. The analytical tools employed included descriptive statistics and the Foster, Greer and Thorbecke method. The results indicated that the non-contact farming households are poorer than the contact farming households. Using the disaggregated poverty profile, poverty is most severe in the age group above 60 years. The intensity of poverty is also higher among the married group than among the singles. Based on education level, poverty is most severe among those without any formal education. It is therefore recommended that a minimum of secondary school education be encouraged among the farming households to prevent a higher incidence of poverty in the study area.
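
    The Foster, Greer and Thorbecke (FGT) method mentioned above computes, for a poverty line z and poverty-aversion parameter alpha, FGT(alpha) = (1/n) * sum of ((z - y_i)/z)**alpha over the poor (y_i < z); alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap, and alpha = 2 the severity index. A minimal sketch with hypothetical incomes:

```python
def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke poverty index for poverty line z."""
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / len(incomes)

incomes = [40, 60, 80, 120, 150]   # hypothetical household incomes
z = 100                            # hypothetical poverty line

headcount = fgt(incomes, z, 0)     # share of households below the line
gap = fgt(incomes, z, 1)           # average normalised shortfall
severity = fgt(incomes, z, 2)      # squared gap: weights the poorest most
```

    Higher alpha puts more weight on the poorest households, which is what lets the disaggregated profile distinguish incidence from intensity and severity of poverty.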

  7. Musings on privacy issues in health research involving disaggregate geographic data about individuals.

    Science.gov (United States)

    Boulos, Maged N Kamel; Curtis, Andrew J; Abdelmalik, Philip

    2009-07-20

    This paper offers a state-of-the-art overview of the intertwined privacy, confidentiality, and security issues that are commonly encountered in health research involving disaggregate geographic data about individuals. Key definitions are provided, along with some examples of actual and potential security and confidentiality breaches and related incidents that captured mainstream media and public interest in recent months and years. The paper then goes on to present a brief survey of the research literature on location privacy/confidentiality concerns and on privacy-preserving solutions in conventional health research and beyond, touching on the emerging privacy issues associated with online consumer geoinformatics and location-based services. The 'missing ring' (in many treatments of the topic) of data security is also discussed. Personal information and privacy legislations in two countries, Canada and the UK, are covered, as well as some examples of recent research projects and events about the subject. Select highlights from a June 2009 URISA (Urban and Regional Information Systems Association) workshop entitled 'Protecting Privacy and Confidentiality of Geographic Data in Health Research' are then presented. The paper concludes by briefly charting the complexity of the domain and the many challenges associated with it, and proposing a novel, 'one stop shop' case-based reasoning framework to streamline the provision of clear and individualised guidance for the design and approval of new research projects (involving geographical identifiers about individuals), including crisp recommendations on which specific privacy-preserving solutions and approaches would be suitable in each case.

  8. Spatially disaggregated population estimates in the absence of national population and housing census data

    Science.gov (United States)

    Wardrop, N. A.; Jochem, W. C.; Bird, T. J.; Chamberlain, H. R.; Clarke, D.; Kerr, D.; Bengtsson, L.; Juran, S.; Seaman, V.; Tatem, A. J.

    2018-01-01

    Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. PMID:29555739
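
    A common building block in this literature is top-down (dasymetric) allocation: a known or estimated regional population total is spread across grid cells in proportion to covariate-derived weights, for example built-up area mapped from satellite imagery. A minimal sketch with hypothetical numbers:

```python
# Hypothetical covariate-derived weights per grid cell (e.g. built-up area
# share estimated from satellite imagery); zero marks uninhabited cells
# such as water or forest.
weights = [0.0, 0.2, 0.5, 0.3]
region_total = 10_000              # regional population estimate

total_w = sum(weights)
cell_pop = [region_total * w_i / total_w for w_i in weights]
```

    The allocation conserves the regional total by construction, so gridded estimates remain consistent with whatever higher-level figures are available.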

  9. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according ... implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful ...

  10. Disaggregation of MODIS surface temperature over an agricultural area using a time series of Formosat-2 images

    OpenAIRE

    Merlin, O.; Duchemin, Benoit; Hagolle, O.; Jacob, Frédéric; Coudert, B.; Chehbouni, Abdelghani; Dedieu, G.; Garatuza, J.; Kerr, Yann

    2010-01-01

    The temporal frequency of the thermal data provided by current spaceborne high-resolution imagery systems is inadequate for agricultural applications. As an alternative to the lack of high-resolution observations, kilometric thermal data can be disaggregated using a green (photosynthetically active) vegetation index, e.g. NDVI (Normalized Difference Vegetation Index), collected at high resolution. Nevertheless, this approach is only valid in the condition...
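
    A generic sketch of the NDVI-based disaggregation idea (hypothetical data; not necessarily the authors' exact algorithm): fit a coarse-scale regression of surface temperature on NDVI, apply the fit at fine scale, and add back each coarse cell's residual so that coarse-scale means are preserved.

```python
# Hypothetical coarse-scale (NDVI, LST) pairs: hotter where vegetation is sparse.
coarse = [(0.2, 310.0), (0.4, 305.0), (0.6, 301.0), (0.8, 298.0)]

ndvi_c = [nv for nv, _ in coarse]
lst_c = [t for _, t in coarse]
mean_n = sum(ndvi_c) / len(ndvi_c)
mean_t = sum(lst_c) / len(lst_c)
b = (sum((nv - mean_n) * (t - mean_t) for nv, t in coarse)
     / sum((nv - mean_n) ** 2 for nv in ndvi_c))          # regression slope
a = mean_t - b * mean_n                                    # intercept

def disaggregate(ndvi_coarse, lst_coarse, ndvi_fine):
    # Keep the coarse cell's residual so its mean temperature is preserved.
    resid = lst_coarse - (a + b * ndvi_coarse)
    return [a + b * nv + resid for nv in ndvi_fine]

fine_lst = disaggregate(0.4, 305.0, [0.30, 0.50, 0.35, 0.45])
```

    When the fine-scale NDVI averages to the coarse value, the disaggregated temperatures average exactly to the coarse observation, so the sharpened field stays consistent with the kilometric data.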

  11. Disaggregating Orders of Water Scarcity - The Politics of Nexus in the Wami-Ruvu River Basin, Tanzania

    Directory of Open Access Journals (Sweden)

    Anna Mdee

    2017-02-01

    Full Text Available This article considers the dilemma of managing competing uses of surface water in ways that respond to social, ecological and economic needs. Current approaches to managing competing water use, such as Integrated Water Resources Management (IWRM and the concept of the water-energy-food nexus do not adequately disaggregate the political nature of water allocations. This is analysed using Mehta’s (2014 framework on orders of scarcity to disaggregate narratives of water scarcity in two ethnographic case studies in the WamiRuvu River Basin in Tanzania: one of a mountain river that provides water to urban Morogoro, and another of a large donor-supported irrigation scheme on the Wami River. These case studies allow us to explore different interfaces in the food-water-energy nexus. The article makes two points: that disaggregating water scarcity is essential for analysing the nexus; and that current institutional frameworks (such as IWRM mask the political nature of the nexus, and therefore do not provide an adequate platform for adjudicating the interfaces of competing water use.

  12. Disaggregating radar-derived rainfall measurements in East Azarbaijan, Iran, using a spatial random-cascade model

    Science.gov (United States)

    Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed

    2017-07-01

    The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, with the help of three adjacent rainfall stations, is obtained. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with coverage area of 512 × 512 km2 to a resolution of 32 × 32 km2. Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
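
    The paper's cascade is heterogeneous and calibrated to radar imagery; the core mechanism can nevertheless be sketched with a generic microcanonical 2-D cascade, which splits each cell into 2 x 2 children with weights summing to one, so the coarse total (here, 512 km down to 32 km, i.e. four halvings) is conserved exactly. All numbers below are hypothetical.

```python
import random

random.seed(2)

def cascade_step(field):
    """Split each cell into 2x2 children with random weights summing to one
    (microcanonical cascade: mass is conserved exactly at every level)."""
    ny, nx = len(field), len(field[0])
    out = [[0.0] * (2 * nx) for _ in range(2 * ny)]
    for i in range(ny):
        for j in range(nx):
            raw = [random.random() for _ in range(4)]
            total = sum(raw)
            w = [r / total for r in raw]
            out[2 * i][2 * j] = field[i][j] * w[0]
            out[2 * i][2 * j + 1] = field[i][j] * w[1]
            out[2 * i + 1][2 * j] = field[i][j] * w[2]
            out[2 * i + 1][2 * j + 1] = field[i][j] * w[3]
    return out

field = [[100.0]]       # one 512 x 512 km cell holding the total rainfall volume
for _ in range(4):      # four halvings: 512 -> 256 -> 128 -> 64 -> 32 km
    field = cascade_step(field)
```

    A heterogeneous cascade, as in the paper, would draw the weights from distributions that vary in space rather than uniformly, but the mass-conservation property is the same.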

  13. U.S.-China trade: A disaggregated analysis, 1996-2006

    OpenAIRE

    Spadafora, Francesco

    2007-01-01

    The rising importance of China as a major U.S. trading partner is the most important change in the international specialization pattern of the U.S. over the last decade. Using a multi-dimensional approach and a detailed dataset, we analyze the evolution over time of the trade relationship between China and the U.S. in order to highlight its major quantitative and qualitative changes. In this regard, the role of China as a major trading partner to the U.S. has significantly increased following ...

  14. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  15. Disaggregate demand for conventional and alternative fuelled vehicles in the Census Metropolitan Area of Hamilton, Canada

    Science.gov (United States)

    Potoglou, Dimitrios

    The focus of this thesis is twofold. First, it offers insight into how households' car-ownership behaviour is affected by urban form and the availability of local transit at the place of residence, after controlling for socio-economic and demographic characteristics. Second, it addresses the importance of vehicle attributes, household and individual characteristics as well as economic incentives and urban form to the potential demand for alternative fuelled vehicles. Data for the empirical analyses of the aforementioned research activities were obtained through an innovative Internet survey, which is also documented in this thesis, conducted in the Census Metropolitan Area of Hamilton. The survey included a retrospective questionnaire on households' number and type of vehicles and a stated-choice experiment for assessing the potential demand for alternative fuelled vehicles. Established approaches and emerging trends in automobile demand modelling identified early on in this thesis suggest a disaggregate approach and, specifically, the estimation of discrete choice models both for explaining car ownership and vehicle-type choice behaviour. It is shown that mixed and diverse land uses as well as short distances between home and work are likely to decrease the probability that households own a large number of cars. Regarding the demand for alternative fuelled vehicles, while vehicle attributes are particularly important, incentives such as free parking and access to high-occupancy vehicle lanes will not influence the choice of hybrids or alternative fuelled vehicles. An improved understanding of households' behaviour regarding the number of cars as well as the factors and trade-offs for choosing cleaner vehicles can be used to inform policy designed to reduce car ownership levels and encourage adoption of cleaner vehicle technologies in urban areas.
Finally, the Internet survey sets the ground for further research on implementation and evaluation of this data collection method.
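The discrete choice models this record refers to are typically of the multinomial (or mixed) logit family. As a minimal, illustrative sketch of the logit form underlying car-ownership and vehicle-type choice, with made-up alternative names and utility values (none of these numbers come from the thesis):

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).

    The max-shift keeps the exponentials numerically stable.
    """
    vmax = max(utilities.values())
    expv = {alt: math.exp(v - vmax) for alt, v in utilities.items()}
    denom = sum(expv.values())
    return {alt: e / denom for alt, e in expv.items()}

# Hypothetical systematic utilities for three vehicle types
V = {"conventional": 0.8, "hybrid": 0.3, "alternative_fuel": -0.2}
probs = logit_probabilities(V)  # choice probabilities, summing to 1
```

In practice the utilities V would be linear functions of the vehicle attributes, household characteristics and incentives discussed above, with coefficients estimated by maximum likelihood.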

  16. Conditions for the Occurrence of Slaking and Other Disaggregation Processes under Rainfall

    Directory of Open Access Journals (Sweden)

    Frédéric Darboux

    2016-07-01

    Full Text Available Under rainfall conditions, aggregates may suffer breakdown by different mechanisms. Slaking is a very efficient breakdown mechanism; however, its occurrence under rainfall conditions has not been demonstrated. Therefore, the aim of this study was to evaluate the occurrence of slaking under rain. Two soils with silt loam (SL) and clay loam (CL) textures were analyzed, using two classes of aggregates: 1–3 mm and 3–5 mm. The aggregates were subjected to stability tests and to high-intensity (90 mm·h−1) and low-intensity (28 mm·h−1) rainfalls with different kinetic-energy impacts (large and small raindrops) using a rainfall simulator. The fragment size distributions were determined both after the stability tests and after the rainfall simulations, with calculation of the mean weighted diameter (MWD). After the stability tests, the SL presented smaller MWDs than the CL for all stability tests. In both soils the lowest MWD was obtained with the fast-wetting test, showing that they were sensitive to slaking. For both soils and both aggregate classes, the MWDs were recorded from the very beginning of the rainfall event under the four rainfall conditions. The occurrence of slaking in the evaluated soils was not verified under the simulated rainfall conditions studied. The early disaggregation was strongly related to the cumulative kinetic energy, pointing to the occurrence of mechanical breakdown. Because slaking requires a very high wetting rate on initially dry aggregates, it seems unlikely to occur under field conditions, except perhaps under furrow irrigation.
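The mean weighted diameter used in this record is a standard aggregate-stability statistic: the sum over size classes of each class's mean diameter times its mass fraction. A minimal sketch, with illustrative sieve classes and fractions (not the paper's data):

```python
def mean_weighted_diameter(class_mean_diameters, mass_fractions):
    """MWD = sum over size classes of (mean class diameter * mass fraction)."""
    if abs(sum(mass_fractions) - 1.0) > 1e-6:
        raise ValueError("mass fractions must sum to 1")
    return sum(d * w for d, w in zip(class_mean_diameters, mass_fractions))

# Example: four sieve classes (mean diameter in mm) and the mass fraction
# of fragments retained in each class after a breakdown test
diameters = [4.0, 2.0, 1.0, 0.5]
fractions = [0.10, 0.30, 0.40, 0.20]
mwd = mean_weighted_diameter(diameters, fractions)  # 1.5 mm
```

A smaller MWD after a test indicates stronger breakdown, which is why the fast-wetting (slaking) test yielding the lowest MWD marks both soils as slaking-sensitive.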

  17. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    Energy Technology Data Exchange (ETDEWEB)

    Zeymer, Cathleen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de [Max Planck Institute for Medical Research, Jahnstrasse 29, 69120 Heidelberg (Germany)

    2014-02-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg²⁺ ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis.

  18. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    International Nuclear Information System (INIS)

    Zeymer, Cathleen; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen

    2014-01-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg²⁺ ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis.

  19. Spatial and temporal disaggregation of the on-road vehicle emission inventory in a medium-sized Andean city. Comparison of GIS-based top-down methodologies

    Science.gov (United States)

    Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.

    2018-04-01

    Emission data is an essential tool for understanding environmental problems associated with the sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions, and particularly their spatial and temporal distribution, in South America, mainly in medium-sized cities with populations under one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. The results allowed the identification of several emission hotspots in the downtown zone and the residential-commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2, respectively. These indexes were higher than those obtained in the residential-commercial area, with values of 2%/km2 for both pollutants. The temporal distribution showed a strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles on CO emissions in both the downtown and residential-commercial areas, and an impact of public transport on PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work compares other simplified top-down methods for spatially assessing the on-road vehicle EI. The results suggest that simplified methods can underestimate the spatial allocation of downtown emissions, a zone dominated by high traffic of vehicles. The comparison between simplified methods

  20. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    Science.gov (United States)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) and the related emission scenarios. Realistic and reliable GCM data are crucial for national- or basin-scale impact and vulnerability assessments under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the focus basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection uses the regional climate features of seasonal evolution as a benchmark and depends mainly on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by generalized Pareto distribution (GPD) fitting over a peak-over-threshold series. The rain-day frequency error is fixed by rank-order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month for in-situ stations versus the corresponding GCM grids. By implementing the proposed bias-correction technique for all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined for some of the basins in various climates
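The heavy-rainfall step described above (GPD fitting over a peak-over-threshold series) can be sketched as a simple quantile-mapping correction. This is a hedged, method-of-moments illustration, not the authors' actual fitting procedure, and every number below is invented:

```python
import statistics

def gpd_fit_mom(excesses):
    """Method-of-moments fit of a generalized Pareto distribution (GPD)."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / v)        # shape parameter
    sigma = 0.5 * m * (m * m / v + 1.0)  # scale parameter
    return xi, sigma

def gpd_quantile(p, xi, sigma):
    """Inverse CDF of the GPD (branch for xi != 0)."""
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

def correct_heavy_rain(gcm_values, obs_excesses, threshold):
    """Replace GCM peaks over a threshold by quantiles of the observed GPD tail."""
    xi_o, sig_o = gpd_fit_mom(obs_excesses)
    n = len([v for v in gcm_values if v > threshold])
    corrected = []
    for rank in range(1, n + 1):
        p = rank / (n + 1)  # plotting position of the ranked GCM excess
        corrected.append(threshold + gpd_quantile(p, xi_o, sig_o))
    return corrected  # ascending corrected peak values

corr = correct_heavy_rain([10.0, 12.0, 15.0, 20.0], [1.0, 2.0, 3.0, 4.0, 5.0], 8.0)
```

The drizzle-day and seasonal-variability corrections mentioned in the abstract (rank-order statistics and monthly gamma fits) would be applied to the body of the distribution in the same spirit.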

  1. Improving and disaggregating N_2O emission factors for ruminant excreta on temperate pasture soils

    International Nuclear Information System (INIS)

    Krol, D.J.; Carolan, R.; Minet, E.; McGeough, K.L.; Watson, C.J.; Forrestal, P.J.; Lanigan, G.J.; Richards, K.G.

    2016-01-01

    Cattle excreta deposited on grazed grasslands are a major source of the greenhouse gas (GHG) nitrous oxide (N_2O). Currently, many countries use the IPCC default emission factor (EF) of 2% to estimate excreta-derived N_2O emissions. However, emissions can vary greatly depending on the type of excreta (dung or urine), soil type and timing of application. Therefore, three experiments were conducted to quantify excreta-derived N_2O emissions and their associated EFs, and to assess the effect of soil type, season of application and type of excreta on the magnitude of losses. Cattle dung, urine and artificial urine treatments were applied in spring, summer and autumn to three temperate grassland sites with varying soil and weather conditions. Nitrous oxide emissions were measured from the three experiments over 12 months to generate annual N_2O emission factors. The EFs from urine-treated soil were greater (0.30–4.81% for real urine and 0.13–3.82% for synthetic urine) than those from dung treatments (−0.02–1.48%). Nitrous oxide emissions were driven by environmental conditions and could be predicted by rainfall and temperature before, and soil moisture deficit after, application; highlighting the potential for a decision support tool to reduce N_2O emissions by modifying grazing management based on these parameters. Emission factors varied seasonally, with the highest EFs in autumn, and were also dependent on soil type, with the lowest EFs observed from well-drained and the highest from imperfectly drained soil. The EFs averaged 0.31 and 1.18% for cattle dung and urine, respectively, both considerably lower than the IPCC default value of 2%. These results support both lowering and disaggregating EFs by excreta type. - Highlights: • N_2O emissions were measured from cattle excreta applied to pasture. • N_2O was universally higher from urine compared with dung. • N_2O was driven by rainfall, temperature and soil moisture deficit. • Emission
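An emission factor of the kind compared against the IPCC default is conventionally the fraction of applied nitrogen lost as N2O-N, relative to an untreated control. A minimal sketch with illustrative numbers (not the study's measurements):

```python
def emission_factor(n2o_n_treatment_kg, n2o_n_control_kg, n_applied_kg):
    """EF (%) = (cumulative N2O-N from treated plot - control plot) / N applied * 100."""
    return (n2o_n_treatment_kg - n2o_n_control_kg) / n_applied_kg * 100.0

# Example: a urine patch emitting 1.9 kg N2O-N/ha/yr, a control plot emitting
# 0.7 kg N2O-N/ha/yr, and 100 kg N/ha applied in the urine
ef_urine = emission_factor(1.9, 0.7, 100.0)  # 1.2 %, below the IPCC default of 2 %
```

Disaggregating EFs, as the record recommends, simply means keeping separate values of this quantity per excreta type, soil drainage class and season instead of one blanket 2%.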

  2. Foreign labor and regional labor markets: aggregate and disaggregate impact on growth and wages in Danish regions

    DEFF Research Database (Denmark)

    Schmidt, Torben Dall; Jensen, Peter Sandholt

    2013-01-01

    non-negative effects on the job opportunities for Danish workers in regional labor markets, whereas the evidence of a regional wage growth effect is mixed. We also present disaggregated results focusing on regional heterogeneity of business structures, skill levels and backgrounds of foreign labor....... The results are interpreted within a specific Danish labor market context and the associated regional outcomes. This adds to previous findings and emphasizes the importance of labor market institutions for the effect of foreign labor on regional employment growth....

  3. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    Full Text Available This is an empirical investigation of the homogeneity of gender-disaggregated labor using Cobb-Douglas and single-/multi-factor translog production functions, and labor productivity functions, for the USA. The results based on the single-factor translog model indicated that: an increase in the capital/female labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and the productivity of an input leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital than female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts to investigate the factors influencing gender-disaggregated labor productivity, and to design policies that achieve gender parity in numbers and productivity in the labor force and increase the ease of substitution between male and female labor, are required.
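A Cobb-Douglas function Y = A·K^a·Lm^b·Lf^c is linear in logs, so it is typically estimated by OLS: ln Y = ln A + a·ln K + b·ln Lm + c·ln Lf. The sketch below solves the normal equations on synthetic, noise-free data generated from known coefficients; it illustrates the estimation strategy only and is not the paper's dataset or specification:

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gaussian elimination."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(XtX[r][c]))
        XtX[c], XtX[p] = XtX[p], XtX[c]
        Xty[c], Xty[p] = Xty[p], Xty[c]
        for r in range(c + 1, k):
            f = XtX[r][c] / XtX[c][c]
            for j in range(c, k):
                XtX[r][j] -= f * XtX[c][j]
            Xty[r] -= f * Xty[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (Xty[r] - sum(XtX[r][j] * beta[j] for j in range(r + 1, k))) / XtX[r][r]
    return beta

rng = random.Random(0)
rows, y = [], []
for _ in range(200):
    lk, lm, lf = (rng.uniform(0.0, 2.0) for _ in range(3))  # ln K, ln Lm, ln Lf
    rows.append([1.0, lk, lm, lf])
    y.append(0.5 + 0.3 * lk + 0.4 * lm + 0.3 * lf)  # ln Y, noise-free for clarity
beta = ols(rows, y)  # recovers [ln A, a, b, c] = [0.5, 0.3, 0.4, 0.3]
```

A translog specification adds squared and cross terms (ln K·ln Lm, etc.) as extra regressors but is estimated the same way.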

  4. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    Science.gov (United States)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, which yielded considerably lower correlation values; nevertheless, the method can be applied in such a data-scarce situation to get an overview of the spatial distribution of the emissions generated by traffic activities.
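The simplified top-down approach amounts to allocating a city-wide emission total to grid cells in proportion to street density, then scoring the result cell-by-cell against the bottom-up reference with a correlation coefficient. A toy sketch with invented numbers:

```python
def allocate_by_street_density(total_emission, street_km_per_cell):
    """Top-down allocation: each cell receives a share proportional to its street length."""
    total_km = sum(street_km_per_cell)
    return [total_emission * km / total_km for km in street_km_per_cell]

def pearson_r(xs, ys):
    """Pearson correlation between two emission maps flattened to lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

street_km = [12.0, 5.0, 2.0, 1.0]            # street length per grid cell (assumed)
top_down = allocate_by_street_density(100.0, street_km)
bottom_up = [58.0, 27.0, 10.0, 5.0]          # reference bottom-up inventory (assumed)
r = pearson_r(top_down, bottom_up)           # high r -> good spatial accuracy
```

In a compact, single-centre city the street-density proxy tracks the bottom-up map well (r > 0.8, as in the record); in a multi-nuclei city the same weights misplace emissions and r drops.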

  5. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood–Brain Barrier

    Directory of Open Access Journals (Sweden)

    Ali Kafash Hoshiar

    2017-12-01

    Full Text Available The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogramed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed, and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme.

  6. Specific effect of the linear charge density of the acid polysaccharide on thermal aggregation/ disaggregation processes in complex carrageenan/lysozyme systems

    NARCIS (Netherlands)

    Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.

    2017-01-01

    We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex

  7. Defining the spatial scale in modern regional analysis new challenges from data at local level

    CERN Document Server

    Fernández Vázquez, Esteban

    2014-01-01

    This book discusses the concept of region, including techniques of ecological inference applied to estimating disaggregated data from observable aggregates. The final part presents applications in line with the functional areas definition in regional analysis.

  8. Changes in Food Intake in Australia: Comparing the 1995 and 2011 National Nutrition Survey Results Disaggregated into Basic Foods.

    Science.gov (United States)

    Ridoutt, Bradley; Baird, Danielle; Bastiaans, Kathryn; Hendrie, Gilly; Riley, Malcolm; Sanguansri, Peerasak; Syrette, Julie; Noakes, Manny

    2016-05-25

    As nations seek to address obesity and diet-related chronic disease, understanding shifts in food intake over time is an imperative. However, quantifying intake of basic foods is not straightforward because of the diversity of raw and cooked wholefoods, processed foods and mixed dishes actually consumed. In this study, data from the Australian national nutrition surveys of 1995 and 2011, each involving more than 12,000 individuals and covering more than 4500 separate foods, were coherently disaggregated into basic foods, with cooking and processing factors applied where necessary. Although Australians are generally not eating in a manner consistent with national dietary guidelines, there have been several positive changes. Australians are eating more whole fruit, a greater diversity of vegetables, more beans, peas and pulses, less refined sugar, and they have increased their preference for brown and wholegrain cereals. Adult Australians have also increased their intake of nuts and seeds. Fruit juice consumption markedly declined, especially for younger Australians. Cocoa consumption increased and shifts in dairy product intake were mixed, reflecting one of several important differences between age and gender cohorts. This study sets the context for more detailed research at the level of specific foods to understand individual and household differences.

  9. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlemann, Verena [Emory Univ., Atlanta, GA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Computations with graphs of this type on parallel distributed-memory computers are challenging because scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are then performed via a factored triple matrix-vector product involving the embedding graph. Even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
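The factored triple product can be illustrated on a tiny dense example: a hub vertex is split into two copies, an aggregation matrix P maps the copies back to the original vertex, and A @ x is recovered as P @ (B @ (Pᵀ @ x)). The 3x3/4x4 matrices below are invented so that A = P B Pᵀ holds exactly; real instances are huge and sparse:

```python
def matvec(matrix, vec):
    """Dense matrix-vector product (lists of lists)."""
    return [sum(a * x for a, x in zip(row, vec)) for row in matrix]

def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

# Original Laplacian-like matrix A of a star graph (hub 0 linked to 1 and 2)
A = [[ 2.0, -1.0, -1.0],
     [-1.0,  1.0,  0.0],
     [-1.0,  0.0,  1.0]]
# P aggregates the two copies of the hub; B is the Laplacian of the
# embedding graph in which the hub is split, so each copy has degree 1.
P = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
B = [[ 1.0,  0.0, -1.0,  0.0],
     [ 0.0,  1.0,  0.0, -1.0],
     [-1.0,  0.0,  1.0,  0.0],
     [ 0.0, -1.0,  0.0,  1.0]]

x = [1.0, 2.0, 3.0]
direct = matvec(A, x)
factored = matvec(P, matvec(B, matvec(transpose(P), x)))  # equals direct
```

The payoff in the distributed setting is that B, being more regular, partitions well, so the three cheap products communicate far less than the one product with the badly partitioned A.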

  10. Changes in Food Intake in Australia: Comparing the 1995 and 2011 National Nutrition Survey Results Disaggregated into Basic Foods

    Directory of Open Access Journals (Sweden)

    Bradley Ridoutt

    2016-05-01

    Full Text Available As nations seek to address obesity and diet-related chronic disease, understanding shifts in food intake over time is an imperative. However, quantifying intake of basic foods is not straightforward because of the diversity of raw and cooked wholefoods, processed foods and mixed dishes actually consumed. In this study, data from the Australian national nutrition surveys of 1995 and 2011, each involving more than 12,000 individuals and covering more than 4500 separate foods, were coherently disaggregated into basic foods, with cooking and processing factors applied where necessary. Although Australians are generally not eating in a manner consistent with national dietary guidelines, there have been several positive changes. Australians are eating more whole fruit, a greater diversity of vegetables, more beans, peas and pulses, less refined sugar, and they have increased their preference for brown and wholegrain cereals. Adult Australians have also increased their intake of nuts and seeds. Fruit juice consumption markedly declined, especially for younger Australians. Cocoa consumption increased and shifts in dairy product intake were mixed, reflecting one of several important differences between age and gender cohorts. This study sets the context for more detailed research at the level of specific foods to understand individual and household differences.

  11. Hyperforin prevents beta-amyloid neurotoxicity and spatial memory impairments by disaggregation of Alzheimer's amyloid-beta-deposits.

    Science.gov (United States)

    Dinamarca, M C; Cerpa, W; Garrido, J; Hancke, J L; Inestrosa, N C

    2006-11-01

    The major protein constituent of amyloid deposits in Alzheimer's disease (AD) is the amyloid beta-peptide (Abeta). In the present work, we determined the effect of hyperforin, an acylphloroglucinol compound isolated from Hypericum perforatum (St John's Wort), on Abeta-induced spatial memory impairments and on Abeta neurotoxicity. We report here that hyperforin: (1) decreases amyloid deposit formation in rats injected with amyloid fibrils in the hippocampus; (2) decreases the neuropathological changes and behavioral impairments in a rat model of amyloidosis; and (3) prevents Abeta-induced neurotoxicity in hippocampal neurons, both from amyloid fibrils and from Abeta oligomers, avoiding the increase in reactive oxygen species associated with amyloid toxicity. These effects could be explained by the capacity of hyperforin to disaggregate amyloid deposits in a dose- and time-dependent manner and to decrease Abeta aggregation and amyloid formation. Altogether this evidence suggests that hyperforin may be useful to decrease amyloid burden and toxicity in AD patients, and may be a putative therapeutic agent against the disease.

  12. Disaggregating the Distal, Proximal, and Time-Varying Effects of Parent Alcoholism on Children's Internalizing Symptoms

    Science.gov (United States)

    Hussong, A. M.; Cai, L.; Curran, P. J.; Flora, D. B.; Chassin, L. A.; Zucker, R. A.

    2008-01-01

    We tested whether children show greater internalizing symptoms when their parents are actively abusing alcohol. In an integrative data analysis, we combined observations over ages 2 through 17 from two longitudinal studies of children of alcoholic parents and matched controls recruited from the community. Using a mixed modeling approach, we tested…

  13. Disaggregation of SMOS soil moisture over West Africa using the Temperature and Vegetation Dryness Index based on SEVIRI land surface parameters

    DEFF Research Database (Denmark)

    Tagesson, T.; Horion, S.; Nieto, H.

    2018-01-01

    the Temperature and Vegetation Dryness Index (TVDI) that served as SM proxy within the disaggregation process. West Africa (3 N, 26 W; 28 N, 26 E) was selected as a case study as it presents both an important North-South climate gradient and a diverse range of ecosystem types. The main challenge was to set up...... resolution of SMOS SM, with potential application for local drought/flood monitoring of importance for the livelihood of the population of West Africa....

  14. NIR-Red Spectra-Based Disaggregation of SMAP Soil Moisture to 250 m Resolution Based on SMAPEx-4/5 in Southeastern Australia

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-01-01

    Full Text Available To meet the demand of regional hydrological and agricultural applications, a new method named near infrared-red (NIR-red) spectra-based disaggregation (NRSD) was proposed to perform a disaggregation of Soil Moisture Active Passive (SMAP) products from 36 km to 250 m resolution. The NRSD combined the proposed normalized soil moisture index (NSMI) with SMAP data to obtain 250 m resolution soil moisture mapping. The experiment was conducted in southeastern Australia during SMAP Experiments (SMAPEx) 4/5 and validated with the in situ SMAPEx network. Results showed that NRSD performed a decent downscaling (root-mean-square error (RMSE) = 0.04 m3/m3 and 0.12 m3/m3 during SMAPEx-4 and SMAPEx-5, respectively). Based on the validation, it was found that the proposed NSMI is a new alternative indicator for denoting the heterogeneity of soil moisture at sub-kilometer scales. Attributed to the excellent performance of the NSMI, NRSD has a higher overall accuracy, finer spatial representation within SMAP pixels and wider applicable scope (on usability tests for land cover, vegetation density and drought condition) than the disaggregation based on physical and theoretical scale change (DISPATCH) has at 250 m resolution. This reveals that the NRSD method is expected to provide soil moisture mapping at 250 m resolution for large-scale hydrological and agricultural studies.
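Index-based disaggregation schemes of this family share one core step: the coarse radiometer value is redistributed over the fine cells in proportion to an optical index, while preserving the coarse-pixel mean. The sketch below uses a generic stand-in for the NSMI (whose exact formula is the paper's contribution and is not reproduced here), with invented values:

```python
def disaggregate(coarse_sm, fine_index):
    """Redistribute a coarse soil-moisture value over fine cells by index weight.

    The weights are the fine-scale index values normalized by their mean, so
    the average of the fine cells equals the coarse value (mass conservation).
    """
    mean_idx = sum(fine_index) / len(fine_index)
    return [coarse_sm * idx / mean_idx for idx in fine_index]

coarse = 0.20                        # 36-km SMAP soil moisture, m3/m3 (assumed)
index = [0.8, 1.0, 1.2, 1.0]         # normalized index per 250-m cell (assumed)
fine = disaggregate(coarse, index)   # fine-scale map; its mean equals `coarse`
```

Wetter-looking cells (higher index) receive more than the coarse average, drier cells less; the validation against the in situ network then scores how well that redistribution matches reality.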

  15. The relationship among biodiversity, governance, wealth, and scientific capacity at a country level: Disaggregation and prioritization.

    Science.gov (United States)

    Lira-Noriega, Andrés; Soberón, Jorge

    2015-09-01

    At a global level, the relationship between biodiversity importance and the capacity to manage it is often assumed to be negative, without much differentiation among the more than 200 countries and territories of the world. We examine this relationship using a database of terrestrial biodiversity, wealth and governance indicators for most countries. From these, principal components analysis was used to construct aggregated indicators at global and regional scales. Wealth, governance, and scientific capacity represent different skills and abilities in relation to biodiversity importance. Our results show that the relationship between biodiversity and the different factors is not simple: in most regions wealth and capacity vary positively with biodiversity, while governance varies negatively with it. However, these trends are, to a certain extent, concentrated in certain groups of nations and outlier countries. We discuss our results in the context of collaboration and joint efforts among biodiversity-rich countries and foreign agencies.

  16. Understanding the spectrum of residential energy-saving behaviours: French evidence using disaggregated data

    International Nuclear Information System (INIS)

    Belaïd, Fateh; Garcia, Thomas

    2016-01-01

    Analysing household energy-saving behaviours is crucial to improve energy consumption predictions and energy policy making. How should we quantitatively measure them? What are their determinants? This study explores the main factors influencing residential energy-saving behaviours based on a bottom-up multivariate statistical approach using data from the recent French PHEBUS survey. Firstly, we assess energy-saving behaviours on a one-dimension scale using IRT. Secondly, we use linear regression with an innovative variable selection method via adaptive lasso to tease out the effects of both macro and micro factors on the behavioural score. The results highlight the impact of five main attributes incentivizing energy-saving behaviours based on cross-variable analyses: energy price, household income, education level, age of head of household and dwelling energy performance. In addition, our results suggest that the analysis of the inverted U-shape impact of age enables the expansion of the energy consumption life cycle theory to energy-saving behaviours. - Highlights: • We examine the main factors influencing residential energy-saving behaviours. • We use data from the recent French PHEBUS survey. • We use IRT to assess energy-saving behaviours on a one-dimension scale. • We use linear regression with an innovative variable selection method via adaptive lasso. • We highlight the impact of five main attributes incentivizing energy-saving behaviours.

  17. A programmable Si-photonic node for SDN-enabled Bloom filter forwarding in disaggregated data centers

    Science.gov (United States)

    Moralis-Pegios, M.; Terzenidis, N.; Vagionas, C.; Pitris, S.; Chatzianagnostou, E.; Brimont, A.; Zanzi, A.; Sanchis, P.; Marti, J.; Kraft, J.; Rochracher, K.; Dorrestein, S.; Bogdan, M.; Tekin, T.; Syrivelis, D.; Tassiulas, L.; Miliou, A.; Pleros, N.; Vyrsokinos, K.

    2017-02-01

    Programmable switching nodes supporting Software-Defined Networking (SDN) over optical interconnecting technologies arise as a key enabling technology for future disaggregated Data Center (DC) environments. The SDN-enabling roadmap of intra-DC optical solutions is already a reality for rack-to-rack interconnects, with recent research reporting on interesting applications of programmable silicon photonic switching fabrics addressing board-to-board and even on-board applications. In this perspective, simplified information addressing schemes like Bloom filter (BF)-based labels emerge as a highly promising solution for ensuring rapid switch reconfiguration, quickly following the changes enforced in network size, network topology or even content location. The benefits of BF-based forwarding have been successfully demonstrated in the Information-Centric Networking (ICN) paradigm, while theoretical studies have also revealed the energy consumption and speed advantages when applied in DCs. In this paper we present for the first time a programmable 4x4 silicon photonic switch that supports SDN through the use of BF-labeled router ports. Our scheme significantly simplifies packet forwarding as it negates the need for large forwarding tables, allowing for remote control through modifications of the assigned BF labels. We demonstrate 1x4 switch operation, controlling the Si-Pho switch with a Stratix V FPGA module, which is responsible for processing the packet ID and correlating its destination with the appropriate BF-labeled outgoing port. DAC- and amplifier-less control of the carrier-injection Si-Pho switches is demonstrated, revealing successful switching of 10Gb/s data packets with BF-based forwarding information changes taking place at a time-scale that equals the duration of four consecutive packets.
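The Bloom-filter forwarding idea behind BF-labeled ports is table-free: each outgoing port carries a small BF label, the packet header carries the OR of the labels along its route, and a node forwards on every port whose label bits are all set in the packet's filter (false positives cause occasional extra copies, never drops). A minimal software sketch, with made-up port names and filter parameters, unrelated to the FPGA implementation in the paper:

```python
import hashlib

M = 64  # Bloom filter size in bits (assumed)

def bf_bits(link_name, k=3):
    """Set k pseudo-random bit positions for a link identifier."""
    bits = 0
    for i in range(k):
        h = hashlib.sha256(f"{link_name}:{i}".encode()).digest()
        bits |= 1 << (int.from_bytes(h[:4], "big") % M)
    return bits

def packet_filter(route_links):
    """Packet header BF: OR of the labels of every link on the route."""
    f = 0
    for link in route_links:
        f |= bf_bits(link)
    return f

def forward_ports(packet_bf, port_labels):
    """Forward on each port whose label is fully contained in the packet BF."""
    return [p for p, label in port_labels.items() if packet_bf & label == label]

ports = {p: bf_bits(p) for p in ("port1", "port2", "port3", "port4")}
pkt = packet_filter(["port2", "port4"])    # route traverses ports 2 and 4
out = forward_ports(pkt, ports)            # includes port2 and port4
```

Reconfiguring the network then means rewriting a few labels rather than updating forwarding tables, which is what makes the scheme attractive for fast optical switch control.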

  18. The added value of stochastic spatial disaggregation for short-term rainfall forecasts currently available in Canada

    Science.gov (United States)

    Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René

    2017-11-01

    Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products, such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, the spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data is freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation and/or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10-km) within a forecast framework (6-h). For 30 6-h rainfall events occurring within a 40,000-km2 area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated at 100-km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.
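
    The core constraint any spatial disaggregator such as GSDM must honor is mass conservation: the fine-resolution rainfall depths must average back to the coarse-cell depth they were generated from. A minimal stochastic sketch of that constraint (not the Gibbs sampler itself; the depth and cell counts are illustrative) looks like this:

    ```python
    import random

    def disaggregate(coarse_mm: float, n_cells: int, seed: int = 1):
        """Randomly split one coarse-cell rainfall depth over n fine cells,
        normalizing the random weights so the fine cells average back to the
        coarse depth (mass conservation)."""
        rng = random.Random(seed)
        raw = [rng.random() for _ in range(n_cells)]
        total = sum(raw)
        return [coarse_mm * n_cells * r / total for r in raw]

    # One 100-km cell disaggregated to a 10x10 block of 10-km cells.
    fine = disaggregate(12.0, 100)
    print(sum(fine) / len(fine))  # mean is preserved (~12.0 mm)
    ```

    GSDM additionally conditions the spatial pattern of the fine field on observed rainfall structure, which this uniform-random sketch deliberately omits.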

  19. A Biodiversity Indicators Dashboard: Addressing Challenges to Monitoring Progress towards the Aichi Biodiversity Targets Using Disaggregated Global Data

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L.; Young, Bruce E.; Brooks, Thomas M.; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H. M.; Larsen, Frank W.; Hamilton, Healy; Hansen, Matthew C.; Turner, Will R.

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's “Aichi Targets”. These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity “dashboard” – a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  20. L-band brightness temperature disaggregation for use with S-band and C-band radiometer data for WCOM

    Science.gov (United States)

    Yao, P.; Shi, J.; Zhao, T.; Cosh, M. H.; Bindlish, R.

    2017-12-01

    There are two passive microwave sensors onboard the Water Cycle Observation Mission (WCOM): a synthetic aperture radiometer operating at L-, S- and C-bands and a scanning microwave radiometer operating from C- to W-bands. This provides a unique opportunity to disaggregate L-band brightness temperature (soil moisture) with S-band and C-band radiometer data. In this study, passive-only downscaling methodologies are developed and evaluated. Based on radiative transfer modeling, it was found that the TBs (brightness temperatures) at L-band and S-band exhibit a linear relationship, while there is an exponential relationship between L-band and C-band. We carried out downscaling by two methods: (1) downscaling with L-S-C band passive measurements at the same incidence angle from payload IMI; (2) downscaling with L-C band passive measurements at different incidence angles from payloads IMI and PMI. The downscaling method with L-S bands at the same incidence angle was first evaluated using SMEX02 data. The RMSEs are 2.69 K and 1.52 K for H and V polarization, respectively. The downscaling method with L-C bands at different incidence angles was developed using SMEX03 data. The RMSEs are 2.97 K and 2.68 K for H and V polarization, respectively. These results showed that high-resolution L-band brightness temperature and soil moisture products could be generated from the future WCOM passive-only observations.
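
    The linear L-S relationship reported above suggests a simple downscaling recipe: predict fine-scale L-band TBs from the fine-scale S-band field via TB_L = a*TB_S + b, then bias-correct so the fine field averages back to the observed coarse L-band TB. The sketch below illustrates that recipe only; the coefficients (a, b), the TB values and the 4x4 fine grid are invented for illustration, not values from the study.

    ```python
    import random

    random.seed(0)
    # Hypothetical fine-grid S-band TBs (K) inside one coarse L-band cell.
    tb_s_fine = [250 + 10 * random.random() for _ in range(16)]
    tb_l_coarse = 262.0                       # the one observed coarse L-band TB (K)

    # Assumed linear L-S relationship: TB_L = a * TB_S + b (illustrative values).
    a, b = 0.9, 20.0
    tb_l_fine = [a * t + b for t in tb_s_fine]

    # Bias-correct so the fine field averages back to the observed coarse TB.
    mean_fine = sum(tb_l_fine) / len(tb_l_fine)
    tb_l_fine = [t + (tb_l_coarse - mean_fine) for t in tb_l_fine]
    print(sum(tb_l_fine) / len(tb_l_fine))    # matches tb_l_coarse (up to rounding)
    ```

    The exponential L-C relationship would replace the linear map with TB_L = c * exp(d * TB_C), but the consistency step is the same.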

  1. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L; Young, Bruce E; Brooks, Thomas M; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H M; Larsen, Frank W; Hamilton, Healy; Hansen, Matthew C; Turner, Will R

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. 
This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of

  2. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Directory of Open Access Journals (Sweden)

    Xuemei Han

    Full Text Available Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. 
This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  3. Disaggregation of collective dose-a worked example based on future discharges from the Sellafield nuclear fuel reprocessing site, UK

    International Nuclear Information System (INIS)

    Jones, S R; Lambers, B; Stevens, A

    2004-01-01

    Collective dose has long been advocated as an important measure of the detriment associated with practices that involve the use of radioactivity. Application of collective dose in the context of worker protection is relatively straightforward, whereas its application in the context of discharges to the environment can yield radically different conclusions depending upon the population groups and integration times that are considered. The computer program PC-CREAM98 has been used to provide an indicative disaggregation into individual dose bands of the collective dose due to potential future radioactive discharges from the nuclear fuel reprocessing site at Sellafield in the UK. Two alternative discharge scenarios are considered, which represent a 'stop reprocessing early, minimum discharge' scenario and a 'reprocessing beyond current contracts' scenario. For aerial discharges, collective dose at individual effective dose rates exceeding 0.015 μSv y⁻¹ is only incurred within the UK, and at effective dose rates exceeding 1.5 μSv y⁻¹ is only incurred within about 20 km of Sellafield. The geographical distribution of collective dose from liquid discharges is harder to assess, but it appears that collective dose incurred outside the UK is at levels of individual effective dose rate below 1.5 μSv y⁻¹, with the majority being incurred at rates of 0.002 μSv y⁻¹ or less. In multi-attribute utility analyses, the view taken on the radiological detriment to be attributed to the two discharge scenarios will depend critically on the weight or monetary value ascribed to collective doses incurred within the differing bands of individual dose rate
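
    Disaggregating collective dose into individual dose-rate bands, as described above, is arithmetically just a banded sum of (persons × individual dose rate). The sketch below shows the bookkeeping; the population groups, dose rates and band edges are invented placeholders, not the PC-CREAM98 results.

    ```python
    # Hypothetical population groups: (persons, individual dose rate in µSv/y).
    groups = [
        (5.0e5, 2.0),      # near-site population
        (5.0e7, 0.01),     # rest of UK
        (5.0e8, 0.002),    # outside UK
    ]

    # Individual dose-rate bands (µSv/y), highest first.
    bands = [(1.5, float("inf")), (0.015, 1.5), (0.0, 0.015)]

    def collective_dose_by_band(groups, bands):
        """Disaggregate collective dose (person-Sv/y) into dose-rate bands.
        1 µSv = 1e-6 Sv, so person-µSv/y is converted to person-Sv/y."""
        out = {}
        for lo, hi in bands:
            out[(lo, hi)] = sum(n * d * 1e-6 for n, d in groups if lo <= d < hi)
        return out

    print(collective_dose_by_band(groups, bands))
    ```

    The policy point in the abstract is exactly this table: two scenarios can have similar total person-Sv yet very different amounts falling in the high-individual-dose band.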

  4. Not All Large Customers are Made Alike: Disaggregating Response to Default-Service Day-Ahead Market Pricing

    International Nuclear Information System (INIS)

    Hopper, Nicole; Goldman, Charles; Neenan, Bernie

    2006-01-01

    For decades, policymakers and program designers have operated on the assumption that large customers, particularly industrial facilities, are the best candidates for real-time pricing (RTP). This assumption is based partly on practical considerations (large customers can provide potentially large load reductions) but also on the premise that businesses focused on production cost minimization are most likely to participate and respond to opportunities for bill savings. Yet few studies have examined the actual price response of large industrial and commercial customers in a disaggregated fashion, nor have factors such as the impacts of demand response (DR) enabling technologies, simultaneous emergency DR program participation and price response barriers been fully elucidated. This second-phase case study of Niagara Mohawk Power Corporation (NMPC)'s large customer RTP tariff addresses these information needs. The results demonstrate the extreme diversity of large customers' response to hourly varying prices. While two-thirds exhibit some price response, about 20 percent of customers provide 75-80 percent of the aggregate load reductions. Manufacturing customers are most price-responsive as a group, followed by government/education customers, while other sectors are largely unresponsive. However, individual customer response varies widely. Currently, enabling technologies do not appear to enhance hourly price response; customers report using them for other purposes. The New York Independent System Operator (NYISO)'s emergency DR programs enhance price response, in part by signaling to customers that day-ahead prices are high. In sum, large customers do currently provide moderate price response, but there is significant room for improvement through targeted programs that help customers develop and implement automated load-response strategies

  5. Spatial Disaggregation of CO2 Emissions for the State of California

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; de la Rue du Can, Stephane; Wenzel, Tom; Fischer, Marc

    2008-06-11

    This report allocates California's 2004 statewide carbon dioxide (CO2) emissions from fuel combustion to the 58 counties in the state. The total emissions are allocated to counties using several different methods, based on the availability of data for each sector. Data on natural gas use in all sectors are available by county. Fuel consumption by power and combined heat and power generation plants is available for individual plants. Bottom-up models were used to distribute statewide fuel sales-based CO2 emissions by county for on-road vehicles, aircraft, and watercraft. All other sources of CO2 emissions were allocated to counties based on surrogates for activity. CO2 emissions by sector were estimated for each county, as well as for the South Coast Air Basin. It is important to note that emissions from some sources, notably electricity generation, were allocated to counties based on where the emissions were generated, rather than where the electricity was actually consumed. In addition, several sources of CO2 emissions, such as electricity generated in and imported from other states and international marine bunker fuels, were not included in the analysis. California Air Resource Board (CARB) does not include CO2 emissions from interstate and international air travel in the official California greenhouse gas (GHG) inventory, so those emissions were allocated to counties for informational purposes only. Los Angeles County is responsible for by far the largest CO2 emissions from combustion in the state: 83 million metric tonnes (Mt), or 24 percent of total CO2 emissions in California, more than twice that of the next county (Kern, with 38 Mt, or 11 percent of statewide emissions). The South Coast Air Basin accounts for 122 MtCO2, or 35 percent of all emissions from fuel combustion in the state. The distribution of emissions by sector varies considerably by county, with on-road motor vehicles dominating most counties, but large stationary sources and rail travel
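
    The "surrogates for activity" allocation mentioned above is a proportional split: a statewide sector total is distributed across counties in proportion to each county's share of some activity measure. A minimal sketch (the totals and shares below are illustrative round numbers, not the report's data):

    ```python
    def allocate_by_surrogate(state_total_mt: float, county_activity: dict) -> dict:
        """Allocate a statewide emissions total (Mt CO2) to counties in
        proportion to a surrogate activity measure (e.g., fuel sales,
        vehicle-miles, or population)."""
        total_activity = sum(county_activity.values())
        return {c: state_total_mt * a / total_activity
                for c, a in county_activity.items()}

    # Illustrative only: 100 Mt split over three county groups.
    shares = allocate_by_surrogate(100.0, {"Los Angeles": 24, "Kern": 11, "Other": 65})
    assert abs(sum(shares.values()) - 100.0) < 1e-9  # allocation conserves the total
    print(shares)
    ```

    The conservation check at the end matters in practice: every county-level method used per sector must sum back to the statewide inventory it disaggregates.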

  6. Data Warehousing: Beyond Disaggregation.

    Science.gov (United States)

    Rudner, Lawrence M.; Boston, Carol

    2003-01-01

    Discusses data warehousing, which provides information more fully responsive to local, state, and federal data needs. Such a system allows educators to generate reports and analyses that supply information, provide accountability, explore relationships among different kinds of data, and inform decision-makers. (Contains one figure and eight…

  7. Improving and disaggregating N{sub 2}O emission factors for ruminant excreta on temperate pasture soils

    Energy Technology Data Exchange (ETDEWEB)

    Krol, D.J., E-mail: kroldj@tcd.ie [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Carolan, R. [Agri-Food and Biosciences Institute (AFBI), Belfast BT9 5PX (Ireland); Minet, E. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); McGeough, K.L.; Watson, C.J. [Agri-Food and Biosciences Institute (AFBI), Belfast BT9 5PX (Ireland); Forrestal, P.J. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Lanigan, G.J., E-mail: gary.lanigan@teagasc.ie [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland); Richards, K.G. [Teagasc, Crops, Land Use and Environment Programme, Johnstown Castle, Co., Wexford (Ireland)

    2016-10-15

    Cattle excreta deposited on grazed grasslands are a major source of the greenhouse gas (GHG) nitrous oxide (N2O). Currently, many countries use the IPCC default emission factor (EF) of 2% to estimate excreta-derived N2O emissions. However, emissions can vary greatly depending on the type of excreta (dung or urine), soil type and timing of application. Therefore, three experiments were conducted to quantify excreta-derived N2O emissions and their associated EFs, and to assess the effect of soil type, season of application and type of excreta on the magnitude of losses. Cattle dung, urine and artificial urine treatments were applied in spring, summer and autumn to three temperate grassland sites with varying soil and weather conditions. Nitrous oxide emissions were measured from the three experiments over 12 months to generate annual N2O emission factors. The EFs from urine-treated soil were greater (0.30–4.81% for real urine and 0.13–3.82% for synthetic urine) than those from dung treatments (−0.02–1.48%). Nitrous oxide emissions were driven by environmental conditions and could be predicted by rainfall and temperature before, and soil moisture deficit after, application; highlighting the potential for a decision support tool to reduce N2O emissions by modifying grazing management based on these parameters. Emission factors varied seasonally, with the highest EFs in autumn, and were also dependent on soil type, with the lowest EFs observed from well-drained and the highest from imperfectly drained soils. The EFs averaged 0.31 and 1.18% for cattle dung and urine, respectively, both of which were considerably lower than the IPCC default value of 2%. These results support both lowering and disaggregating EFs by excreta type. - Highlights: • N2O emissions were measured from cattle excreta applied to pasture. • N2O was universally higher from urine compared with dung. • N2O was driven by rainfall, temperature
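
    An annual N2O emission factor of the kind reported above is conventionally computed as the percentage of applied nitrogen lost as N2O-N, after subtracting background (control-plot) emissions. The sketch below shows that standard calculation; the plot-scale numbers are invented for illustration and are not the study's measurements.

    ```python
    def n2o_emission_factor(n2o_n_treated_kg: float,
                            n2o_n_control_kg: float,
                            n_applied_kg: float) -> float:
        """Annual N2O EF (%) = fraction of applied N emitted as N2O-N,
        corrected for background emissions from an untreated control."""
        return 100.0 * (n2o_n_treated_kg - n2o_n_control_kg) / n_applied_kg

    # Illustrative plot-scale values (kg N2O-N/ha/yr and kg N/ha), not study data.
    ef_urine = n2o_emission_factor(4.1, 0.5, 300.0)
    ef_dung = n2o_emission_factor(0.9, 0.5, 130.0)
    print(round(ef_urine, 2), round(ef_dung, 2))  # urine EF well above dung EF
    ```

    Disaggregating EFs by excreta type, as the abstract recommends, simply means carrying separate urine and dung EFs through the national inventory instead of one pooled default.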

  8. Convergence of in-Country Prices for the Turkish Economy : A Panel Data Search for the PPP Hypothesis Using Sub-Regional Disaggregated Data

    Directory of Open Access Journals (Sweden)

    Mustafa METE

    2014-12-01

    Full Text Available This paper examines whether in-country prices in the Turkish economy form a stationary relationship, lending support to long-run purchasing power parity in economic theory. For this purpose, a sub-regional categorization of the economy is considered over the investigation period of 2005-2012, and, following Esaka (2003), the study uses a panel estimation framework consisting of 12 disaggregated consumer price indices to test whether the relative prices of goods between sub-regions of the Turkish economy can be represented by stationary time series properties.

  9. Core-size regulated aggregation/disaggregation of citrate-coated gold nanoparticles (5-50 nm) and dissolved organic matter: Extinction, emission, and scattering evidence

    Science.gov (United States)

    Esfahani, Milad Rabbani; Pallem, Vasanta L.; Stretz, Holly A.; Wells, Martha J. M.

    2018-01-01

    Knowledge of the interactions between gold nanoparticles (GNPs) and dissolved organic matter (DOM) is significant in the development of detection devices for environmental sensing, studies of environmental fate and transport, and advances in antifouling water treatment membranes. The specific objective of this research was to spectroscopically investigate the fundamental interactions between citrate-stabilized gold nanoparticles (CT-GNPs) and DOM. Studies indicated that 30 and 50 nm diameter GNPs promoted disaggregation of the DOM. This result (disaggregation of an environmentally important polyelectrolyte) will be quite useful regarding antifouling properties in water treatment and water-based sensing applications. Furthermore, resonance Rayleigh scattering results showed significant enhancement in the UV range, which can be useful for characterizing DOM and can be exploited as an analytical tool to better sense and improve our comprehension of nanomaterial interactions with environmental systems. CT-GNPs having core size diameters of 5, 10, 30, and 50 nm were studied in the absence and presence of added DOM at 2 and 8 ppm at low ionic strength and near neutral pH (6.0-6.5), approximating surface water conditions. Interactions were monitored by cross-interpretation among ultraviolet (UV)-visible extinction spectroscopy, excitation-emission matrix (EEM) spectroscopy (emission and Rayleigh scattering), and dynamic light scattering (DLS). This comprehensive combination of spectroscopic analyses lends new insights into the antifouling behavior of GNPs. The CT-GNP-5 and -10 controls emitted light and aggregated. In contrast, the CT-GNP-30 and CT-GNP-50 controls scattered light intensely, but did not aggregate and did not emit light. The presence of any CT-GNP did not affect the extinction spectra of DOM, and the presence of DOM did not affect the extinction spectra of the CT-GNPs. The emission spectra (visible range) differed only slightly between calculated and actual

  10. Disaggregating Within- and Between-Person Effects of Social Identification on Subjective and Endocrinological Stress Reactions in a Real-Life Stress Situation.

    Science.gov (United States)

    Ketturat, Charlene; Frisch, Johanna U; Ullrich, Johannes; Häusser, Jan A; van Dick, Rolf; Mojzisch, Andreas

    2016-02-01

    Several experimental and cross-sectional studies have established the stress-buffering effect of social identification, yet few longitudinal studies have been conducted within this area of research. This study is the first to make use of a multilevel approach to disaggregate between- and within-person effects of social identification on subjective and endocrinological stress reactions. Specifically, we conducted a study with 85 prospective students during their 1-day aptitude test for a university sports program. Ad hoc groups were formed, in which students completed several tests in various disciplines together. At four points in time, salivary cortisol, subjective strain, and identification with their group were measured. Results of multilevel analyses show a significant within-person effect of social identification: The more students identified with their group, the less stress they experienced and the lower their cortisol response was. Between-person effects were not significant. Advantages of using multilevel approaches within this field of research are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.
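
    The within-/between-person disaggregation used in the multilevel analyses above is typically implemented by person-mean centering: each repeated measure is split into a deviation from that person's own mean (the within-person predictor) and the person mean's deviation from the grand mean (the between-person predictor). A minimal sketch with invented identification scores (not the study's data):

    ```python
    import statistics

    # Hypothetical repeated group-identification scores per person.
    data = {"p1": [3, 4, 5, 4], "p2": [1, 2, 1, 2]}

    grand_mean = statistics.mean(x for xs in data.values() for x in xs)

    rows = []
    for person, xs in data.items():
        pm = statistics.mean(xs)                       # person mean
        for x in xs:
            rows.append({"person": person,
                         "within": x - pm,             # level-1 (within-person) predictor
                         "between": pm - grand_mean})  # level-2 (between-person) predictor

    # Within-person deviations sum to zero for each person by construction,
    # so the two predictors capture distinct sources of variance.
    for p in data:
        assert abs(sum(r["within"] for r in rows if r["person"] == p)) < 1e-9
    ```

    A multilevel model then regresses the outcome (e.g., cortisol) on both predictors, yielding the separate within- and between-person effects the study reports.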

  11. Estimation of future levels and changes in profitability: The effect of the relative position of the firm in its industry and the operating-financing disaggregation

    Directory of Open Access Journals (Sweden)

    Borja Amor-Tapia

    2014-01-01

    Full Text Available In this paper we examine how the relative position of a firm's Return on Equity (ROE) in its industry affects the predictability of next-year ROE levels, and of ROE changes from year to year. Using the Nissim and Penman breakdown into operating and financing drivers, the significant role of the industry factor is established, although changes in signs suggest subtle non-linear relations in the drivers. Our study avoids problems originating from negative signs by analyzing sorts and by making new regressions with disaggregated second-order drivers by signs. This way, our results provide evidence of some different patterns in the influence of the first-level drivers of ROE (the operating factor and the financing factor) and the second-level drivers (profit margin, asset turnover, leverage and return spread) on future profitability, depending on the industry spread. The results on the role of contextual factors to improve the estimation of future profitability remain consistent for small and large firms, although adding some nuances.
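
    The operating-financing disaggregation referenced above follows the Nissim-Penman identity, in which ROE equals operating profitability plus a financing contribution driven by leverage and the spread over borrowing costs. The sketch below states that identity; the input values are illustrative, not figures from the paper.

    ```python
    def roe_decomposition(rnoa: float, flev: float, nbc: float) -> float:
        """Nissim-Penman breakdown: ROE = RNOA + FLEV * (RNOA - NBC), where
        RNOA is return on net operating assets (the operating factor),
        FLEV is financial leverage, and NBC is net borrowing cost; the
        product FLEV * spread is the financing factor."""
        spread = rnoa - nbc          # return spread over borrowing cost
        return rnoa + flev * spread

    # Illustrative inputs: 12% operating return, 0.5 leverage, 5% borrowing cost.
    roe = roe_decomposition(0.12, 0.5, 0.05)
    print(round(roe, 4))  # 0.155
    ```

    The sign problems the authors mention arise here directly: when RNOA or the spread turns negative, the financing term can flip sign, which is why they regress the second-order drivers separately by sign.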

  12. Short circuit: Disaggregation of adrenocorticotropic hormone and cortisol levels in HIV-positive, methamphetamine-using men who have sex with men.

    Science.gov (United States)

    Carrico, Adam W; Rodriguez, Violeta J; Jones, Deborah L; Kumar, Mahendra

    2018-01-01

    This study examined whether methamphetamine use alone (METH + HIV-) and methamphetamine use in combination with HIV (METH + HIV+) were associated with hypothalamic-pituitary-adrenal (HPA) axis dysregulation as well as insulin resistance relative to a nonmethamphetamine-using, HIV-negative comparison group (METH-HIV-). Using an intact groups design, serum levels of HPA axis hormones in 46 METH + HIV- and 127 METH + HIV+ men who have sex with men (MSM) were compared to 136 METH-HIV- men. There were no group differences in prevailing adrenocorticotropic hormone (ACTH) or cortisol levels, but the association between ACTH and cortisol was moderated by METH + HIV+ group (β = -0.19, p < .05). Compared to METH-HIV- men, METH + HIV+ MSM displayed 10% higher log10 cortisol levels per standard deviation lower ACTH. Both groups of methamphetamine-using MSM had lower insulin resistance and greater syndemic burden (i.e., sleep disturbance, severe depression, childhood trauma, and polysubstance use disorder) compared to METH-HIV- men. However, the disaggregated functional relationship between ACTH and cortisol in METH + HIV+ MSM was independent of these factors. Further research is needed to characterize the bio-behavioral pathways that explain dysregulated HPA axis functioning in HIV-positive, methamphetamine-using MSM. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Multiple pathways to gender-sensitive budget support in the education sector: Analysing the effectiveness of sex-disaggregated indicators in performance assessment frameworks and gender working groups in (education) budget support to Sub-Saharan Africa countries

    OpenAIRE

    Holvoet, Nathalie; Inberg, Liesbeth

    2013-01-01

    In order to correct for the initial gender blindness of the Paris Declaration and related aid modalities such as general and sector budget support, it has been proposed to integrate a gender dimension into budget support entry points. This paper studies the effectiveness of (joint) gender working groups and the integration of sex-disaggregated indicators and targets in performance assessment frameworks in the context of education sector budget support delivered to a sample of 17 Sub-Saharan Africa...

  14. A study on agricultural drought vulnerability at disaggregated level in a highly irrigated and intensely cropped state of India.

    Science.gov (United States)

    Murthy, C S; Yadav, Manoj; Mohammed Ahamed, J; Laxman, B; Prawasi, R; Sesha Sai, M V R; Hooda, R S

    2015-03-01

    Drought is an important global hazard, challenging the sustainable agriculture and food security of nations. Measuring agricultural drought vulnerability is a prerequisite for targeting interventions to improve and sustain the agricultural performance of both irrigated and rain-fed agriculture. In this study, crop-generic agricultural drought vulnerability status is empirically measured through a composite index approach. The study area is Haryana state, India, a prime agricultural state of the country, characterised by low rainfall, high irrigation support and a stable cropping pattern. By analysing the multiyear rainfall and crop condition data of the kharif crop season (June-October) derived from satellite data, along with soil water holding capacity and groundwater quality, nine contributing indicators were generated for 120 blocks (sub-district administrative units). Composite indices for exposure, sensitivity and adaptive capacity components were generated after assigning variance-based weightages to the respective input indicators. The Agricultural Drought Vulnerability Index (ADVI) was developed through a linear combination of the three component indices. ADVI-based vulnerability categorisation revealed that 51 blocks have vulnerable to very highly vulnerable status. These blocks are located in the southern and western parts of the state, where groundwater quality is saline and the water holding capacity of soils is low. The ADVI map has effectively captured the spatial pattern of agricultural drought vulnerability in the state. Districts with a large number of vulnerable blocks showed considerably larger variability of de-trended crop yields. Correlation analysis reveals that crop condition variability, groundwater quality and soil factors are closely associated with ADVI. The vulnerability index is useful to prioritise the blocks for implementation of long-term drought management plans. 
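
    A composite vulnerability index of the kind described above is a weighted linear combination of normalized component indices, with adaptive capacity entering inversely (higher capacity means lower vulnerability). The sketch below uses equal weights and invented block scores purely for illustration; the study derives its weights from indicator variances.

    ```python
    # Hypothetical normalized (0-1) component indices for three blocks.
    blocks = {
        "block-1": {"exposure": 0.8, "sensitivity": 0.7, "adaptive": 0.3},
        "block-2": {"exposure": 0.4, "sensitivity": 0.5, "adaptive": 0.6},
        "block-3": {"exposure": 0.6, "sensitivity": 0.9, "adaptive": 0.2},
    }

    def advi(b: dict, w=(1 / 3, 1 / 3, 1 / 3)) -> float:
        """Linear combination of the three components; adaptive capacity
        reduces vulnerability, so it enters as (1 - adaptive). Equal
        weights here stand in for the study's variance-based weights."""
        return (w[0] * b["exposure"]
                + w[1] * b["sensitivity"]
                + w[2] * (1 - b["adaptive"]))

    scores = {name: advi(b) for name, b in blocks.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(ranked)  # most vulnerable block first
    ```

    Ranking blocks by the composite score is exactly how such an index prioritises sub-district units for long-term drought management plans.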
There is scope for improving the methodology by adding/fine-tuning the indicators and

  15. Disaggregating Corporate Freedom of Religion

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2015-01-01

    The paper investigates arguments for the idea in recent American Supreme Court jurisprudence that freedom of religion should not simply be understood as an ordinary legal right within the framework of liberal constitutionalism but as an expression of deference by the state and its legal system … to religion as a separate and independent jurisdiction with its own system of law over which religious groups are sovereign. I discuss the relationship between, on the one hand, ordinary rights of freedom of association and freedom of religion and, on the other hand, this idea of corporate freedom of religion…

  16. Estimation of future levels and changes in profitability: The effect of the relative position of the firm in its industry and the operating-financing disaggregation

    Directory of Open Access Journals (Sweden)

    Borja Amor-Tapia

    2014-06-01

    Full Text Available In this paper we examine how the relative position of a firm’s Return on Equity (ROE) within its industry affects the predictability of next-year ROE levels and of ROE changes from year to year. Using the Nissim and Penman breakdown into operating and financing drivers, the significant role of the industry factor is established, although changes in signs suggest subtle non-linear relations in the drivers. Our study avoids problems originating from negative signs by analyzing sorts and by running new regressions with disaggregated second-order drivers separated by sign. Our results thus provide evidence of differing patterns in the influence on future profitability of the first-level drivers of ROE (the operating factor and the financing factor) and the second-level drivers (profit margin, asset turnover, leverage and return spread), depending on the industry spread. The results on the role of contextual factors in improving the estimation of future profitability remain consistent for small and large firms, albeit with some nuances.

  17. Disaggregating the Truth: A Re-Analysis of the Costs and Benefits of Michigan's Public Universities. Professional File. Number 125, Summer 2012

    Science.gov (United States)

    Daun-Barnett, Nathan J.

    2012-01-01

    For more than 50 years, human capital theory has been the cornerstone for understanding the value of investing in individuals' productive capacities in terms of both personal social and economic gain and the collective benefits that accrue to society. Vedder and Denhart (2007) challenge the hypothesis that public investment in higher education…

  18. ANALYSIS OF OUTPATIENT PHYSICIANS' PRESCRIPTION OF DISAGGREGANT THERAPY FOR PATIENTS AFTER ACUTE MYOCARDIAL INFARCTION AND/OR CORONARY ANGIOPLASTY WITH STENT IMPLANTATION WITHIN THE RECVAD REGISTRY

    Directory of Open Access Journals (Sweden)

    A. V. Zagrebelnyi

    2015-01-01

    Full Text Available Objective: to estimate the quality of antiaggregant therapy in patients with coronary heart disease in outpatient settings. Materials and methods. The data of the retrospective outpatient RECVAD registry (3690 patients who lived in Ryazan and its region and had evidence in their outpatient medical records of one of the diagnoses of coronary heart disease, hypertension, chronic heart failure, atrial fibrillation, or their concurrence) were used. Forty-nine patients after acute myocardial infarction (AMI) and/or percutaneous coronary interventions (PCI) with stenting ≤1 year before their inclusion in the registry, who were to undergo dual antiaggregant therapy (DAT) according to current clinical guidelines (CG), were identified among 427 patients after AMI and/or PCI with coronary angioplasty. Contraindications to DAT were simultaneously revealed, and the use of therapy was compared against their presence. Results. Among the 49 patients with indications for DAT, 15 (30.6%) received it in the absence of contraindications and 3 (6.1%) did not receive it in the presence of contraindications, while 25 (51.0%) did not receive DAT despite the absence of contraindications and 6 (12.3%) received the therapy despite contraindications. Conclusion. DAT prescribed by outpatient physicians does not always meet the current CG. There are cases of not using DAT in the presence of obvious indications and, on the contrary, of its use in the presence of contraindications. 

  19. Benefit Incidence Analysis of Government Spending on Public-Private Partnership Schooling under Universal Secondary Education Policy in Uganda

    Science.gov (United States)

    Wokadala, J.; Barungi, M.

    2015-01-01

    The study establishes whether government spending on private universal secondary education (USE) schools is equitable across quintiles disaggregated by gender and by region in Uganda. The study employs benefit incidence analysis tool on the Uganda National Panel Survey (UNPS 2009/10) data to establish the welfare impact of public subsidy on…
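
    Benefit incidence analysis of the kind applied above boils down to attributing public spending to groups in proportion to their use of the subsidised service. A minimal sketch, with invented enrolment figures and an assumed flat per-student subsidy (the UNPS-based study uses far richer data, disaggregated by gender and region):

```python
import numpy as np

# hypothetical counts of USE-subsidised students per welfare quintile (poorest..richest)
enrolment = np.array([120, 150, 180, 200, 150])
unit_subsidy = 100.0                     # assumed flat per-student subsidy

benefits = enrolment * unit_subsidy
shares = benefits / benefits.sum()       # benefit incidence by quintile

# compare the poorest quintile's share with its population share (20%)
progressive = shares[0] > 0.20
```

    If the poorest quintile captures less than its population share, as in this toy example, the subsidy is regressive in the benefit incidence sense.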

  20. Differential Targeting of Hsp70 Heat Shock Proteins HSPA6 and HSPA1A with Components of a Protein Disaggregation/Refolding Machine in Differentiated Human Neuronal Cells following Thermal Stress

    Directory of Open Access Journals (Sweden)

    Ian R. Brown

    2017-04-01

    Full Text Available Heat shock proteins (Hsps) co-operate in multi-protein machines that counter protein misfolding and aggregation and involve DNAJ (Hsp40), HSPA (Hsp70), and HSPH (Hsp105α). The HSPA family is a multigene family composed of inducible and constitutively expressed members. Inducible HSPA6 (Hsp70B') is found in the human genome but not in the genomes of mouse and rat. To advance knowledge of this little-studied HSPA member, the targeting of HSPA6 to stress-sensitive neuronal sites with components of a disaggregation/refolding machine was investigated following thermal stress. HSPA6 targeted the periphery of nuclear speckles (perispeckles) that have been characterized as sites of transcription. However, HSPA6 did not co-localize at perispeckles with DNAJB1 (Hsp40-1) or HSPH1 (Hsp105α). At 3 h after heat shock, HSPA6 co-localized with these members of the disaggregation/refolding machine at the granular component (GC) of the nucleolus. Inducible HSPA1A (Hsp70-1) and constitutively expressed HSPA8 (Hsc70) co-localized at nuclear speckles with components of the machine immediately after heat shock, and at the GC layer of the nucleolus at 1 h with DNAJA1 and BAG-1. These results suggest that HSPA6 exhibits targeting features that are not apparent for HSPA1A and HSPA8.

  1. A state-level analysis of the economic impacts of medical tourism in Malaysia

    OpenAIRE

    Klijs, J.; Ormond, M.E.; Mainil, T.; Peerlings, J.H.M.; Heijman, W.J.M.

    2016-01-01

    In Malaysia, a country that ranks among the world's most recognised medical tourism destinations, medical tourism is identified as a potential economic growth engine for both medical and non-medical sectors. A state-level analysis of economic impacts is important, given differences between states in economic profiles and numbers, origins, and expenditure of medical tourists. We applied input–output (I–O) analysis, based on state-specific I–O data and disaggregated foreign patient data. The an...
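
    Input-output impact analysis of the kind applied here rests on the Leontief inverse: the total output x needed to meet a final-demand shock f satisfies x = (I - A)^(-1) f, where A is the technical-coefficient matrix. A minimal sketch with an invented three-sector matrix (the study's state-specific I-O tables and disaggregated patient data are much richer):

```python
import numpy as np

# hypothetical technical-coefficient matrix A for one state
# sectors: medical services, accommodation, other
A = np.array([[0.10, 0.05, 0.02],
              [0.03, 0.15, 0.04],
              [0.20, 0.25, 0.30]])

# final-demand shock: foreign patients spend 10 on medical care, 5 on hotels
f = np.array([10.0, 5.0, 0.0])

# total (direct + indirect) output: solve (I - A) x = f
x = np.linalg.solve(np.eye(3) - A, f)
total_impact = x.sum()                   # exceeds the 15 units of direct spending
```

    The gap between `total_impact` and the direct spending of 15 is the indirect effect that spills into non-medical sectors.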

  2. Proposed Method for Disaggregation of Secondary Data: The Model for External Reliance of Localities in the Coastal Management Zone (MERLIN-CMZ)

    Science.gov (United States)

    The Model for External Reliance of Localities In (MERLIN) Coastal Management Zones is a proposed solution to allow scaling of variables to smaller, nested geographies. Utilizing a Principal Components Analysis and data normalization techniques, smaller scale trends are linked to ...

  3. Performance, labour flexibility and migrant workers in hotels: An establishment and departmental level analysis

    OpenAIRE

    Yaduma, N; Williams, A; Lockwood, A; Park, S

    2015-01-01

    © 2015. This paper analyses flexible working, and the employment of migrants, as determinants of performance in hotels, utilising a highly disaggregated data set of actual hours worked and outputs, on a monthly basis, over an 8 year period for 25 establishments within a single firm. It examines not only inter-establishment, but also intra-establishment (departmental) variations in performance. The analysis also systematically compares the findings based on financial versus physical measures, ...

  4. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, varying both the replacement and the aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of variables on their efficiency and its “sustainability”.

  5. Analysis of uncertainties in the estimates of nitrous oxide and methane emissions in the UK's greenhouse gas inventory for agriculture

    Science.gov (United States)

    Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.

    2014-01-01

    The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Before now, no detailed assessment of the uncertainties in the estimates of emissions had been done. We used Monte Carlo simulation to do such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the estimate for the emissions of methane emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is) correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
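
    The Monte Carlo approach described above can be sketched in a few lines: draw each uncertain emission factor from its assumed distribution, push the draws through the Tier 1-style calculation, and read the spread of the results. The activity data, distributions and factor values below are illustrative, not the inventory's:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# simplified Tier 1 N2O calculation: emissions = N applied * EF1 + N leached * EF5
n_applied, n_leached = 200.0, 60.0       # kt N, invented activity data
# emission factors drawn as lognormals around IPCC-style defaults (assumed spreads)
ef1 = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)
ef5 = rng.lognormal(mean=np.log(0.0075), sigma=0.5, size=n)

emissions = n_applied * ef1 + n_leached * ef5
cv = emissions.std() / emissions.mean()  # overall relative uncertainty

# halving the spread of EF1 narrows the total uncertainty, as in the paper's
# sensitivity result for England and Scotland
ef1_half = rng.lognormal(mean=np.log(0.01), sigma=0.15, size=n)
em_half = n_applied * ef1_half + n_leached * ef5
cv_half = em_half.std() / em_half.mean()
```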

  6. Democratic Legitimacy, International Institutions and Cosmopolitan Disaggregation

    OpenAIRE

    Álvarez, David

    2016-01-01

    The paper explores Thomas Christiano’s conception of international legitimacy. It argues that his account fails to fully appreciate the instrumental constraints that international legitimacy imposes on national democracies. His model of Fair Voluntary Association articulates the transmission of political legitimacy through a double aggregation of political consent. First, it “pools” its authority from the foundational cosmopolitan claims of individuals involved in a deeply i...

  7. Fuel demand and fuel efficiency in the US commercial-airline industry and the trucking industry: an analysis of trends and implications. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-03-31

    A study of trends in fuel use and efficiency in the US commercial airlines industry is extended back to 1967 in order to compare the relative contributions of the factors influencing efficiency during a period of stable fuel prices (1967 to 1972) versus a period of fuel price growth (1973 to 1980). A similar analysis disaggregates the components of truck efficiency and evaluates their relative impact on fuel consumption in the trucking industry. (LEW)

  8. A pretreatment method for grain size analysis of red mudstones

    Science.gov (United States)

    Jiang, Zaixing; Liu, Li'an

    2011-11-01

    Traditional sediment disaggregation methods work well for loose mud sediments, but not for mudstones tightly cemented by ferric oxide minerals. In this paper, a new pretreatment method for analyzing the grain size of red mudstones is presented. The experimental samples are Eocene red mudstones from the Dongying Depression, Bohai Bay Basin. The red mudstones are composed mainly of clay minerals, clastic sediments and ferric oxides that make the mudstones red and tightly compacted. The procedure of the method is as follows. Firstly, samples of the red mudstones were crushed into fragments 0.6-0.8 mm in diameter; secondly, the CBD (citrate-bicarbonate-dithionite) treatment was used to remove ferric oxides so that the cementation of intra-aggregates and inter-aggregates became weakened, and then 5% dilute hydrochloric acid was added to further remove the cements; thirdly, the fragments were further ground with a rubber pestle; lastly, an ultrasonicator was used to disaggregate the samples. After the treatment, the samples could be used for grain size analysis or for other geological analyses of sedimentary grains. Compared with other pretreatment methods for size analysis of mudstones, this proposed method is more effective and has higher repeatability.

  9. Success in Undergraduate Engineering Programs: A Comparative Analysis by Race and Gender

    Science.gov (United States)

    Lord, Susan

    2010-03-01

    Interest in increasing the number of engineering graduates in the United States and promoting gender equality and diversification of the profession has encouraged considerable research on women and minorities in engineering programs. Drawing on a framework of intersectionality theory, this work recognizes that women of different ethnic backgrounds warrant disaggregated analysis because they do not necessarily share a common experience in engineering education. Using a longitudinal, comprehensive data set of more than 79,000 students who matriculated in engineering at nine universities in the Southeastern United States, this research examines how the six-year graduation rates of engineering students vary by disaggregated combinations of gender and race/ethnicity. Contrary to the popular opinion that women drop out of engineering at higher rates, our results show that Asian, Black, Hispanic, Native American, and White women who matriculate in engineering are as likely as men to graduate in engineering in six years. In fact, Asian, Black, Hispanic, and Native American women engineering matriculants graduate at higher rates than men, and the difference for White students is small: 54 percent of White women engineering matriculants graduate in six years, compared with 53 percent of White men. For male and female engineering matriculants of all races, the most likely destination six years after entering college is graduation within engineering. This work underscores the importance of research disaggregated by race and gender and points to the critical need for more recruitment of women into engineering, as the low representation of women in engineering education is primarily a reflection of their low representation at matriculation.

  10. Analysis

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Liu, Wen; Zhang, Xiliang

    2014-01-01

    three major technological changes: energy savings on the demand side, efficiency improvements in energy production, and the replacement of fossil fuels by various sources of renewable energy. Consequently, the analysis of these systems must include strategies for integrating renewable sources...

  11. Stepwise data envelopment analysis (DEA); choosing variables for measuring technical efficiency in Norwegian electricity distribution

    International Nuclear Information System (INIS)

    Kittelsen, S.A.C.

    1993-04-01

    Electric power distribution is an activity that in principle delivers a separate product to each customer. A specification of products for a utility as a whole leads potentially to a large number of product aspects including topographic and climatic conditions, and the level of disaggregation of factors and products may give the production and cost functions a high dimensionality. Some aggregation is therefore necessary. Non-parametric methods like Data Envelopment Analysis (DEA) have the advantage that they may give meaningful results when parametric methods would not have enough degrees of freedom, but will have related problems if the variables are collinear or are irrelevant. Although aggregate efficiency measures will not be much affected, rates of transformation will be corrupted and observations with extreme values may be measured as efficient by default. Little work has been done so far on the statistical properties of the non-parametric efficiency measure. This paper utilizes a suggestion by Rajiv Banker to measure the significance of the change in results when disaggregating or introducing an extra variable, and shows how one can let the data participate in deciding which variables should be included in the analysis. 32 refs., 7 figs., 4 tabs
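
    The stepwise procedure above hinges on recomputing DEA efficiency scores as variables are aggregated, disaggregated or added. As a hedged illustration (not the paper's own code), an input-oriented, constant-returns-to-scale efficiency score can be obtained from a small linear program; the two-utility data set below is invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    min theta  s.t.  X' lam <= theta * x_o,  Y' lam >= y_o,  lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lam_1..lam_n]
    # inputs:  X' lam - theta * x_o <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(X.shape[1])
    # outputs: y_o - Y' lam <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# two utilities delivering the same output; the second uses twice the input
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
eff = [dea_ccr_input(X, Y, o) for o in range(2)]
```

    Rerunning such a score with and without a candidate variable is exactly the comparison the stepwise selection evaluates for significance.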

  12. Chemosensitivity of human small cell carcinoma of the lung detected by flow cytometric DNA analysis of drug-induced cell cycle perturbations in vitro

    DEFF Research Database (Denmark)

    Engelholm, S A; Spang-Thomsen, M; Vindeløv, L L

    1986-01-01

    A method based on detection of drug-induced cell cycle perturbation by flow cytometric DNA analysis has previously been described in Ehrlich ascites tumors as a way to estimate chemosensitivity. The method is extended to test human small-cell carcinoma of the lung. Three tumors with different...... sensitivities to melphalan in nude mice were used. Tumors were disaggregated by a combined mechanical and enzymatic method and thereafter have incubated with different doses of melphalan. After incubation the cells were plated in vitro on agar, and drug induced cell cycle changes were monitored by flow...

  13. A Categorical Content Analysis of Highly Cited Literature Related to Trends and Issues in Special Education.

    Science.gov (United States)

    Arden, Sarah V; Pentimonti, Jill M; Cooray, Rochana; Jackson, Stephanie

    2017-07-01

    This investigation employs categorical content analysis processes as a mechanism to examine trends and issues in a sampling of highly cited (100+) literature in special education journals. The authors had two goals: (a) broadly identifying trends across publication type, content area, and methodology and (b) specifically identifying articles with disaggregated outcomes for students with learning disabilities (LD). Content analyses were conducted across highly cited (100+) articles published during a 20-year period (1992-2013) in a sample ( n = 3) of journals focused primarily on LD, and in one broad, cross-categorical journal recognized for its impact in the field. Results indicated trends in the article type (i.e., commentary and position papers), content (i.e., reading and behavior), and methodology (i.e., small proportions of experimental and quasi-experimental designs). Results also revealed stability in the proportion of intervention research studies when compared to previous analyses and a decline in the proportion of those that disaggregated data specifically for students with LD.

  14. The use of SVAR analysis in determining the effects of fiscal shocks in Croatia

    Directory of Open Access Journals (Sweden)

    Rafael Ravnik

    2011-03-01

    Full Text Available In this paper we use multivariate Blanchard-Perotti SVAR methodology to analyze disaggregated short-term effects of fiscal policy on economic activity, inflation and short-term interest rates. The results suggest that the effects of government expenditure shocks and the shock of government revenues are relatively the highest on interest rates and the lowest on inflation. A tax shock in the short term increases the inflation rate and also decreases the short-term interest rate, and after one year stabilization occurs at the initial level, while spending shock leads to a reverse effect. The effects of fiscal policies on the proxy variable of output, i.e. industrial production, are less economically intuitive, because the shock of expenditure decreases and revenue shock permanently increases industrial production. The empirical result shows that a tax shock has a permanent effect on future taxes; while future levels of government spending are not related to current expenditure shocks. Interactions between the components of fiscal policy are also examined and it is concluded that a tax shock increases expenditures permanently, while an expenditure shock does not significantly affect government revenues, which is consistent with the tendency of growth in public debt. Furthermore, it was found that government revenue and expenditure shocks do not have a mirror effect, which justifies disaggregated analysis of fiscal policy shocks.
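
    Once a structural VAR is identified, impulse responses are traced by propagating a one-off structural shock through the autoregressive dynamics. The sketch below shows only that mechanical step for a VAR(1) with invented coefficients; it does not implement the Blanchard-Perotti identification itself, which pins down the impact matrix B from institutional information:

```python
import numpy as np

# hypothetical reduced-form VAR(1) coefficient matrix for (spending, revenue, output)
A = np.array([[0.50, 0.10, 0.00],
              [0.05, 0.60, 0.10],
              [0.20, -0.10, 0.70]])
# hypothetical structural impact matrix B (would come from the identification step)
B = np.array([[1.0, 0.0, 0.0],
              [0.3, 1.0, 0.0],
              [0.1, 0.2, 1.0]])

def irf(A, B, shock, horizons=8):
    """Response of all variables to one unit structural shock: Psi_h = A^h B e."""
    e = np.zeros(B.shape[1])
    e[shock] = 1.0
    out, Ah = [], np.eye(A.shape[0])
    for _ in range(horizons + 1):
        out.append(Ah @ B @ e)
        Ah = Ah @ A
    return np.array(out)

responses = irf(A, B, shock=0)   # responses to a government-spending shock
```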

  15. Energy rent and public policy: an analysis of the Canadian coal industry

    International Nuclear Information System (INIS)

    Gunton, Thomas

    2004-01-01

    This paper analyses issues in resource rent through a case study of the Canadian coal industry. A model of the coal industry is constructed to estimate the magnitude of rent and distribution of coal rent between government and industry over the 30-year period from 1970 to 2000. Disaggregation of results by coal sector shows that rent varied widely, with one sector generating substantial rent and other sectors incurring large losses. The pattern of development of the coal sector followed what can be termed a 'rent dissipation cycle' in which the generation of rent in the profitable sector created excessively optimistic expectations that encouraged new entrants to dissipate rent by developing uneconomic capacity. The analysis also shows that the system used to collect rent was ineffective. The public owner collected only one-third of the rent on the profitable mines and collected royalty revenue from the unprofitable mines even though no rent was generated. The case study illustrates that improvements in private sector planning based on a better appreciation of resource market fundamentals, elimination of government subsidies that encourage uneconomic expansion and more effective rent collection are all needed to avoid rent dissipation and increase the benefits of energy development in producing jurisdictions. The study also illustrates that estimates of rent in the resource sector should disaggregate results by sector and make adjustments for market imperfections to accurately assess the magnitude of potential rent

  16. Randomizing world trade. II. A weighted network analysis

    Science.gov (United States)

    Squartini, Tiziano; Fagiolo, Giorgio; Garlaschelli, Diego

    2011-10-01

    Based on the misleading expectation that weighted network properties always offer a more complete description than purely topological ones, current economic models of the International Trade Network (ITN) generally aim at explaining local weighted properties, not local binary ones. Here we complement our analysis of the binary projections of the ITN by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed and undirected, aggregated and disaggregated) cannot be traced back to local country-specific properties, which are therefore of limited informativeness. Our two papers show that traditional macroeconomic approaches systematically fail to capture the key properties of the ITN. In the binary case, they do not focus on the degree sequence and hence cannot characterize or replicate higher-order properties. In the weighted case, they generally focus on the strength sequence, but the knowledge of the latter is not enough in order to understand or reproduce indirect effects.
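
    The distinction the paper draws between binary and weighted properties is easy to state numerically: a country's degree counts its trade partners, while its strength sums its link weights, and the two sequences carry different information about the network. A toy example with an invented symmetric trade matrix:

```python
import numpy as np

# toy undirected trade weight matrix (symmetric, zero diagonal)
W = np.array([[0, 5, 0, 1],
              [5, 0, 2, 0],
              [0, 2, 0, 4],
              [1, 0, 4, 0]], dtype=float)

degree = (W > 0).sum(axis=1)    # binary topology: number of partners
strength = W.sum(axis=1)        # weighted property: total trade per country
```

    Here every country has the same degree but a different strength, so a model that reproduces only the strength sequence says nothing about which links exist, which is the paper's point about the limited informativeness of local weighted properties.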

  17. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    NARCIS (Netherlands)

    Khan, M.R.; Bie, de C.A.J.M.; Keulen, van H.; Smaling, E.M.A.; Real, R.

    2010-01-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-Daily satellite image time series of

  18. Illicit Financial Flows and Governance : The Importance of Disaggregation

    OpenAIRE

    Reuter, Peter

    2017-01-01

    After decades of billion dollar scandals around long-serving dictators removing vast fortunes from their impoverished nations, the broader phenomenon of which this is part has acquired a label: Illicit Financial Flows (IFFs). The term encompasses the international transfer of moneys generated by bribery, tax evasion and illegal markets. IFFs have been the object of much attention from high...

  19. Recycling energy taxes. Impacts on a disaggregated labour market

    International Nuclear Information System (INIS)

    Bosello, F.; Carraro, C.

    2001-01-01

    This paper analyses the impacts of energy taxes whose revenue is recycled to reduce gross wages and increase employment. The main novel feature of this paper, is the attempt to assess the effectiveness of this fiscal reform by using a labour market model in which both skilled and unskilled workers are used in the production process. This segmentation enables us to compare a policy which aims at reducing unskilled workers' wages, as in the original Delors' White book, with a policy in which the environmental fiscal revenue is used to reduce the gross wage of all workers. Moreover, two policy scenarios will be considered. A non-co-operative one in which each country determines the optimal domestic energy tax to achieve a given employment target and a co-operative one, in which the energy taxes are harmonised to equalise marginal abatement costs in the EU and in which the employment target is set for the EU. Our results show that: (1) an employment double dividend can be achieved in the short run only, even if a trade-off between environment and employment always exists; (2) the effect on employment is larger when the fiscal revenue is recycled into all workers' gross wages rather than into unskilled workers only; (3) a co-operative policy leads to even larger benefits in terms of employment provided that an adequate redistribution of fiscal revenues is adopted by EU countries

  20. Accountability after Structural Disaggregation: Comparing Agency Accountability Arrangements

    NARCIS (Netherlands)

    Overman, Sjors; Van Genugten, Marieke; Van Thiel, Sandra

    2015-01-01

    New accountability instruments – performance indicators, audits, and financial incentives – are expected to replace traditional accountability instruments in NPM reforms. We test this expectation by looking at the accountability arrangements of semi-autonomous agencies as a typical example of NPM

  1. Estimation of disaggregated canal water deliveries in Pakistan using geomatics

    NARCIS (Netherlands)

    Mobin-ud-Din, A.; Stein, A.; Bastiaanssen, W.G.M.

    2004-01-01

    Lack of accurate information on water distribution within an irrigation system is a major roadblock for effective management of scarce water resources. Numerical techniques to estimate canal water distribution require large amounts of data with respect to hydraulic parameters and operation of the

  2. Climate change vulnerability in Ethiopia : disaggregation of Tigray Region

    NARCIS (Netherlands)

    Gidey Gebrehiwot, T.; Gidey, T.G.; van der Veen, A.

    2013-01-01

    Climate change and variability severely affect rural livelihoods and agricultural productivity, yet they are causes of stress vulnerable rural households have to cope with. This paper investigated farming communities' vulnerability to climate change and climate variability across 34

  3. 34 CFR 200.7 - Disaggregation of data.

    Science.gov (United States)

    2010-07-01

    ... minimum group size, interact to affect the statistical reliability of the data and to ensure the maximum... group size meets the requirements of paragraph (a)(2)(i) of this section; (B) An explanation of how... size to produce statistically reliable results, the State must still include students in that subgroup...

  4. Monetary dynamics in the euro area : a disaggregate panel approach

    NARCIS (Netherlands)

    Liu, J.; Kool, C.J.M.

    In this paper, we use panel cointegration estimation to analyze the determinants of heterogeneous monetary dynamics in ten euro area member countries over the period 1999-2013. In particular, we investigate the role of real house prices, real equity prices and cross border bank credit. For the

  5. Disaggregated export demand of Malaysia: evidence from the electronics industry

    OpenAIRE

    Koi Nyen Wong

    2008-01-01

    This study estimates the determinants of foreign demand for Malaysia's top five electronics exports by SITC (Standard International Trade Classification) product groups from 1990 to 2001. Cointegration results indicate a unique long-run relationship between export demand for electronic products and relative prices and foreign income. Both the estimated long-run income and price elasticities of export demand are greater than 1, conforming to a pattern found in most fast-growing economies and i...

  6. Clustering disaggregated load profiles using a Dirichlet process mixture model

    International Nuclear Information System (INIS)

    Granell, Ramon; Axon, Colin J.; Wallom, David C.H.

    2015-01-01

    Highlights: • We show that the Dirichlet process mixture model is scalable. • Our model does not require the number of clusters as an input. • Our model creates clusters only by the features of the demand profiles. • We have used both residential and commercial data sets. - Abstract: The increasing availability of substantial quantities of power-use data in both the residential and commercial sectors raises the possibility of mining the data to the advantage of both consumers and network operations. We present a Bayesian non-parametric model to cluster load profiles from households and business premises. Evaluations show that our model performs as well as other popular clustering methods, but unlike most other methods it does not require the number of clusters to be predetermined by the user. We used the so-called ‘Chinese restaurant process’ method to solve the model, making use of the Dirichlet-multinomial distribution. The number of clusters grew logarithmically with the quantity of data, making the technique suitable for scaling to large data sets. We were able to show that the model could distinguish features such as the nationality, household size, and type of dwelling between the cluster memberships.
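
    The 'Chinese restaurant process' view can be sketched directly: each new observation joins an existing cluster with probability proportional to its size, or opens a new cluster with probability proportional to the concentration parameter alpha, which is why the number of clusters grows roughly logarithmically with the data. A minimal prior simulation (the full model additionally weighs each table by a likelihood over the load-profile features):

```python
import random

def crp_cluster_sizes(n, alpha, seed=0):
    """Simulate Chinese restaurant process seating for n customers.
    Customer i joins table t with probability size_t / (i + alpha),
    or a new table with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    tables = []                      # sizes of existing clusters
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1
                break
        else:
            tables.append(1)         # open a new table (cluster)
    return tables

sizes = crp_cluster_sizes(5000, alpha=2.0)
# expected cluster count is roughly alpha * log(n), far below n
```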

  7. Effects of volcanic deposit disaggregation on exposed water composition

    Science.gov (United States)

    Back, W. E.; Genareau, K. D.

    2016-12-01

Explosive volcanic eruptions produce a variety of hazards. Pyroclastic material can be introduced to water through ash fallout, pyroclastic flows entering water bodies, and/or lahars. Remobilization of tephras can occur soon after eruption or centuries later, introducing additional pyroclastic material into the environment. Introduction of pyroclastic material may alter the dissolved element concentration and pH of exposed waters, potentially impacting drinking water supplies, agriculture, and ecology. This study focuses on the long-term impacts of volcanic deposits on water composition due to the mechanical breakup of volcanic deposits over time. Preliminary work has shown that mechanical milling of volcanic deposits will cause significant increases in dissolved element concentrations, conductivity, and pH of aqueous solutions. Pyroclastic material from seven eruption sites was collected and mechanically milled to produce fine grain sizes; the sample suite spans felsic (Taupo, Valles Caldera), intermediate (Soufriere Hills, Ruapehu), mafic (Lathrop Wells) and ultramafic (mantle xenoliths) volcanic deposits. Lathrop Wells has an average bulk concentration of 49.15 wt.% SiO2, 6.11 wt.% MgO, and 8.39 wt.% CaO and produces leachate concentrations of 85.69 mg/kg for Ca and 37.22 mg/kg for Mg. Taupo and Valles Caldera samples have a bulk concentration of 72.9 wt.% SiO2, 0.59 wt.% MgO, and 1.48 wt.% CaO, and produce leachate concentrations of 4.08 mg/kg for Ca and 1.56 mg/kg for Mg. Similar testing will be conducted on the intermediate and ultramafic samples to test the hypothesis that bulk magma composition and mineralogy will directly relate to the increased dissolved element concentration of exposed waters. The measured effects on aqueous solutions will aid in evaluation of impacts to marine and freshwater systems exposed to volcanic deposits.

  8. Neighborhood size of training data influences soil map disaggregation

    Science.gov (United States)

    Soil class mapping relies on the ability of sample locations to represent portions of the landscape with similar soil types; however, most digital soil mapping (DSM) approaches intersect sample locations with one raster pixel per covariate layer regardless of pixel size. This approach does not take ...

  9. Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling

    Science.gov (United States)

    Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...

  10. Disaggregated Imaging Spacecraft Constellation Optimization with a Genetic Algorithm

    Science.gov (United States)

    2014-03-27

Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in partial fulfillment of the requirements for the degree... "distinct modules which, once 'assembled' on orbit, deliver the capability of the original monolithic system [5]." Jerry Sellers includes a comic in

  11. Global Rice Atlas: Disaggregated seasonal crop calendar and production

    NARCIS (Netherlands)

    Balanza, Jane Girly; Gutierrez, Mary Anne; Villano, Lorena; Nelson, A.D.; Zwart, S.J.; Boschetti, Mirco; Koo, Jawoo; Reinke, Russell; Murty, M. V.R.; Laborte, Alice G.

    2014-01-01

    Purpose: Rice is an important staple crop cultivated in more than 163 million ha globally. Although information on the distribution of global rice production is available by country and, at times, at subnational level, information on its distribution within a year is often lacking in different rice

  12. Convincing State-Builders? Disaggregating Internal Legitimacy in Abkhazia

    OpenAIRE

Bakke, K. M.; O'Loughlin, J.; Toal, G.; Ward, M. D.

    2013-01-01

    De facto states, functional on the ground but unrecognized by most states, have long been black boxes for systematic empirical research. This study investigates de facto states’ internal legitimacy—people's confidence in the entity itself, the regime, and institutions. While internal legitimacy is important for any state, it is particularly important for de facto states, whose lack of external legitimacy has made internal legitimacy integral to their quest for recognition. We propose that the...

  13. Estimating Intermittent Individual Spawning Behavior via Disaggregating Group Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — In order to understand fish biology and reproduction it is important to know the fecundity patterns of individual fish, as frequently established by recording the...

  14. Geographic distribution of hospital beds throughout China: a county-level econometric analysis.

    Science.gov (United States)

    Pan, Jay; Shallcross, David

    2016-11-08

    Geographical distribution of healthcare resources is an important dimension of healthcare access. Little work has been published on healthcare resource allocation patterns in China, despite public equity concerns. Using national data from 2043 counties, this paper investigates the geographic distribution of hospital beds at the county level in China. We performed Gini coefficient analysis to measure inequalities and ordinary least squares regression with fixed provincial effects and additional spatial specifications to assess key determinants. We found that provinces in west China have the least equitable resource distribution. We also found that the distribution of hospital beds is highly spatially clustered. Finally, we found that both county-level savings and government revenue show a strong positive relationship with county level hospital bed density. We argue for more widespread use of disaggregated, geographical data in health policy-making in China to support the rational allocation of healthcare resources, thus promoting efficiency and equity.
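The Gini-coefficient step of such an analysis can be sketched in a few lines; the county bed densities below are invented for illustration, not the Chinese data used in the paper:

```python
def gini(values):
    """Gini coefficient of a non-negative distribution (0 = perfect equality,
    values close to 1 = extreme inequality), via the rank-weighted formula:
    G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n, 1-based ascending ranks."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Hypothetical county-level hospital beds per 1000 residents:
beds = [1.2, 2.5, 0.8, 4.0, 2.2]
g = gini(beds)
```

The same function applies unchanged to any county-level resource measure (beds, physicians, budgets), which is what makes the Gini a convenient cross-provincial comparator.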

  15. The multiple decrement life table: a unifying framework for cause-of-death analysis in ecology.

    Science.gov (United States)

    Carey, James R

    1989-01-01

The multiple decrement life table is used widely in the human actuarial literature and provides statistical expressions for mortality in three different forms: i) the life table from all causes-of-death combined; ii) the life table disaggregated into selected cause-of-death categories; and iii) the life table with particular causes and combinations of causes eliminated. The purpose of this paper is to introduce the multiple decrement life table to the ecological literature by applying the methods to published death-by-cause information on Rhagoletis pomonella. Interrelations between the current approach and conventional tools used in basic and applied ecology are discussed, including the conventional life table, Key Factor Analysis, and Abbott's Correction used in toxicological bioassay.
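Forms i) and ii) above amount to straightforward cohort bookkeeping. A minimal sketch with an invented insect cohort (the cause names and counts are placeholders, not the Rhagoletis pomonella data analyzed in the paper):

```python
# deaths[x] maps each cause of death to the deaths at age class x;
# the cohort starts with l0 individuals.
l0 = 1000
deaths = [
    {"parasitism": 50,  "predation": 100, "other": 20},
    {"parasitism": 80,  "predation": 150, "other": 40},
    {"parasitism": 120, "predation": 200, "other": 60},
]

def all_cause_survivorship(l0, deaths):
    """Form i): l(x), survivors at the start of each age class,
    all causes of death combined."""
    lx = [l0]
    for row in deaths:
        lx.append(lx[-1] - sum(row.values()))
    return lx

def crude_prob_of_death(l0, deaths, cause):
    """Form ii): fraction of the original cohort eventually dying of `cause`
    in the presence of all competing causes."""
    return sum(row[cause] for row in deaths) / l0
```

Form iii) (cause elimination) additionally requires an independence assumption about the competing risks, which is where the actuarial machinery the paper imports becomes necessary.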

  16. Econometrics analysis of consumer behaviour: a linear expenditure system applied to energy

    International Nuclear Information System (INIS)

    Giansante, C.; Ferrari, V.

    1996-12-01

In the economics literature, the specification of expenditure systems is a well-known subject. The problem is to define a coherent representation of consumer behaviour through functional forms that are easy to estimate. This work uses the Stone-Geary Linear Expenditure System and its multi-level decision-process version. The Linear Expenditure System is characterized by an easily computed estimation procedure, and its multi-level specification allows for substitution and complementarity relations between goods. Moreover, the separability condition on the utility function, on which the Utility Tree Approach is based, justifies an estimation procedure in two or more steps. This permits a high degree of disaggregation of expenditure categories, impossible to reach with the plain Linear Expenditure System. The analysis is applied to the energy sectors.
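A minimal numerical sketch of the Stone-Geary LES described above; all parameter values are invented for illustration, not estimates from the paper:

```python
def les_demands(prices, income, gamma, beta):
    """Quantities demanded under the Stone-Geary LES:
    q_i = gamma_i + (beta_i / p_i) * (Y - sum_j p_j * gamma_j),
    where gamma_i are subsistence quantities and beta_i are marginal
    budget shares (which must sum to 1)."""
    assert abs(sum(beta) - 1.0) < 1e-9, "marginal budget shares must sum to 1"
    committed = sum(p * g for p, g in zip(prices, gamma))  # subsistence spending
    supernumerary = income - committed                     # discretionary income
    return [g + b / p * supernumerary
            for p, g, b in zip(prices, gamma, beta)]

prices = [2.0, 1.0, 4.0]   # hypothetical: food, other goods, energy
gamma = [3.0, 5.0, 1.0]    # subsistence quantities
beta = [0.3, 0.5, 0.2]     # marginal budget shares
q = les_demands(prices, 100.0, gamma, beta)
```

By construction the demands exhaust the budget (sum of p_i·q_i equals income), which is the adding-up property that makes the LES convenient to estimate linearly.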

  17. The Relative Importance of the Service Sector in the Mexican Economy: A Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Flores

    2014-01-01

Full Text Available We conduct a study of the secondary and tertiary sectors with the goal of highlighting the relative importance of services in the Mexican economy. We consider a time series analysis approach designed to identify the stochastic nature of the series, as well as to define their long-run and short-run relationships with Gross Domestic Product (GDP). The results of cointegration tests suggest that, for the most part, activities in the secondary and tertiary sectors share a common trend with GDP. Interestingly, the long-run elasticities of GDP with respect to services are on average larger than those with respect to secondary activities. Common cycle tests identify the existence of common cycles between GDP and the disaggregated sectors, as well as with manufacturing, commerce, real estate and transportation. In this case, the short-run elasticities of secondary activities are on average larger than those corresponding to services.

  18. Changing Attitudes Toward Euthanasia and Suicide for Terminally Ill Persons, 1977 to 2016: An Age-Period-Cohort Analysis.

    Science.gov (United States)

    Attell, Brandon K

    2017-01-01

    Several longitudinal studies show that over time the American public has become more approving of euthanasia and suicide for terminally ill persons. Yet, these previous findings are limited because they derive from biased estimates of disaggregated hierarchical data. Using insights from life course sociological theory and cross-classified logistic regression models, I better account for this liberalization process by disentangling the age, period, and cohort effects that contribute to longitudinal changes in these attitudes. The results of the analysis point toward a continued liberalization of both attitudes over time, although the magnitude of change was greater for suicide compared with euthanasia. More fluctuation in the probability of supporting both measures was exhibited for the age and period effects over the cohort effects. In addition, age-based differences in supporting both measures were found between men and women and various religious affiliations.

19. Energy production from tannery solid wastes: thermal balance, models of process yields and economic analysis; Produzione di energia da residui conciari: bilancio termico, modelli di resa di processo e analisi di fattibilità

    Energy Technology Data Exchange (ETDEWEB)

Manzo, G.; Grasso, G.; Bufalo, G. [Stazione Sperimentale per l'Industria delle Pelli e Materie Concianti, Naples (Italy)

    1996-01-01

This paper deals with a modeling approach to the recovery of thermal energy, chromium and compost from tannery solid wastes, by incineration to ash and biomethanization to digested biomass. A thermal balance over the whole Italian industrial production of tanning residues first quantifies the scale of the problem. A model was then developed to compute the caloric content of the different kinds of residues, starting from their elementary composition. Models of the process yields, for both incineration and biomethanization, were also derived. Finally, an economic cost analysis of the incineration process is presented, disaggregated into the single cost elements. This analysis is based on the previously obtained data on both heat and chromium recovery and on mass-balance data of a typical tanning process (chromium shoe upper produced from salted bovine hide).

  20. Solid KHT tumor dispersal for flow cytometric cell kinetic analysis

    International Nuclear Information System (INIS)

    Pallavicini, M.G.; Folstad, L.J.; Dunbar, C.

    1981-01-01

A bacterial neutral protease was used to disperse KHT solid tumors into single cell suspensions suitable for routine cell kinetic analysis by flow cytometry and for clonogenic cell survival. Neutral protease disaggregation under conditions which would be suitable for routine tumor dispersal was compared with a trypsin/DNase procedure. Cell yield, clonogenic cell survival, DNA distributions of untreated and drug-perturbed tumors, rates of radioactive precursor incorporation during the cell cycle, and preferential cell cycle phase-specific cell loss were investigated. Tumors dispersed with neutral protease yielded approximately four times more cells than those dispersed with trypsin/DNase and approximately a 1.5-fold higher plating efficiency in a semisolid agar system. Quantitative analysis of DNA distributions obtained from untreated and cytosine-arabinoside-perturbed tumors produced similar results with both dispersal procedures. The rates of incorporation of tritiated thymidine during the cell cycle were also similar with neutral protease and trypsin/DNase dispersal. Preferential phase-specific cell loss was not observed with either technique. We find that neutral protease provides good single cell suspensions of the KHT tumor for cell survival measurements and for cell kinetic analysis of drug-induced perturbations by flow cytometry. In addition, the high cell yields facilitate electronic cell sorting where large numbers of cells are often required.

  1. Value of a statistical life in road safety: a benefit-transfer function with risk-analysis guidance based on developing country data.

    Science.gov (United States)

    Milligan, Craig; Kopp, Andreas; Dahdah, Said; Montufar, Jeannette

    2014-10-01

We model a value of statistical life (VSL) transfer function for application to road-safety engineering in developing countries through an income-disaggregated meta-analysis of scope-sensitive stated preference VSL data. The income-disaggregated meta-analysis treats developing country and high-income country data separately. Previous transfer functions are based on aggregated datasets that are composed largely of data from high-income countries. Recent evidence, particularly with respect to the income elasticity of VSL, suggests that the aggregate approach is deficient because it does not account for a possible change in income elasticity across income levels. Our dataset (a minor update of the OECD database published in 2012) includes 123 scope-sensitive VSL estimates from developing countries and 185 scope-sensitive estimates from high-income countries. The transfer function for developing countries gives VSL = 1.3732E-4 × (GDP per capita)^2.478, with VSL and GDP per capita expressed in 2005 international dollars (an international dollar being a notional currency with the same purchasing power as the U.S. dollar). The function can be applied for low- and middle-income countries with GDPs per capita above $1268 (with a data gap for very low-income countries), whereas it is not useful above a GDP per capita of about $20,000. The corresponding function built using high-income country data is VSL = 8.2474E+3 × (GDP per capita)^0.6932; it is valid for high-income countries but over-estimates VSL for low- and middle-income countries. The research finds two principal significant differences between the transfer functions modeled using developing-country and high-income-country data, supporting the disaggregated approach. The first of these differences relates to between-country VSL income elasticity, which is 2.478 for the developing country function and 0.693 for the high-income function; the difference is significant at p economic performance measures for road
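The two transfer functions stated in the abstract can be applied directly; a sketch, with the validity bounds taken from the text (all figures in 2005 international dollars):

```python
def vsl_developing(gdp_per_capita):
    """Developing-country transfer function:
    VSL = 1.3732e-4 * (GDP per capita) ** 2.478,
    stated as applicable roughly for GDP per capita in [$1268, ~$20,000]."""
    if not 1268 <= gdp_per_capita <= 20000:
        raise ValueError("outside the function's stated validity range")
    return 1.3732e-4 * gdp_per_capita ** 2.478

def vsl_high_income(gdp_per_capita):
    """High-income counterpart: VSL = 8.2474e3 * (GDP per capita) ** 0.6932."""
    return 8.2474e3 * gdp_per_capita ** 0.6932
```

For example, at a GDP per capita of $10,000 the developing-country function yields a VSL a little over one million international dollars, illustrating the much steeper income elasticity (2.478 vs 0.693) that motivates the disaggregated approach.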

2. Statistical Analysis of the Conditioning Factors of Urban Electric Consumption

    International Nuclear Information System (INIS)

    Segura D'Rouville, Juan Joel; Suárez Carreño, Franyelit María

    2017-01-01

This research work presents an analysis of the most important factors conditioning urban residential electricity consumption, identifying the quantitative parameters that condition it. This sector was chosen because disaggregated information is available on the main social and technological factors that determine its behavior and growth, with the objective of informing policies for the management of electricity consumption. Electrical demand, considered as the sum of the powers of all the equipment in use at each instant of a full day, is related to electricity consumption, which is simply the power demanded by a given consumer multiplied by the time over which that demand is maintained. In this report we propose the design of a probabilistic model for predicting electricity consumption, taking into account the most influential social and technological factors. The statistical processing of the database is done with the Statgraphics software (version 4.1), chosen for its didactic support for the required calculations and associated methods. Finally, the correlation of the variables was computed to classify the determinants and thus characterize the consumption of the dwellings. (author)

  3. Analysis of the importance of structural change in non-energy intensive industry for prospective modelling: The French case

    International Nuclear Information System (INIS)

    Seck, Gondia Sokhna; Guerassimoff, Gilles; Maïzi, Nadia

    2016-01-01

A large number of studies have been conducted on the contribution of technological progress and structural change to the evolution of aggregate energy intensity in the industrial sector. However, no analyses have examined these changes in the non-energy intensive industry in France. We analyzed their importance in French industry with respect to energy intensity, energy costs, value added, labour and the diffusion of production sites by using data at the 3-digit level with 236 sectors. Using a new decomposition method that leaves no residual, this paper examines, over the 10 years from 1996 to 2005, the changes that occurred in an area that has been neglected in energy analysis. We found that structural change had an overwhelming effect on the decline of aggregate energy intensity. Furthermore, we found that the higher the level of sector disaggregation, the more significant the changes that can be attributed to structural change, due to the homogeneity of this industrial group. The results of our study show that it is important to take into account the effects of structural change in “bottom-up” modelling exercises so as to improve the accuracy of energy demand forecasting for policy-makers and scientists. - Highlights: • Defining NEI industries with a quantitative approach from relevant indicators in France. • Developing a new decomposition method, given in additive form with no residual, for NEI. • Structural change is the overwhelming factor in improving energy performance within NEI. • The finer the sector disaggregation, the clearer the structural-change effect in homogeneous industries.
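The abstract does not give the authors' exact decomposition formula, but the additive LMDI-I decomposition is one standard method with the stated residual-free property, and it illustrates the structure-versus-intensity split; the two-sector numbers below are invented:

```python
from math import log

def logmean(a, b):
    """Logarithmic mean: L(a, b) = (a - b) / ln(a / b), with L(a, a) = a."""
    return a if a == b else (a - b) / log(a / b)

def lmdi_intensity_decomposition(s0, i0, s1, i1):
    """Additive LMDI-I decomposition of the change in aggregate energy
    intensity I = sum_k s_k * i_k (activity shares s_k, sectoral
    intensities i_k) between period 0 and period 1.
    The structural and intensity effects sum exactly to I1 - I0:
    there is no residual term."""
    d_struct = sum(logmean(sa * ia, sb * ib) * log(sa / sb)
                   for sa, ia, sb, ib in zip(s1, i1, s0, i0))
    d_intens = sum(logmean(sa * ia, sb * ib) * log(ia / ib)
                   for sa, ia, sb, ib in zip(s1, i1, s0, i0))
    return d_struct, d_intens

# Hypothetical two-sector example (shares sum to 1):
s0, i0 = [0.6, 0.4], [2.0, 0.5]
s1, i1 = [0.5, 0.5], [1.8, 0.5]
ds, di = lmdi_intensity_decomposition(s0, i0, s1, i1)
# ds + di equals the total intensity change I1 - I0 exactly.
```

The exactness follows because L(a, b)·ln(a/b) = a − b, so summing the two effect terms telescopes to the total change in each sector's intensity contribution.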

  4. Determinants of Advertising Effectiveness: The Development of an International Advertising Elasticity Database and a Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Sina Henningsen

    2011-12-01

Full Text Available Increasing demand for marketing accountability requires an efficient allocation of marketing expenditures. Managers who know the elasticity of their marketing instruments can allocate their budgets optimally. Meta-analyses offer a basis for deriving benchmark elasticities for advertising. Although they provide a variety of valuable insights, a major shortcoming of prior meta-analyses is that they report only generalized results, as the disaggregated raw data are not made available. This problem is highly relevant because coding of empirical studies, at least to a certain extent, involves subjective judgment. For this reason, meta-studies would be more valuable if researchers and practitioners had access to disaggregated data allowing them to conduct further analyses of individual, e.g., product-level-specific, interests. We are the first to address this gap by providing (1) an advertising elasticity database (AED) and (2) empirical generalizations about advertising elasticities and their determinants. Our findings indicate that the average current-period advertising elasticity is 0.09, which is substantially smaller than the value of 0.12 that was recently reported by Sethuraman, Tellis, and Briesch (2011). Furthermore, our meta-analysis reveals a wide range of significant determinants of advertising elasticity. For example, we find that advertising elasticities are higher (i) for hedonic and experience goods than for other goods; (ii) for new than for established goods; (iii) when advertising is measured in gross rating points (GRP) instead of absolute terms; and (iv) when the lagged dependent or lagged advertising variable is omitted.

  5. Satellite Leaf Area Index: Global Scale Analysis of the Tendencies Per Vegetation Type Over the Last 17 Years

    Directory of Open Access Journals (Sweden)

    Simon Munier

    2018-03-01

Full Text Available The main objective of this study is to detect and quantify changes in the vegetation dynamics of each vegetation type at the global scale over the last 17 years. With recent advances in remote sensing techniques, it is now possible to study the Leaf Area Index (LAI) seasonal and interannual variability at the global scale and in a consistent way over the last decades. However, the coarse spatial resolution of these satellite-derived products does not permit distinguishing vegetation types within mixed pixels. Considering only the dominant type per pixel has two main drawbacks: the LAI of the dominant vegetation type is contaminated by spurious signal from other vegetation types, and, at the global scale, significant areas of individual vegetation types are neglected. In this study, we first developed a Kalman Filtering (KF) approach to disaggregate the satellite-derived LAI from GEOV1 over nine main vegetation types, including grasslands and crops as well as evergreen, broadleaf and coniferous forests. The KF approach permits the separation of distinct LAI values for individual vegetation types that coexist within a pixel. The disaggregated LAI product, called LAI-MC (Multi-Cover), consists of world-wide LAI maps provided every 10 days for each vegetation type over the 1999–2015 period. A trend analysis of the original GEOV1 LAI product and of the disaggregated LAI time series was conducted using the Mann-Kendall test. Resulting trends of the GEOV1 LAI (which accounts for all vegetation types) compare well with previous regional or global studies, showing a greening over a large part of the globe. When considering each vegetation type individually, the largest global trend from LAI-MC is found for coniferous forests (0.0419 m² m⁻² yr⁻¹), followed by summer crops (0.0394 m² m⁻² yr⁻¹), while winter crops and grasslands show the smallest global trends (0.0261 m² m⁻² yr⁻¹ and 0.0279 m² m⁻² yr⁻¹, respectively). The LAI

  6. Speculation and the 2008 oil bubble: The DCOT Report analysis

    International Nuclear Information System (INIS)

    Tokic, Damir

    2012-01-01

This article analyzes the CFTC's Disaggregated Commitments of Traders (DCOT) Report to get more insights into the behavior of different traders during the 2008 oil bubble. The analysis shows that: (1) the Money Manager category perfectly played the oil bubble, got in early and started selling shortly before the bubble peak; (2) the Producer/Merchant/Processor/User category and the Nonreportable category were covering their short positions into the peak of the bubble; (3) the Swap/Dealer category benefited while the price of oil was rising, but incurred heavy losses as the price of oil collapsed; (4) we find no indications of speculation by any group of traders via positive feedback trading or rational destabilization; and (5) we do, however, criticize the commercial hedgers for failing to arbitrage the soaring oil prices in 2008. - Highlights: ► We analyze the DCOT Report to study the behavior of traders during the 2008 oil bubble. ► The Money Manager category perfectly played the oil bubble. ► The Producer/Merchant/Processor/User and the Nonreportables engaged in short covering. ► The Swap/Dealer category incurred heavy losses as the price of oil collapsed. ► We find no indications of speculation by any category.

  7. Model for Analysis of Energy Demand (MAED-2). User's manual

    International Nuclear Information System (INIS)

    2007-01-01

The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been further developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming the result the MAED model. The first version of the MAED model was designed for DOS, and was later converted for Windows. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application.

  8. Model for Analysis of Energy Demand (MAED-2)

    International Nuclear Information System (INIS)

    2007-01-01

The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been further developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming the result the MAED model. The first version of the MAED model was designed for DOS, and was later converted for Windows. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application.

  9. Phenomenology and Qualitative Data Analysis Software (QDAS: A Careful Reconciliation

    Directory of Open Access Journals (Sweden)

    Brian Kelleher Sohn

    2017-01-01

Full Text Available An oft-cited phenomenological methodologist, Max VAN MANEN (2014), claims that qualitative data analysis software (QDAS) is not an appropriate tool for phenomenological research. Yet phenomenologists rarely describe how phenomenology is to be done: pencil, paper, computer? DAVIDSON and DI GREGORIO (2011) urge QDAS contrarians such as VAN MANEN to get over their methodological loyalties and join the digital world, claiming that all qualitative researchers, whatever their methodology, perform processes aided by QDAS: disaggregation and recontextualization of texts. Other phenomenologists exemplify DAVIDSON and DI GREGORIO's observation that arguments against QDAS often identify problems more closely related to the researchers than to QDAS. But the concerns about technology of McLUHAN (2003 [1964]), HEIDEGGER (2008 [1977]), and FLUSSER (2013) cannot be ignored. In this conceptual article I answer the questions of phenomenologists and the call of QDAS methodologists to describe how I used QDAS to carry out a phenomenological study, in order to guide others who choose to reconcile the use of software to assist their research. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1701142

  10. Model for Analysis of Energy Demand (MAED-2). User's manual

    International Nuclear Information System (INIS)

    2006-01-01

The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been further developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming the result the MAED model. The first version of the MAED model was designed for DOS, and was later converted for Windows. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application.

  11. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; they are, in order of influence: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) rate of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
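The One-Factor-At-A-Time screening described above can be sketched as follows; the toy model and parameter names are placeholders, not those of the Ait Ben Yacoub model:

```python
def gross_margin(params):
    """Toy stand-in for the agro-economic model's objective function."""
    return (params["yield_response"] * params["water_supply"]
            + params["weight_gain"] * 10.0)

baseline = {"yield_response": 1.2, "water_supply": 50.0, "weight_gain": 0.8}

def ofat_sensitivity(model, baseline, rel_step=0.10):
    """One-Factor-At-A-Time screening: perturb each parameter by +/- rel_step
    while holding the others at baseline, and report a normalized sensitivity
    index (relative output change divided by relative input change)."""
    y0 = model(baseline)
    indexes = {}
    for name, value in baseline.items():
        hi = dict(baseline, **{name: value * (1 + rel_step)})
        lo = dict(baseline, **{name: value * (1 - rel_step)})
        indexes[name] = (model(hi) - model(lo)) / (2 * rel_step * y0)
    return indexes
```

Ranking the resulting indexes by magnitude reproduces the kind of ordered influence list reported in the abstract; note that OFAT, unlike variance-based methods, cannot detect interactions between parameters.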

  12. The transition of emotions in collective action. The #Yo Soy 132 youth discourse analysis

    Directory of Open Access Journals (Sweden)

    Verónica García Martínez

    2016-12-01

    Full Text Available Emotions belong to human nature and must not be disaggregated from it; yet they have been neglected by many disciplinary perspectives that privilege rational action over emotion in the analysis of collective action. Only recently has interest arisen in analyzing how emotions boost social mobilizations. The objective of this work is to analyze the emotions that arose in the context of the collective action undertaken by young Mexicans in the movement #Yo Soy 132, using the social constructionism of emotions as the theoretical perspective and discourse analysis as the method. This paper is presented in five parts: in the first, emotions in collective action are discussed; subsequently, the studied phenomenon is contextualized; in the third, the analysis method is detailed; in the fourth, results are presented; and in the last section, the findings are confronted with the theoretical approaches adopted. It is concluded that regardless of context, emotions follow a similar path in collective action, and that the emotions that emerge are not positive or negative in themselves; rather, they act as drivers of or detractors from such action.

  13. Public Transit Equity Analysis at Metropolitan and Local Scales: A Focus on Nine Large Cities in the US.

    Science.gov (United States)

    Griffin, Greg Phillip; Sener, Ipek Nese

    2016-01-01

    Recent studies on transit service through an equity lens have captured broad trends from the literature and national-level data or analyzed disaggregate data at the local level. This study integrates these methods by employing a geostatistical analysis of new transit access and income data compilations from the Environmental Protection Agency. By using a national data set, this study demonstrates a method for income-based transit equity analysis and provides results spanning nine large auto-oriented cities in the US. Results demonstrate variability among cities' transit services to low-income populations, with differing results when viewed at the regional and local levels. Regional-level analysis of transit service hides significant variation through spatial averaging, whereas the new data employed in this study demonstrates a block-group scale equity analysis that can be used on a national-scale data set. The methods used can be adapted for evaluation of transit and other modes' transportation service in areas to evaluate equity at the regional level and at the neighborhood scale while controlling for spatial autocorrelation. Transit service equity planning can be enhanced by employing local Moran's I to improve local analysis.
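    The local Moran's I statistic the study uses for block-group-level analysis can be computed directly; the sketch below uses a toy one-dimensional chain of zones with binary contiguity weights, and the transit-access values are hypothetical (a real analysis would use a library such as PySAL and polygon contiguity).

```python
import numpy as np

# Local Moran's I on a toy chain of 6 zones; values are invented
# transit-access scores, neighbours are adjacent zones in the chain.
x = np.array([1.0, 2.0, 2.5, 8.0, 9.0, 1.5])
n = len(x)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
W = W / W.sum(axis=1, keepdims=True)     # row-standardize the weights

z = x - x.mean()
m2 = (z ** 2).sum() / n
local_I = z * (W @ z) / m2               # one statistic per zone

# Positive values flag zones similar to their neighbours (clusters),
# negative values flag spatial outliers.
print(np.round(local_I, 3))
```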

  14. Secondary analysis of data can inform care delivery for Indigenous women in an acute mental health inpatient unit.

    Science.gov (United States)

    Bradley, Pat; Cunningham, Teresa; Lowell, Anne; Nagel, Tricia; Dunn, Sandra

    2017-02-01

    There is a paucity of research exploring Indigenous women's experiences in acute mental health inpatient services in Australia. Even less is known of Indigenous women's experience of seclusion events, as published data are rarely disaggregated by both indigeneity and gender. This research used secondary analysis of pre-existing datasets to identify any quantifiable difference in recorded experience between Indigenous and non-Indigenous women, and between Indigenous women and Indigenous men in an acute mental health inpatient unit. Standard separation data of age, length of stay, legal status, and discharge diagnosis were analysed, as were seclusion register data of age, seclusion grounds, and number of seclusion events. Descriptive statistics were used to summarize the data and, where warranted, inferential analysis of variance/multivariate analysis of variance testing was applied using SPSS software. The results showed that secondary analysis of existing datasets can provide a rich source of information to describe the experience of target groups, and to guide service planning and delivery of individualized, culturally-secure mental health care at a local level. The results are discussed, service and policy development implications are explored, and suggestions for further research are offered. © 2016 Australian College of Mental Health Nurses Inc.

  15. Analysis of the effects of section 29 tax credits on reserve additions and production of gas from unconventional resources

    International Nuclear Information System (INIS)

    1990-09-01

    Federal tax credits for production of natural gas from unconventional resources can stimulate drilling and reserves additions at a relatively low cost to the Treasury. This report presents the results of an analysis of the effects of a proposed extension of the Section 29 alternative fuels production credit specifically for unconventional gas. ICF Resources estimated the net effect of the extension of the credit (the difference between development activity expected with the extension of the credit and that expected if the credit expires in December 1990 as scheduled). The analysis addressed the effect of tax credits on project economics and capital formation, drilling and reserve additions, production, impact on the US and regional economies, and the net public sector costs and incremental revenues. The analysis was based on explicit modeling of the three dominant unconventional gas resources: Tight sands, coalbed methane, and Devonian shales. It incorporated the most current data on resource size, typical well recoveries and economics, and anticipated activity of the major producers. Each resource was further disaggregated for analysis based on distinct resource characteristics, development practices, regional economics, and historical development patterns.

  16. Analysis of the effects of section 29 tax credits on reserve additions and production of gas from unconventional resources

    Energy Technology Data Exchange (ETDEWEB)

    1990-09-01

    Federal tax credits for production of natural gas from unconventional resources can stimulate drilling and reserves additions at a relatively low cost to the Treasury. This report presents the results of an analysis of the effects of a proposed extension of the Section 29 alternative fuels production credit specifically for unconventional gas. ICF Resources estimated the net effect of the extension of the credit (the difference between development activity expected with the extension of the credit and that expected if the credit expires in December 1990 as scheduled). The analysis addressed the effect of tax credits on project economics and capital formation, drilling and reserve additions, production, impact on the US and regional economies, and the net public sector costs and incremental revenues. The analysis was based on explicit modeling of the three dominant unconventional gas resources: Tight sands, coalbed methane, and Devonian shales. It incorporated the most current data on resource size, typical well recoveries and economics, and anticipated activity of the major producers. Each resource was further disaggregated for analysis based on distinct resource characteristics, development practices, regional economics, and historical development patterns.

  17. The Role of Bioeconomy Sectors and Natural Resources in EU Economies: A Social Accounting Matrix-Based Analysis Approach

    Directory of Open Access Journals (Sweden)

    Patricia D. Fuentes-Saguar

    2017-12-01

    Full Text Available The bio-based economy will be crucial in achieving sustainable development, covering the full range of natural resources. In this sense, it is very relevant to analyze the economic links between the bioeconomy sectors and the rest of the economy, determining their total and decomposed impact on economic growth. One of the major problems in carrying out this analysis is the lack of information and complete databases that allow analysis of the bioeconomy and its effects on other economic activities. To overcome this issue, disaggregated social accounting matrices have been obtained for the highly bio-based sectors of the 28 European Union member states. Using this complex database, a linear multiplier analysis shows the future key role of bio-based sectors in boosting economic development in the EU. Results show that the bioeconomy has not yet unleashed its full potential in terms of output and job creation. Thus, output and employment multipliers show that many sectors related to the bioeconomy are still underperforming compared to the EU average, particularly those with higher value added, although they are still crucial sectors for wealth creation.
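    The linear multiplier analysis mentioned above rests on the Leontief-type accounting multiplier matrix (I − A)⁻¹. A minimal sketch follows; the 3-account coefficient matrix is invented for illustration, not taken from the EU SAMs in the paper.

```python
import numpy as np

# SAM-style linear multiplier sketch with hypothetical coefficients.
A = np.array([            # average expenditure propensities, endogenous accounts
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.10],
    [0.05, 0.10, 0.20],
])
M = np.linalg.inv(np.eye(3) - A)    # accounting multiplier matrix (I - A)^-1

# Column sums give the total (direct + indirect + induced) effect of a
# one-unit exogenous injection into each account.
output_multipliers = M.sum(axis=0)
print(np.round(output_multipliers, 3))
```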

  18. Profitable ultrasonic assisted microwave disintegration of sludge biomass: Modelling of biomethanation and energy parameter analysis.

    Science.gov (United States)

    Kavitha, S; Rajesh Banu, J; Kumar, Gopalakrishnan; Kaliappan, S; Yeom, Ick Tae

    2018-04-01

    In this study, microwave irradiation was employed to disintegrate sludge biomass profitably by first deagglomerating the sludge with a mechanical device, an ultrasonicator. The outcomes of the study revealed that a specific energy input of 3.5 kJ/kg TS was optimal for deagglomeration with limited cell lysis. Higher suspended solids (SS) reduction and biomass lysis efficiencies of about 22.5% and 33.2%, respectively, were achieved through ultrasonic assisted microwave disintegration (UMWD) when compared to microwave disintegration, MWD (15% and 20.9%). The results of the biochemical methane potential (BMP) test were used to estimate the biodegradability of the samples. Among the samples subjected to BMP, UMWD showed better amenability to anaerobic digestion, with a higher methane production potential of 0.3 L/g COD, representing the enhanced liquefaction potential of disaggregated sludge biomass. Economic analysis of the proposed method of sludge biomass pretreatment showed a net profit of 2.67 USD/Ton. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Determinants of energy demand in the French service sector: A decomposition analysis

    International Nuclear Information System (INIS)

    Mairet, Nicolas; Decellas, Fabrice

    2009-01-01

    This paper analyzes the changes in the energy consumption of the service sector in France over the period 1995-2006, using the logarithmic mean Divisia index I (LMDI I) decomposition method. The analysis is carried out at various disaggregation levels to highlight the specifics of each sub-sector and end-use according to their respective determinants. The results show that in this period the economic growth of the service sector was the main factor that led to the increase in total energy consumption. Structure, productivity, substitution and intensity effects restricted this growth, but with limited effect. By analyzing each end-use, this paper enables a more precise understanding of the impact of these factors. The activity effect was the main determinant of the increase in energy consumption for all end-uses except for air conditioning, for which the equipment rate effect was the main factor. Structural changes in the service sector primarily impacted energy consumption for space heating and cooking. Improvements in productivity limited the growth of energy consumption for all end-uses except for cooking. Finally, energy efficiency improvements mainly affected space-heating energy use.
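    The LMDI I method used above can be sketched for the simplest two-factor case, E = Q × (E/Q): an activity effect plus an intensity effect, weighted by the logarithmic mean of energy use. The figures are invented; the point is that the effects sum exactly to the observed change (LMDI is residual-free).

```python
import numpy as np

# Additive LMDI I decomposition of a change in energy use E = Q * (E/Q).
def L(a, b):
    """Logarithmic mean of two positive numbers."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

Q0, Q1 = 100.0, 130.0             # activity (e.g. value added), hypothetical
E0, E1 = 50.0, 55.0               # energy use, hypothetical
I0, I1 = E0 / Q0, E1 / Q1         # energy intensity

w = L(E1, E0)
activity_effect = w * np.log(Q1 / Q0)
intensity_effect = w * np.log(I1 / I0)

print(round(activity_effect, 3), round(intensity_effect, 3))
```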

  20. [Trend analysis of acquired syphilis in Mexico from 2003 to 2013].

    Science.gov (United States)

    Herrera-Ortiz, Antonia; Uribe-Salas, Felipe J; Olamendi-Portugal, Ma Leonidez; García-Cisneros, Santa; Conde-Glez, Carlos Jesús; Sánchez-Alemán, Miguel A

    2015-01-01

    To identify the population group in which the increase in syphilis was concentrated. The information was collected from the Mexico health statistical yearbooks. Information disaggregated by sex, age group and state for the period 2003 to 2013 was used to build different databases. Linear regression analysis with 95% confidence intervals was used to evaluate changes over time in different population groups. An increase of 0.67 cases per 100,000 population (95%CI 0.30-1.04) in men was detected from 2010. The increase was concentrated in the 20-24 and 25-44 age groups. The highest incidence of acquired syphilis was reported in the last two years, 2012 and 2013; the incidence reported in the last year was 1.85 times that reported in 2003. Aguascalientes, Distrito Federal, Durango, Mexico, Oaxaca, Puebla, Quintana Roo, Yucatan and Zacatecas reported increases in syphilis during the study period. Acquired syphilis may be reemerging in our country among young men; this increase is not uniform across the country, and it is necessary to focus intervention measures for this sexually transmitted infection.
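    A trend analysis of this kind reduces to ordinary least squares of incidence on year with a confidence interval on the slope. The sketch below uses an invented incidence series, not the study's data.

```python
import math

# Least-squares trend with a 95% CI on the slope; the series is hypothetical.
years = list(range(2003, 2014))
rate = [3.0, 3.1, 3.0, 3.2, 3.1, 3.3, 3.2, 3.6, 3.9, 4.4, 4.8]  # per 100,000

n = len(years)
mx = sum(years) / n
my = sum(rate) / n
sxx = sum((x - mx) ** 2 for x in years)
slope = sum((x - mx) * (y - my) for x, y in zip(years, rate)) / sxx
intercept = my - slope * mx

resid = [y - (intercept + slope * x) for x, y in zip(years, rate)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
ci = (slope - 2.262 * se, slope + 2.262 * se)   # t(0.975, df=9) ~ 2.262

print(round(slope, 3), [round(c, 3) for c in ci])  # cases/100,000 per year
```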

  1. From water use to water scarcity footprinting in environmentally extended input-output analysis.

    Science.gov (United States)

    Ridoutt, Bradley George; Hadjikakou, Michalis; Nolan, Martin; Bryan, Brett A

    2018-05-18

    Environmentally extended input-output analysis (EEIOA) supports environmental policy by quantifying how demand for goods and services leads to resource use and emissions across the economy. However, some types of resource use and emissions require spatially-explicit impact assessment for meaningful interpretation, which is not possible in conventional EEIOA. For example, water use in locations of scarcity and abundance is not environmentally equivalent. Opportunities for spatially-explicit impact assessment in conventional EEIOA are limited because official input-output tables tend to be produced at the scale of political units which are not usually well aligned with environmentally relevant spatial units. In this study, spatially-explicit water scarcity factors and a spatially disaggregated Australian water use account were used to develop water scarcity extensions that were coupled with a multi-regional input-output model (MRIO). The results link demand for agricultural commodities to the problem of water scarcity in Australia and globally. Important differences were observed between the water use and water scarcity footprint results, as well as the relative importance of direct and indirect water use, with significant implications for sustainable production and consumption-related policies. The approach presented here is suggested as a feasible general approach for incorporating spatially-explicit impact assessment in EEIOA.
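    The coupling described above can be condensed into the standard EEIOA identity: footprint = w⊤(I − A)⁻¹y, with the water coefficients optionally multiplied by spatially derived scarcity weights. The 2-sector data below are invented for illustration.

```python
import numpy as np

# Environmentally extended IO sketch with hypothetical 2-sector data.
A = np.array([[0.2, 0.3],                 # technical coefficients
              [0.1, 0.1]])
water_per_output = np.array([5.0, 0.5])   # litres per $ of gross output
scarcity = np.array([3.0, 1.0])           # dimensionless scarcity weights

y = np.array([100.0, 200.0])              # final demand ($)
x = np.linalg.solve(np.eye(2) - A, y)     # gross output required, direct + indirect

water_footprint = water_per_output @ x
scarcity_footprint = (water_per_output * scarcity) @ x  # scarcity-weighted

print(round(water_footprint, 1), round(scarcity_footprint, 1))
```

The gap between the two totals is exactly the "water use versus water scarcity footprint" distinction the abstract highlights.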

  2. Energy use in the Greek manufacturing sector: A methodological framework based on physical indicators with aggregation and decomposition analysis

    International Nuclear Information System (INIS)

    Salta, Myrsine; Polatidis, Heracles; Haralambopoulos, Dias

    2009-01-01

    A bottom-up methodological framework was developed and applied for the period 1985-2002, to selected manufacturing sub-sectors in Greece namely, food, beverages and tobacco, iron and steel, non-ferrous metals, non-metallic minerals and paper. Disaggregate physical data were aggregated according to their specific energy consumption (SEC) values and physical energy efficiency indicators were estimated. The Logarithmic Mean Divisia index method was also used and the effects of the production, structure and energy efficiency to changes in sub-sectoral manufacturing energy use were further assessed. Primary physical energy efficiency improved by 28% for the iron and steel and by 9% for the non-metallic minerals industries, compared to the base year 1990. For the food, beverages and tobacco and the paper sub-sectors, primary efficiency deteriorated by 20% and by 15%, respectively; finally electricity efficiency deteriorated by 7% for the non-ferrous metals. Sub-sectoral energy use is mainly driven by production output and energy efficiency changes. Sensitivity analysis showed that alternative SEC values do not influence the results whereas the selected base year is more critical for this analysis. Significant efficiency improvements refer to 'heavy' industry; 'light' industry needs further attention by energy policy to modernize its production plants and improve its efficiency

  3. Energy use in the Greek manufacturing sector: A methodological framework based on physical indicators with aggregation and decomposition analysis

    Energy Technology Data Exchange (ETDEWEB)

    Salta, Myrsine; Polatidis, Heracles; Haralambopoulos, Dias [Energy Management Laboratory, Department of Environment, University of the Aegean, University Hill, Mytilene 81100 (Greece)

    2009-01-15

    A bottom-up methodological framework was developed and applied for the period 1985-2002, to selected manufacturing sub-sectors in Greece namely, food, beverages and tobacco, iron and steel, non-ferrous metals, non-metallic minerals and paper. Disaggregate physical data were aggregated according to their specific energy consumption (SEC) values and physical energy efficiency indicators were estimated. The Logarithmic Mean Divisia index method was also used and the effects of the production, structure and energy efficiency to changes in sub-sectoral manufacturing energy use were further assessed. Primary physical energy efficiency improved by 28% for the iron and steel and by 9% for the non-metallic minerals industries, compared to the base year 1990. For the food, beverages and tobacco and the paper sub-sectors, primary efficiency deteriorated by 20% and by 15%, respectively; finally electricity efficiency deteriorated by 7% for the non-ferrous metals. Sub-sectoral energy use is mainly driven by production output and energy efficiency changes. Sensitivity analysis showed that alternative SEC values do not influence the results whereas the selected base year is more critical for this analysis. Significant efficiency improvements refer to 'heavy' industry; 'light' industry needs further attention by energy policy to modernize its production plants and improve its efficiency. (author)

  4. Time-Series Analysis of Continuously Monitored Blood Glucose: The Impacts of Geographic and Daily Lifestyle Factors

    Directory of Open Access Journals (Sweden)

    Sean T. Doherty

    2015-01-01

    Full Text Available Type 2 diabetes is known to be associated with environmental, behavioral, and lifestyle factors. However, the actual impacts of these factors on blood glucose (BG variation throughout the day have remained relatively unexplored. Continuous blood glucose monitors combined with human activity tracking technologies afford new opportunities for exploration in a naturalistic setting. Data from a study of 40 patients with diabetes is utilized in this paper, including continuously monitored BG, food/medicine intake, and patient activity/location tracked using global positioning systems over a 4-day period. Standard linear regression and more disaggregated time-series analysis using autoregressive integrated moving average (ARIMA are used to explore patient BG variation throughout the day and over space. The ARIMA models revealed a wide variety of BG correlating factors related to specific activity types, locations (especially those far from home, and travel modes, although the impacts were highly personal. Traditional variables related to food intake and medications were less often significant. Overall, the time-series analysis revealed considerable patient-by-patient variation in the effects of geographic and daily lifestyle factors. We would suggest that maps of BG spatial variation or an interactive messaging system could provide new tools to engage patients and highlight potential risk factors.
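    The study fits ARIMA models; as a self-contained illustration, the sketch below fits the simplest member of that family, an AR(1), by least squares to a simulated blood-glucose-like series (no patient data; the true coefficients 0.8 and 1.2 are chosen for the simulation).

```python
import numpy as np

# Fit an AR(1) model by least squares to a simulated series.
rng = np.random.default_rng(0)
n = 500
bg = np.empty(n)
bg[0] = 6.0
for t in range(1, n):                 # true process: bg_t = 1.2 + 0.8*bg_{t-1} + noise
    bg[t] = 1.2 + 0.8 * bg[t - 1] + rng.normal(0, 0.3)

X = bg[:-1]                           # lagged values
Y = bg[1:]
phi, c = np.polyfit(X, Y, 1)          # slope ~ AR coefficient, intercept ~ constant

print(round(phi, 2), round(c, 2))     # estimates should be near 0.8 and 1.2
```

A full ARIMA fit (with differencing and moving-average terms) would normally be delegated to a library such as statsmodels; the regression above shows only the autoregressive core.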

  5. Biofuels policy and the US market for motor fuels: Empirical analysis of ethanol splashing

    Energy Technology Data Exchange (ETDEWEB)

    Walls, W.D., E-mail: wdwalls@ucalgary.ca [Department of Economics, University of Calgary, 2500 University Drive NW, Calgary, Alberta, T2N 1N4 (Canada); Rusco, Frank; Kendix, Michael [US GAO (United States)

    2011-07-15

    Low ethanol prices relative to the price of gasoline blendstock, and tax credits, have resulted in discretionary blending at wholesale terminals of ethanol into fuel supplies above required levels-a practice known as ethanol splashing in industry parlance. No one knows precisely where or in what volume ethanol is being blended with gasoline and this has important implications for motor fuels markets: Because refiners cannot perfectly predict where ethanol will be blended with finished gasoline by wholesalers, they cannot know when to produce and where to ship a blendstock that when mixed with ethanol at 10% would create the most economically efficient finished motor gasoline that meets engine standards and has comparable evaporative emissions as conventional gasoline without ethanol blending. In contrast to previous empirical analyses of biofuels that have relied on highly aggregated data, our analysis is disaggregated to the level of individual wholesale fuel terminals or racks (of which there are about 350 in the US). We incorporate the price of ethanol as well as the blendstock price to model the wholesaler's decision of whether or not to blend additional ethanol into gasoline at any particular wholesale city-terminal. The empirical analysis illustrates how ethanol and gasoline prices affect ethanol usage, controlling for fuel specifications, blend attributes, and city-terminal-specific effects that, among other things, control for differential costs of delivering ethanol from bio-refinery to wholesale rack. - Research Highlights: > Low ethanol prices and tax credits have resulted in discretionary blending of ethanol into fuel supplies above required levels. > This has important implications for motor fuels markets and vehicular emissions. > Our analysis incorporates the price of ethanol as well as the blendstock price to model the wholesaler's decision of whether or not to blend additional ethanol into gasoline at any particular wholesale city-terminal.

  6. Biofuels policy and the US market for motor fuels: Empirical analysis of ethanol splashing

    International Nuclear Information System (INIS)

    Walls, W.D.; Rusco, Frank; Kendix, Michael

    2011-01-01

    Low ethanol prices relative to the price of gasoline blendstock, and tax credits, have resulted in discretionary blending at wholesale terminals of ethanol into fuel supplies above required levels-a practice known as ethanol splashing in industry parlance. No one knows precisely where or in what volume ethanol is being blended with gasoline and this has important implications for motor fuels markets: Because refiners cannot perfectly predict where ethanol will be blended with finished gasoline by wholesalers, they cannot know when to produce and where to ship a blendstock that when mixed with ethanol at 10% would create the most economically efficient finished motor gasoline that meets engine standards and has comparable evaporative emissions as conventional gasoline without ethanol blending. In contrast to previous empirical analyses of biofuels that have relied on highly aggregated data, our analysis is disaggregated to the level of individual wholesale fuel terminals or racks (of which there are about 350 in the US). We incorporate the price of ethanol as well as the blendstock price to model the wholesaler's decision of whether or not to blend additional ethanol into gasoline at any particular wholesale city-terminal. The empirical analysis illustrates how ethanol and gasoline prices affect ethanol usage, controlling for fuel specifications, blend attributes, and city-terminal-specific effects that, among other things, control for differential costs of delivering ethanol from bio-refinery to wholesale rack. - Research highlights: → Low ethanol prices and tax credits have resulted in discretionary blending of ethanol into fuel supplies above required levels. → This has important implications for motor fuels markets and vehicular emissions. → Our analysis incorporates the price of ethanol as well as the blendstock price to model the wholesaler's decision of whether or not to blend additional ethanol into gasoline at any particular wholesale city-terminal.
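    The wholesaler's decision modelled above can be stylized as a simple margin rule: splash-blend extra ethanol when ethanol, net of the tax credit, is cheaper than the blendstock it displaces. This is a toy economic sketch, not the paper's econometric model; all prices are hypothetical $/gal figures.

```python
# Stylized splash-blending margin; prices are hypothetical.
def blend_margin(blendstock_price, ethanol_price, tax_credit):
    """Per-gallon gain from substituting one gallon of ethanol for blendstock."""
    return blendstock_price - (ethanol_price - tax_credit)

def should_splash(blendstock_price, ethanol_price, tax_credit):
    return blend_margin(blendstock_price, ethanol_price, tax_credit) > 0

print(should_splash(2.80, 2.60, 0.45))   # credit makes ethanol cheaper: True
print(should_splash(2.80, 3.40, 0.45))   # ethanol too expensive even with credit: False
```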

  7. 77 FR 26531 - Request for Information To Gather Technical Expertise Pertaining to the Disaggregation of Asian...

    Science.gov (United States)

    2012-05-04

    ...., ethnicity, language, background, gender, etc. 3.2.3 Data Collection and Systems. Please describe how the... in the Ivory Tower: Dilemmas of Racial Inequality in American Higher Education. New York: Teachers...

  8. A disaggregate freight transport model of transport chain and shipment size choice

    NARCIS (Netherlands)

    Windisch, E.; De Jong, G.C.; Van Nes, R.; Hoogendoorn, S.P.

    2010-01-01

    The field of freight transport modelling is relatively young compared to passenger transport modelling. However, some key issues in freight policy, like growing freight shares on the road, advanced logistics concepts or emerging strict freight transport regulations, have been creating increasing

  9. How Does Disaggregating a Pooled Inventory Affect a Marine Aircraft Group?

    Science.gov (United States)

    2014-12-01

    Submitted in partial fulfillment of the requirements for the degree of MASTER OF BUSINESS ADMINISTRATION from the NAVAL...

  10. Disaggregating the effects of acculturation and acculturative stress on the mental health of Asian Americans.

    Science.gov (United States)

    Hwang, Wei-Chin; Ting, Julia Y

    2008-04-01

    This study examines the impact of level of acculturation and acculturative stress on the mental health of Asian American college students. Hierarchical regression analyses were used to clarify the relation between level of acculturation, acculturative stress, and mental health outcomes (psychological distress and clinical depression). Being less identified with mainstream United States culture was associated with higher psychological distress and clinical depression, but lost significance when acculturative stress was introduced into the model. Retention or relinquishing of identification with one's heritage culture was not associated with mental health outcomes. Although understanding level of acculturation can help us identify those at risk, findings suggest that acculturative stress is a more proximal risk factor and increases risk for mental health problems independently of global perceptions of stress.
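    The hierarchical-regression logic above (enter acculturation first, then acculturative stress, and compare explained variance) can be sketched with simulated data. The data-generating process below is invented so that stress, not acculturation, drives distress, mirroring the study's conclusion rather than reproducing its data.

```python
import numpy as np

# Hierarchical regression sketch on simulated data.
rng = np.random.default_rng(1)
n = 300
acculturation = rng.normal(size=n)
stress = 0.4 * acculturation + rng.normal(size=n)   # correlated predictors
distress = 0.8 * stress + rng.normal(size=n)        # outcome depends on stress

def r2(predictors, y):
    """R-squared of OLS of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

step1 = r2([acculturation], distress)               # step 1: acculturation only
step2 = r2([acculturation, stress], distress)       # step 2: add stress

print(round(step1, 3), round(step2, 3))  # the R-squared gain is stress's added share
```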

  11. On the impact of NWP model resolution and power source disaggregation on photovoltaic power prediction

    Czech Academy of Sciences Publication Activity Database

    Eben, Kryštof; Juruš, Pavel; Resler, Jaroslav; Pelikán, Emil; Krč, Pavel

    2011-01-01

    Roč. 8, - (2011), EMS2011-667-4 [EMS Annual Meeting /11./ and European Conference on Applications of Meteorology /10./. 12.09.2011-16.09.2011, Berlin] Institutional research plan: CEZ:AV0Z10300504 Keywords : photovoltaic power prediction * NWP * numerical model parameterization Subject RIV: DG - Athmosphere Sciences, Meteorology

  12. Nanoblock aggregation-disaggregation of zeolite nanoparticles: Temperature control on crystallinity

    KAUST Repository

    Gao, Feifei; Sougrat, Rachid; Albela, Belén; Bonneviot, Laurent

    2011-01-01

    performed at 90 °C, most of the final aggregates exhibit ill-oriented assembly. This is consistent with a trial-and-error block-by-block building mechanism that turns into an irreversible and apparently faster process at 90 °C, causing definitively ill

  13. New Method to Disaggregate and Analyze Single Isolated Helminthes Cells Using Flow Cytometry: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Karen Nava-Castro

    2011-01-01

    Full Text Available In parasitology, particularly in the study of helminths, several methods have been used to look for the expression of specific molecules, such as RT-PCR, western blot, 2D-electrophoresis, and microscopy, among others. However, these methods require homogenization of the whole helminth parasite, preventing evaluation of individual cells or specific cell types in a given parasite tissue or organ. Also, the extremely close interaction between helminths and host cells (particularly immune cells) is an important point to be considered. It is very hard to obtain fresh parasites without host cell contamination. It therefore becomes crucial to determine that the analyzed proteins are exclusively of parasitic origin, and not a consequence of host cell contamination. Flow cytometry is a fluorescence-based technique used to evaluate the expression of extra- and intracellular proteins in different cell types, including protozoan parasites. It also allows the isolation and recovery of single-cell populations. Here, we describe a method to isolate and obtain purified helminth cells.

  14. Urban-rural demarcation within a metropolitan area: a methodology for using small area disaggregation

    CSIR Research Space (South Africa)

    Green, Cheri A

    2008-04-01

    Full Text Available There is ongoing debate with regard to the levels of service provision in urban and rural areas. However, progress with respect to the delivery of planned services can only be efficiently and equitably measured once benchmarks for different areas...

  15. A method for disaggregating clay concretions and eliminating formalin smell in the processing of sediment samples

    DEFF Research Database (Denmark)

    Cedhagen, Tomas

    1989-01-01

    A complete handling procedure for processing sediment samples is described. It includes some improvements of conventional methods. The fixed sediment sample is mixed with a solution of the alkaline detergent AJAX® (Colgate-Palmolive). It is kept at 80-90 °C for 20-40 min. This treatment facilitates...

  16. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    Science.gov (United States)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time necessary to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to implement production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit maximization economic variables which must consider the legal, political, and geological aspects of a project. This development is comprised of a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves which presented a more comprehensive representation of reserve-to-production ratio values than an inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production. This calculated time provided a measure of the discounted cost and calculation of net present value for developing new reserves. 
This study concluded that the theoretical model developed by this research may be used to provide a predictive equation for each major oil and gas state, so that a ratio of net present value to undiscounted net cash flow can be calculated to establish an investment signal for profit maximizers. The equation indicates how production decisions are influenced by exogenous factors, such as price, and how policies have performed, which led to recommendations regarding effective policies and prudent planning.
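The investment signal described above, a ratio of net present value to undiscounted net cash flow, can be sketched in a few lines. This is a minimal illustration, not the study's model; the cash flows, discount rate, and development delay below are hypothetical.

```python
# Ratio of discounted (NPV) to undiscounted net cash flow for a development
# project whose annual cash flows begin after an optional delay. A longer
# delay before reserves reach production lowers the ratio, weakening the
# investment signal. All numbers here are hypothetical.

def npv_ratio(cash_flows, rate, delay_years=0):
    """NPV / undiscounted sum for annual cash flows starting after a delay."""
    npv = sum(cf / (1 + rate) ** (delay_years + t + 1)
              for t, cf in enumerate(cash_flows))
    return npv / sum(cash_flows)

flows = [100.0] * 10  # ten equal annual net cash flows (hypothetical)
r_now = npv_ratio(flows, 0.10)                  # develop immediately
r_delayed = npv_ratio(flows, 0.10, delay_years=3)  # develop after 3 years
print(round(r_now, 3), round(r_delayed, 3))
```

Delaying development discounts every cash flow by an extra factor, so the ratio falls monotonically with the delay time.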

  17. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Hugh [ARIES Collaborative, New York, NY (United States); Wade, Jeremy [ARIES Collaborative, New York, NY (United States)

    2014-04-01

    While it is important to make the equipment (or "plant") in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10%-30% of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) in five houses near Syracuse, NY, and programmed a data logger to collect data at 5-second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home and to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.
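The usefulness accounting described in this record can be sketched as follows. This is a minimal illustration assuming a 110 °F threshold at the fixture and an invented draw record; the study's thresholds and data differ.

```python
# Sketch of the draw-usefulness idea: each logged draw is a series of
# (seconds, gallons, fixture_temp_F) samples taken at 5-second intervals.
# Draw volume counts as "useful" while the fixture temperature is at or
# above a threshold; water delivered below it is waste.

USEFUL_THRESHOLD_F = 110.0  # assumed threshold, not the study's value

def useful_fraction(draw_samples):
    """Return (total_gal, useful_gal) for one hot water draw."""
    total = sum(gal for _, gal, _ in draw_samples)
    useful = sum(gal for _, gal, temp in draw_samples
                 if temp >= USEFUL_THRESHOLD_F)
    return total, useful

# Hypothetical draw: water runs cool at first, then hot water arrives.
draw = [(0, 0.1, 75.0), (5, 0.1, 95.0), (10, 0.1, 115.0), (15, 0.1, 120.0)]
total, useful = useful_fraction(draw)
print(round(useful / total, 2))  # fraction of the draw delivered usefully
```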

  18. Nanoblock aggregation-disaggregation of zeolite nanoparticles: Temperature control on crystallinity

    KAUST Repository

    Gao, Feifei

    2011-04-21

    During the induction period of silicalite-1 formation at 80 °C, primary nanoblocks of 8-11 nm self-assemble into fragile nanoflocculates of ca. 60 nm that dislocate and reappear in a slow pseudoperiodical process. Between 22 and 32 h, the nanoflocculates grow up to 350 nm and contain ill- and well-oriented aggregates of ca. 40 nm. After 48 h, only ill-faceted monodomains of ca. 90 nm remain, which self-assemble into larger flocculates of ca. 450 nm. For crystal growth performed at 90 °C, most of the final aggregates exhibit ill-oriented assembly. This is consistent with a trial-and-error block-by-block building mechanism that turns into an irreversible and apparently faster process at 90 °C, yielding a definitively ill-oriented product. The nanoblocks, aggregates, and flocculates were characterized in nondiluted, nondiluted and ultrasonicated, or diluted and ultrasonicated solutions, using mainly dynamic light scattering and cryo-high-resolution transmission electron microscopy at various tilt angles. © 2011 American Chemical Society.

  19. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, H.; Wade, J.

    2014-04-01

    While it is important to make the equipment (or 'plant') in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10 to 30 percent of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) and programmed a data logger to collect data at 5-second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home and to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Five houses near Syracuse, NY were monitored. Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.

  20. The origins of Asteroidal rock disaggregation: Interplay of thermal fatigue and microstructure

    Science.gov (United States)

    Hazeli, Kavan; El Mir, Charles; Papanikolaou, Stefanos; Delbo, Marco; Ramesh, K. T.

    2018-04-01

    The distributions of size and chemical composition in regolith on airless bodies provide clues to the evolution of the solar system. Recently, the regolith on asteroid (25143) Itokawa, visited by the JAXA Hayabusa spacecraft, was observed to contain millimeter to centimeter sized particles. Itokawa boulders commonly display well-rounded profiles and surface textures that appear inconsistent with mechanical fragmentation during meteorite impact; the rounded profiles have been hypothesized to arise from rolling and movement on the surface as a consequence of seismic shaking. This investigation provides a possible explanation of these observations by exploring the primary crack propagation mechanism during thermal fatigue of a chondrite. Herein, we present the evolution of the full-field strains on the surface as a function of temperature and microstructure, and examine the crack growth during thermal cycling. Our experimental results demonstrate that thermal-fatigue-driven fracture occurs under these conditions. The results suggest that the primary fatigue crack path preferentially follows the interfaces between monominerals, leaving the minerals themselves intact after fragmentation. These observations are explained through a microstructure-based finite element model that is quantitatively compared with our experimental results. These results on the interactions of thermal fatigue cracking with the microstructure may ultimately allow us to distinguish between thermally induced fragments and impact products.

  1. Joint Cost, Production Technology and Output Disaggregation in Regulated Motor Carriers

    Science.gov (United States)

    1978-11-01

    The study uses a sample of 252 Class I Instruction 27 Motor Carriers (Instruction 27 carriers earned at least 75 percent of their revenues from intercity transportation of general commodities over a three-year period) of general freight that existed ...

  2. DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites

    Science.gov (United States)

    2014-10-01

    considering new approaches. According to Air Force Space Command, U.S. space systems face intentional and unintentional threats, which have increased...life cycle costs • Demand for more satellites may stimulate new entrants and competition to lower acquisition costs. • Smaller, less complex...Fiscal constraints and growing threats to space systems have led DOD to consider alternatives for acquiring space-based capabilities, including

  3. Shear-induced aggregation or disaggregation in edible oils: Models, computer simulation, and USAXS measurements

    Science.gov (United States)

    Townsend, B.; Peyronel, F.; Callaghan-Patrachar, N.; Quinn, B.; Marangoni, A. G.; Pink, D. A.

    2017-12-01

    The effects of shear upon the aggregation of solid objects formed from solid triacylglycerols (TAGs) immersed in liquid TAG oils were modeled using Dissipative Particle Dynamics (DPD) and the predictions compared to experimental data using Ultra-Small Angle X-ray Scattering (USAXS). The solid components were represented by spheres interacting via attractive van der Waals forces and short range repulsive forces. A velocity was applied to the liquid particles nearest to the boundary, and Lees-Edwards boundary conditions were used to transmit this motion to non-boundary layers via dissipative interactions. The shear was created through the dissipative forces acting between liquid particles. Translational diffusion was simulated, and the Stokes-Einstein equation was used to relate DPD length and time scales to SI units for comparison with USAXS results. The SI values depended on how large the spherical particles were (250 nm vs. 25 nm). Aggregation was studied by (a) computing the Structure Function and (b) quantifying the number of pairs of solid spheres formed. Solid aggregation was found to be enhanced by low shear rates. As the shear rate was increased, a transition shear region was manifested in which aggregation was inhibited and shear banding was observed. Aggregation was inhibited, and eventually eliminated, by further increases in the shear rate. The magnitude of the transition-region shear rate, γ̇_t, depended on the size of the solid particles, which was confirmed experimentally.
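The Stokes-Einstein relation used to map DPD scales onto SI units can be sketched as follows. The oil viscosity and temperature are assumed values, and the 250 nm vs. 25 nm sizes from the abstract are treated here as radii for illustration.

```python
import math

# Stokes-Einstein relation D = kB*T / (6*pi*eta*r), used in the abstract to
# relate DPD length and time scales to SI units for comparison with USAXS.
KB = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coefficient(radius_m, eta_pa_s, temp_k=298.15):
    """Translational diffusion coefficient of a sphere in a viscous liquid."""
    return KB * temp_k / (6.0 * math.pi * eta_pa_s * radius_m)

ETA_OIL = 0.05  # assumed liquid-oil viscosity, Pa*s (~50 mPa*s); illustrative

# The two particle sizes from the abstract, treated here as radii.
d_big = diffusion_coefficient(250e-9, ETA_OIL)
d_small = diffusion_coefficient(25e-9, ETA_OIL)
print(d_small / d_big)  # D scales as 1/r: smaller spheres diffuse faster
```

Because the diffusion coefficient sets the DPD-to-SI time conversion, the two particle sizes yield different SI time scales for the same simulation, which is why the reported SI values depend on particle size.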

  4. Measuring and explaining inflation persistence: disaggregate evidence on the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Babetskii, Ian; Coricelli, F.; Horváth, R.

    -, č. 1 (2007), s. 1-36 Keywords : inflation dynamics * inflation targeting * persistence Subject RIV: AH - Economics http://www.cnb.cz/m2export/sites/www.cnb.cz/en/research/research_publications/cnb_wp/download/cnbwp_2007_01.pdf

  5. Big IoT data mining for real-time energy disaggregation in buildings

    NARCIS (Netherlands)

    Mocanu, D.C.; Mocanu, E.; Nguyen, H.P.; Gibescu, M.; Liotta, A.

    2017-01-01

    In the smart grid context, the identification and prediction of building energy flexibility is a challenging open question, thus paving the way for new optimized behaviors from the demand side. At the same time, the latest smart meter developments allow us to monitor in real-time the power

  6. Hidden Breast Cancer Disparities in Asian Women: Disaggregating Incidence Rates by Ethnicity and Migrant Status

    Science.gov (United States)

    Quach, Thu; Horn-Ross, Pamela L.; Pham, Jane T.; Cockburn, Myles; Chang, Ellen T.; Keegan, Theresa H. M.; Glaser, Sally L.; Clarke, Christina A.

    2010-01-01

    Objectives. We estimated trends in breast cancer incidence rates for specific Asian populations in California to determine if disparities exist by immigrant status and age. Methods. To calculate rates by ethnicity and immigrant status, we obtained data for 1998 through 2004 cancer diagnoses from the California Cancer Registry and imputed immigrant status from Social Security Numbers for the 26% of cases with missing birthplace information. Population estimates were obtained from the 1990 and 2000 US Censuses. Results. Breast cancer rates were higher among US- than among foreign-born Chinese (incidence rate ratio [IRR] = 1.84; 95% confidence interval [CI] = 1.72, 1.96) and Filipina women (IRR = 1.32; 95% CI = 1.20, 1.44), but similar between US- and foreign-born Japanese women. US-born Chinese and Filipina women who were younger than 55 years had higher rates than did White women of the same age. Rates increased over time in most groups, as high as 4% per year among foreign-born Korean and US-born Filipina women. From 2000–2004, the rate among US-born Filipina women exceeded that of White women. Conclusions. These findings challenge the notion that breast cancer rates are uniformly low across Asians and therefore suggest a need for increased awareness, targeted cancer control, and research to better understand underlying factors. PMID:20147696
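The incidence rate ratio and 95% confidence interval reported above can be computed as sketched below; the case counts and person-years are invented for illustration and are not the registry data.

```python
import math

# Incidence rate ratio (IRR) with a Wald-type 95% confidence interval on the
# log scale: SE(log IRR) = sqrt(1/cases1 + 1/cases2) for Poisson counts.

def irr_with_ci(cases1, py1, cases2, py2, z=1.96):
    """IRR of group 1 vs group 2 with an approximate 95% CI."""
    irr = (cases1 / py1) / (cases2 / py2)
    se_log = math.sqrt(1.0 / cases1 + 1.0 / cases2)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Hypothetical counts (NOT the registry data): 400 cases in 2.0e5 person-years
# among US-born women vs 250 cases in 2.3e5 person-years among foreign-born.
irr, lo, hi = irr_with_ci(400, 2.0e5, 250, 2.3e5)
print(round(irr, 2), round(lo, 2), round(hi, 2))
```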

  7. Disaggregating Tropical Disease Prevalence by Climatic and Vegetative Zones within Tropical West Africa.

    Science.gov (United States)

    Beckley, Carl S; Shaban, Salisu; Palmer, Guy H; Hudak, Andrew T; Noh, Susan M; Futse, James E

    2016-01-01

    Tropical infectious disease prevalence is dependent on many socio-cultural determinants. However, rainfall and temperature frequently underlie overall prevalence, particularly for vector-borne diseases. As a result these diseases have increased prevalence in tropical as compared to temperate regions. Specific to tropical Africa, the tendency to incorrectly infer that tropical diseases are uniformly prevalent has been partially overcome with solid epidemiologic data. This finer resolution data is important in multiple contexts, including understanding risk, predictive value in disease diagnosis, and population immunity. We hypothesized that within the context of a tropical climate, vector-borne pathogen prevalence would significantly differ according to zonal differences in rainfall, temperature, relative humidity and vegetation condition. We then determined if these environmental data were predictive of pathogen prevalence. First we determined the prevalence of three major pathogens of cattle, Anaplasma marginale, Babesia bigemina and Theileria spp, in the three vegetation zones where cattle are predominantly raised in Ghana: Guinea savannah, semi-deciduous forest, and coastal savannah. The prevalence of A. marginale was 63%, 26% for Theileria spp and 2% for B. bigemina. A. marginale and Theileria spp. were significantly more prevalent in the coastal savannah as compared to either the Guinea savanna or the semi-deciduous forest, supporting acceptance of the first hypothesis. To test the predictive power of environmental variables, the data over a three year period were considered in best subsets multiple linear regression models predicting prevalence of each pathogen. Corrected Akaike Information Criteria (AICc) were assigned to the alternative models to compare their utility. Competitive models for each response were averaged using AICc weights. Rainfall was most predictive of pathogen prevalence, and EVI also contributed to A. marginale and B. bigemina prevalence. These findings support the utility of environmental data for understanding vector-borne disease epidemiology on a regional level within a tropical environment.
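The AICc comparison and Akaike-weight averaging described in this record can be sketched as follows; the residual sums of squares, sample size, and parameter counts below are hypothetical, not the study's fitted models.

```python
import math

# AICc for a least-squares model: AIC = n*ln(RSS/n) + 2k, plus the
# small-sample correction 2k(k+1)/(n-k-1). Akaike weights rescale the
# AICc differences into relative model support.

def aicc(rss, n, k):
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    best = min(scores)
    rel = [math.exp(-(s - best) / 2.0) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidate models predicting prevalence: (rss, n, k)
models = [(12.0, 36, 2),   # rainfall only
          (10.5, 36, 3),   # rainfall + EVI
          (10.4, 36, 4)]   # rainfall + EVI + humidity
scores = [aicc(rss, n, k) for rss, n, k in models]
weights = akaike_weights(scores)
print([round(w, 2) for w in weights])
```

Model-averaged predictions are then the weight-weighted sum of the competitive models' predictions, as in the abstract.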

  8. Disaggregating Tropical Disease Prevalence by Climatic and Vegetative Zones within Tropical West Africa.

    Directory of Open Access Journals (Sweden)

    Carl S Beckley

    Full Text Available Tropical infectious disease prevalence is dependent on many socio-cultural determinants. However, rainfall and temperature frequently underlie overall prevalence, particularly for vector-borne diseases. As a result these diseases have increased prevalence in tropical as compared to temperate regions. Specific to tropical Africa, the tendency to incorrectly infer that tropical diseases are uniformly prevalent has been partially overcome with solid epidemiologic data. This finer resolution data is important in multiple contexts, including understanding risk, predictive value in disease diagnosis, and population immunity. We hypothesized that within the context of a tropical climate, vector-borne pathogen prevalence would significantly differ according to zonal differences in rainfall, temperature, relative humidity and vegetation condition. We then determined if these environmental data were predictive of pathogen prevalence. First we determined the prevalence of three major pathogens of cattle, Anaplasma marginale, Babesia bigemina and Theileria spp, in the three vegetation zones where cattle are predominantly raised in Ghana: Guinea savannah, semi-deciduous forest, and coastal savannah. The prevalence of A. marginale was 63%, 26% for Theileria spp and 2% for B. bigemina. A. marginale and Theileria spp. were significantly more prevalent in the coastal savannah as compared to either the Guinea savanna or the semi-deciduous forest, supporting acceptance of the first hypothesis. To test the predictive power of environmental variables, the data over a three year period were considered in best subsets multiple linear regression models predicting prevalence of each pathogen. Corrected Akaike Information Criteria (AICc) were assigned to the alternative models to compare their utility. Competitive models for each response were averaged using AICc weights. Rainfall was most predictive of pathogen prevalence, and EVI also contributed to A. marginale and B. bigemina prevalence. These findings support the utility of environmental data for understanding vector-borne disease epidemiology on a regional level within a tropical environment.

  9. More than just money: patterns of disaggregated welfare expenditure in the enlarged Europe / Kati Kuitto

    Index Scriptorium Estoniae

    Kuitto, Kati

    2011-01-01

    The article compares the social policies of 28 European countries (including Estonia) on the basis of different characteristics (benefits and allowances for population groups, health and social services, overall social expenditure). Tables

  10. Dynamics of magnetic particles near a surface : model and experiments on field-induced disaggregation

    NARCIS (Netherlands)

    van Reenen, A.; Gao, Y.; de Jong, Arthur; Hulsen, M.A.; den Toonder, J.M.J.; Prins, M.W.J.

    2014-01-01

    Magnetic particles are widely used in biological research and bioanalytical applications. As the corresponding tools are progressively being miniaturized and integrated, the understanding of particle dynamics and the control of particles down to the level of single particles become important. Here,

  11. A Methodology for the Optimization of Disaggregated Space System Conceptual Designs

    Science.gov (United States)

    2015-06-18

    the Aqua and Terra satellites, part of the EOS. Additionally, the imaging payloads on the Geostationary Operational Environmental Satellites (GOES... [block diagram: EOS AM-1 (Terra), EOS PM-1 (Aqua), MODIS] The second cost model used for the OFUEGO example is the Small Satellite Cost...assessed under sparse or incomplete vegetation cover. For low density vegetation the equation for estimating soil moisture is: Equation 17

  12. Differential stress response of Saccharomyces hybrids revealed by monitoring Hsp104 aggregation and disaggregation.

    Science.gov (United States)

    Kempf, Claudia; Lengeler, Klaus; Wendland, Jürgen

    2017-07-01

    Proteotoxic stress may occur upon exposure of yeast cells to different stress conditions. The induction of stress response mechanisms is important for cells to adapt to changes in the environment and ensure survival. For example, during exposure to elevated temperatures the expression of heat shock proteins such as Hsp104 is induced in yeast. Hsp104 extracts misfolded proteins from aggregates to promote their refolding. We used an Hsp104-GFP reporter to analyze the stress profiles of Saccharomyces species hybrids. To this end a haploid S. cerevisiae strain, harboring a chromosomal HSP104-GFP under control of its endogenous promoter, was mated with stable haploids of S. bayanus, S. cariocanus, S. kudriavzevii, S. mikatae, S. paradoxus and S. uvarum. Stress response behaviors in these hybrids were followed over time by monitoring the appearance and dissolution of Hsp104-GFP foci upon heat shock. General stress tolerance of these hybrids was related to the growth rate detected during exposure to e.g. ethanol and oxidizing agents. We observed that hybrids were generally more resistant to high temperature and ethanol stress compared to their parental strains. Amongst the hybrids differential responses regarding the appearance of Hsp104-foci and the time required for dissolving these aggregates were observed. The S. cerevisiae/S. paradoxus hybrid, combining the two most closely related strains, performed best under these conditions. Copyright © 2017 Elsevier GmbH. All rights reserved.

  13. Ex-vessel Fish Price Database: Disaggregating Prices for Low-Priced Species from Reduction Fisheries

    Directory of Open Access Journals (Sweden)

    Travis C. Tai

    2017-11-01

    Full Text Available Ex-vessel fish prices are essential for comprehensive fisheries management and socioeconomic analyses for fisheries science. In this paper, we reconstructed a global ex-vessel price database with the following areas of improvement: (1) compiling reported prices explicitly listed as “for reduction to fishmeal and fish oil” to estimate prices separately for catches destined for fishmeal and fish oil production, and other non-direct human consumption purposes; (2) including 95% confidence limit estimates for each price estimation; and (3) increasing the number of input data and the number of price estimates to match the reconstructed Sea Around Us catch database. Our primary focus was to address the first area of improvement, as ex-vessel prices for catches destined for non-direct human consumption purposes were substantially overestimated, notably in countries with large reduction fisheries. For example, in Peru, 2010 landed values were estimated as 3.8 billion real 2010 USD when using separate prices for reduction fisheries, compared with 5.8 billion using previous methods with only one price for all end-products. This update of the price database has significant global and country-specific impacts on fisheries price and landed value trends over time.

  14. Non-renewable and renewable energy consumption and CO2 emissions in OECD countries: A comparative analysis

    International Nuclear Information System (INIS)

    Shafiei, Sahar; Salim, Ruhul A.

    2014-01-01

    This paper attempts to explore the determinants of CO2 emissions using the STIRPAT model and data from 1980 to 2011 for OECD countries. The empirical results show that non-renewable energy consumption increases CO2 emissions, whereas renewable energy consumption decreases CO2 emissions. Further, the results support the existence of an environmental Kuznets curve between urbanisation and CO2 emissions, implying that at higher levels of urbanisation, the environmental impact decreases. Therefore, the overall evidence suggests that policy makers should focus on urban planning as well as clean energy development to make substantial contributions to both reducing non-renewable energy use and mitigating climate change. - Highlights: • Examine the relationship between disaggregated energy consumption and CO2 emissions. • The STIRPAT econometric model is used for empirical analysis. • Investigate the popular environmental Kuznets curve (EKC) hypothesis between urbanisation and CO2 emissions. • Non-renewable energy consumption increases CO2 emissions whereas renewable energy consumption decreases CO2 emissions. • There is evidence of the existence of an environmental Kuznets curve between urbanisation and CO2 emissions

  15. Travel behaviour and the total activity pattern

    NARCIS (Netherlands)

    van der Hoorn, A.I.J.M.

    1979-01-01

    In the past years the behavioural basis of travel demand models has been considerably extended. In many cases, individual behaviour is taken as the starting point of the analysis. Conventional aggregate models have been complemented by disaggregate models. However, even in most disaggregate models there

  16. "Analyzing the Longitudinal K-12 Grading Histories of Entire Cohorts of Students: Grades, Data Driven Decision Making, Dropping out and Hierarchical Cluster Analysis"

    Directory of Open Access Journals (Sweden)

    Alex J. Bowers

    2010-05-01

    Full Text Available School personnel currently lack an effective method to pattern and visually interpret disaggregated achievement data collected on students as a means to help inform decision making. This study, through the examination of longitudinal K-12 teacher assigned grading histories for entire cohorts of students from a school district (n=188), demonstrates a novel application of hierarchical cluster analysis and pattern visualization in which all data points collected on every student in a cohort can be patterned, visualized and interpreted to aid in data driven decision making by teachers and administrators. Additionally, as a proof-of-concept study, overall schooling outcomes, such as student dropout or taking a college entrance exam, are identified from the data patterns and compared to past methods of dropout identification as one example of the usefulness of the method. Hierarchical cluster analysis correctly identified over 80% of the students who dropped out using the entire student grade history patterns from either K-12 or K-8.
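The clustering step can be illustrated with a minimal single-linkage agglomerative procedure on longitudinal grade vectors. The linkage and distance choices here are one common option, not necessarily the study's, and the per-year GPA trajectories are invented.

```python
# Minimal agglomerative (single-linkage) clustering of longitudinal grade
# vectors, illustrating the idea of grouping whole grading histories so
# that trajectory patterns (e.g. declining vs. stable) emerge.

def dist(a, b):
    """Euclidean distance between two grade trajectories."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, n_clusters):
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the two closest clusters
    return clusters

# Hypothetical per-year GPA histories: two low/declining, two high/stable.
histories = [[1.8, 1.5, 1.2], [2.0, 1.6, 1.1], [3.6, 3.7, 3.8], [3.5, 3.6, 3.4]]
groups = [sorted(c) for c in single_linkage(histories, 2)]
print(sorted(groups))  # declining trajectories group apart from stable ones
```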

  17. Mercury emissions from coal combustion in Silesia, analysis using geostatistics

    Science.gov (United States)

    Zasina, Damian; Zawadzki, Jaroslaw

    2015-04-01

    Data provided by the UNEP report on mercury [1] show that solid fuel combustion is a significant source of mercury emission to air. Silesia, located in southwestern Poland, is notably affected by mercury emission because it is one of the most industrialized Polish regions: a place of coal mining, metal production, stone mining, mineral quarrying and chemical industry. Moreover, Silesia is a region with high population density. People are exposed to severe risk from mercury emitted by both industrial and domestic sources (i.e. small household furnaces). Small sources contribute significantly to the total emission of mercury. Official and statistical analyses, including those prepared for international purposes [2], did not provide data on the spatial distribution of mercury emitted to air, although a number of analyses of the Polish public power and energy sector have been prepared so far [3; 4]. The distribution of locations exposed to mercury emission from small domestic sources is an interesting matter, merging information from various sources: statistical, economic and environmental. This paper presents a geostatistical approach to the distribution of mercury emission from coal combustion. The analysed data are organized in 2 independent levels: an individual, bottom-up approach derived from the national emission reporting system [5; 6], and top-down regional data calculated on the basis of official statistics [7]. The analysis presented will include a comparison of spatial distributions of mercury emission using data derived from the sources mentioned above. The investigation will cover three voivodeships of Poland: Lower Silesian, Opole and Silesian, using selected geostatistical methodologies including ordinary kriging [8]. References [1] UNEP. Global Mercury Assessment 2013: Sources, Emissions, Releases and Environmental Transport. UNEP Chemicals Branch, Geneva, Switzerland, 2013. [2] NCEM. Poland's Informative Inventory Report 2014. NCEM at the IEP-NRI, 2014. http

  18. A space-time hybrid hourly rainfall model for derived flood frequency analysis

    Directory of Open Access Journals (Sweden)

    U. Haberlandt

    2008-12-01

    Full Text Available For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network with recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series.

    First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities, a predefined profile is used. In the second step, a multi-site resampling procedure is applied on the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for some locations with short observation records in two mesoscale catchments of the Bode river basin located in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as in
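The single-site alternating renewal structure from the first step can be sketched as below. Exponential spell-length distributions, a spell-constant intensity profile, and independence between intensity and duration are simplifying assumptions; the paper uses seasonal univariate distributions, a predefined disaggregation profile, and a 2-copula for the intensity-duration dependence.

```python
import random

# Minimal single-site alternating renewal rainfall sketch: alternate dry and
# wet spells with random durations, then assign each wet spell a constant
# hourly intensity. All distributional choices here are simplifications.

def simulate_hourly_rain(n_hours, mean_dry=30.0, mean_wet=6.0,
                         mean_intensity=1.5, rng=None):
    """Return n_hours of synthetic hourly rainfall depths (mm/h)."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    series, wet = [], False         # start in a dry spell
    while len(series) < n_hours:
        mean_len = mean_wet if wet else mean_dry
        hours = max(1, round(rng.expovariate(1.0 / mean_len)))
        value = rng.expovariate(1.0 / mean_intensity) if wet else 0.0
        series.extend([value] * hours)
        wet = not wet               # alternate wet and dry spells
    return series[:n_hours]

rain = simulate_hourly_rain(24 * 365)  # one synthetic year
print(len(rain), round(sum(1 for r in rain if r > 0) / len(rain), 2))
```

The mean spell lengths control the simulated wet-hour fraction (here roughly mean_wet / (mean_wet + mean_dry)); calibrating those distributions to gauge data is what the paper's first step does.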

  19. Risk-based input-output analysis of influenza epidemic consequences on interdependent workforce sectors.

    Science.gov (United States)

    Santos, Joost R; May, Larissa; Haimar, Amine El

    2013-09-01

    Outbreaks of contagious diseases underscore the ever-looming threat of new epidemics. Compared to other disasters that inflict physical damage to infrastructure systems, epidemics can have more devastating and prolonged impacts on the population. This article investigates the interdependent economic and productivity risks resulting from epidemic-induced workforce absenteeism. In particular, we develop a dynamic input-output model capable of generating sector-disaggregated economic losses based on different magnitudes of workforce disruptions. An ex post analysis of the 2009 H1N1 pandemic in the national capital region (NCR) reveals the distribution of consequences across different economic sectors. Consequences are categorized into two metrics: (i) economic loss, which measures the magnitude of monetary losses incurred in each sector, and (ii) inoperability, which measures the normalized monetary losses incurred in each sector relative to the total economic output of that sector. For a simulated mild pandemic scenario in NCR, two distinct rankings are generated using the economic loss and inoperability metrics. Results indicate that the majority of the critical sectors ranked according to the economic loss metric comprise sectors that contribute the most to the NCR's gross domestic product (e.g., federal government enterprises). In contrast, the majority of the critical sectors generated by the inoperability metric include sectors that are involved with epidemic management (e.g., hospitals). Hence, prioritizing sectors for recovery necessitates consideration of the balance between economic loss, inoperability, and other objectives. Although applied specifically to the NCR, the proposed methodology can be customized for other regions. © 2012 Society for Risk Analysis.
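Dynamic inoperability input-output models of this kind are commonly written as the recursion q(t+1) = q(t) + K(A* q(t) + c*(t) - q(t)), where q is the vector of sector inoperabilities, A* the normalized interdependency matrix, c* the demand-side perturbation, and K a diagonal matrix of sector resilience coefficients. The two-sector sketch below uses invented numbers and is not the article's calibrated NCR model.

```python
# Sketch of the dynamic inoperability input-output recursion
#   q(t+1) = q(t) + K (A* q(t) + c*(t) - q(t))
# with hypothetical two-sector parameters: an initial inoperability shock
# from workforce absenteeism decays at rates set by K, propagating between
# sectors through A*.

def simulate_iim(a_star, k, q0, c_star, steps):
    """Iterate the recursion and return the final inoperability vector."""
    q = list(q0)
    for _ in range(steps):
        q = [qi + k[i] * (sum(a_star[i][j] * q[j] for j in range(len(q)))
                          + c_star[i] - qi)
             for i, qi in enumerate(q)]
    return q

a_star = [[0.0, 0.3], [0.2, 0.0]]  # hypothetical interdependencies
k = [0.5, 0.4]                     # hypothetical resilience coefficients
q0 = [0.10, 0.05]                  # initial inoperability from absenteeism
c_star = [0.0, 0.0]                # no sustained demand perturbation
q_final = simulate_iim(a_star, k, q0, c_star, steps=50)
print([round(x, 4) for x in q_final])  # inoperability decays toward zero
```

Sector-level economic loss would then follow by weighting each sector's inoperability path by that sector's output, which is how the two rankings in the abstract can diverge.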

  20. Energy consumption and economic growth: A causality analysis for Greece

    International Nuclear Information System (INIS)

    Tsani, Stela Z.

    2010-01-01

    This paper investigates the causal relationship between aggregated and disaggregated levels of energy consumption and economic growth for Greece for the period 1960-2006, applying the time-series methodology proposed by Toda and Yamamoto (1995). At the aggregated level of energy consumption, the empirical findings suggest a uni-directional causal relationship running from total energy consumption to real GDP. At disaggregated levels, the empirical evidence suggests a bi-directional causal relationship between industrial and residential energy consumption and real GDP; for transport energy consumption, however, no causal relationship is identified in either direction. The importance of these findings lies in their policy implications and their adoption in structural policies affecting energy consumption in Greece, suggesting that, in order to address energy import dependence and environmental concerns without hindering economic growth, emphasis should be put on the demand side and on energy efficiency improvements.

  1. Opening Perspectives from an Integrated Analysis: Language Attitudes, Place of Birth and Self-Identification

    Science.gov (United States)

    Lapresta-Rey, Cecilio; Huguet-Canalís, Ángel; Janés-Carulla, Judit

    2018-01-01

    There is a theoretical and empirical tradition demonstrating the influence of the place of birth and self-identification in the shaping of language attitudes. But very few works analyse their joint effects. The main aim of this study is to analyse both the disaggregated and joint impact of these variables on the shaping of attitudes towards…

  2. Decision analysis multicriteria analysis

    International Nuclear Information System (INIS)

    Lombard, J.

    1986-09-01

    The ALARA procedure covers a wide range of decisions, from the simplest to the most complex. For the simplest, engineering judgement is generally sufficient and a decision-aiding technique is not necessary. For some decisions, the available protection options may be compared on two or a few criteria (or attributes), such as protection cost and collective dose, and rather simple decision-aiding techniques, like cost-effectiveness analysis or cost-benefit analysis, are quite adequate. For the more complex decisions, involving numerous criteria, large uncertainties or qualitative judgement, these techniques, even extended cost-benefit analysis, are not recommended; appropriate techniques such as multi-attribute decision-aiding techniques are more relevant. There are many such techniques and it is not possible to present all of them, so only two broad categories of multi-attribute decision-aiding techniques are presented here: decision analysis and outranking analysis.

  3. A transaction cost analysis of micropayments in mobile commerce

    OpenAIRE

    Gille, Daniel

    2005-01-01

    Personalised, location-related and differentiated services in the mobile digital economy create a demand for suitable pricing models. In the case of disaggregated “microservices” (e.g., small digitalized information or service units), as well as for the acquisition of low-value physical goods, the deployment of micropayments seems appropriate. This paper analyzes the economic efficiency of marginal transaction amounts in the m-commerce area by applying the theoretical approach of transact...

  4. Association of Supply Type with Fecal Contamination of Source Water and Household Stored Drinking Water in Developing Countries: A Bivariate Meta-analysis.

    Science.gov (United States)

    Shields, Katherine F; Bain, Robert E S; Cronk, Ryan; Wright, Jim A; Bartram, Jamie

    2015-12-01

    Access to safe drinking water is essential for health. Monitoring access to drinking water focuses on water supply type at the source, but there is limited evidence on whether quality differences at the source persist in water stored in the household. We assessed the extent of fecal contamination at the source and in household stored water (HSW) and explored the relationship between contamination at each sampling point and water supply type. We performed a bivariate random-effects meta-analysis of 45 studies, identified through a systematic review, that reported either the proportion of samples free of fecal indicator bacteria and/or individual sample bacteria counts for source and HSW, disaggregated by supply type. Water quality deteriorated substantially between source and stored water. The mean percentage of contaminated samples (noncompliance) at the source was 46% (95% CI: 33, 60%), whereas mean noncompliance in HSW was 75% (95% CI: 64, 84%). Water supply type was significantly associated with noncompliance at the source; water from piped supplies had significantly lower odds of contamination compared with non-piped water both at the source (OR = 0.2; 95% CI: 0.1, 0.5) and in HSW (OR = 0.3; 95% CI: 0.2, 0.8), potentially due to residual chlorine. Piped water is less likely to be contaminated compared with other water supply types at both the source and in HSW. A focus on upgrading water services to piped supplies may help improve safety, including for those drinking stored water.
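The odds ratios quoted above come from 2x2 comparisons of contaminated versus compliant samples by supply type. A minimal sketch of the standard calculation, with a Woolf logit confidence interval, on made-up counts (not the study's data):

```python
import math

# Hypothetical 2x2 table: contaminated vs. clean samples for piped and
# non-piped sources (illustrative counts only).
piped_contaminated, piped_clean = 30, 70
nonpiped_contaminated, nonpiped_clean = 60, 40

or_ = (piped_contaminated * nonpiped_clean) / (piped_clean * nonpiped_contaminated)
se_log_or = math.sqrt(sum(1.0 / n for n in
                          (piped_contaminated, piped_clean,
                           nonpiped_contaminated, nonpiped_clean)))
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR below 1 with a CI excluding 1, as in the abstract, indicates lower contamination odds for piped supplies; the paper's pooled estimates additionally account for between-study heterogeneity via a bivariate random-effects model.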

  5. A multi-sectoral decomposition analysis of city-level greenhouse gas emissions: Case study of Tianjin, China

    International Nuclear Information System (INIS)

    Kang, Jidong; Zhao, Tao; Liu, Nan; Zhang, Xin; Xu, Xianshuo; Lin, Tao

    2014-01-01

    To better understand how city-level greenhouse gas (GHG) emissions have evolved, we performed a multi-sectoral decomposition analysis to disentangle the GHG emissions in Tianjin from 2001 to 2009. Five sectors were considered, including the agricultural, industrial, transportation, commercial and other sectors. An industrial sub-sector decomposition analysis was further performed in the six high-emission industrial branches. The results show that, for all five sectors in Tianjin, economic growth was the most important factor driving the increase in emissions, while energy efficiency improvements were primarily responsible for the decrease in emissions. In comparison, the influences from energy mix shift and emission coefficient changes were relatively marginal. The disaggregated decomposition in the industry further revealed that energy efficiency improvement has been widely achieved in the industrial branches, which was especially true for the Smelting and Pressing of Ferrous Metals and Chemical Raw Materials and Chemical Products sub-sectors. However, the energy efficiency declined in a few branches, e.g., Petroleum Processing and Coking Products. Moreover, the increased emissions related to industrial structure shift were primarily due to the expansion of Smelting and Pressing of Ferrous Metals; its share in the total industry output increased from 5.62% to 16.1% during the examined period. - Highlights: • We perform the LMDI analysis on the emissions in five sectors of Tianjin. • Economic growth was the most important factor for the emissions increase. • Energy efficiency improvements mainly contributed to the emission decrease. • Negative energy intensity effect was observed in most of the industrial sub-sectors. • Industrial structure change largely resulted in emission increase
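Decomposition analyses of this kind (the highlights name LMDI) commonly use the additive LMDI-I formula, in which log-mean weights split an emissions change exactly into activity, structure, and intensity effects. The two-sector example below uses invented numbers, not Tianjin's data:

```python
import math

def logmean(a, b):
    """Logarithmic mean, the LMDI-I weight; equals a in the limit a == b."""
    return (a - b) / (math.log(a) - math.log(b)) if a != b else a

# Base year 0 and final year T, two hypothetical sectors.
Q0, QT = 100.0, 150.0                 # total economic activity
S0, ST = [0.6, 0.4], [0.5, 0.5]       # sector shares of activity
I0, IT = [2.0, 1.0], [1.5, 0.9]       # emissions per unit of activity

E0 = sum(Q0 * s * i for s, i in zip(S0, I0))
ET = sum(QT * s * i for s, i in zip(ST, IT))

act = str_ = int_ = 0.0
for s0, sT, i0, iT in zip(S0, ST, I0, IT):
    w = logmean(QT * sT * iT, Q0 * s0 * i0)   # per-sector log-mean weight
    act  += w * math.log(QT / Q0)             # economic growth effect
    str_ += w * math.log(sT / s0)             # structural shift effect
    int_ += w * math.log(iT / i0)             # energy/emission intensity effect

print(E0, ET, act, str_, int_)
assert abs((act + str_ + int_) - (ET - E0)) < 1e-6   # exact additive decomposition
```

The pattern in the abstract corresponds to a large positive activity effect and a negative intensity effect, with the structure term capturing shifts such as the expansion of ferrous-metal smelting.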

  6. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
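The core BATEA idea, treating forcing (rainfall) error as an uncertain quantity to be inferred alongside model parameters rather than assuming rainfall is exact, can be caricatured in a few lines. The sketch below infers a runoff coefficient and a latent rainfall multiplier with a Metropolis random walk; the model form, priors, and all numbers are invented for illustration and are far simpler than the actual BATEA methodology:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": flow = k * (m * rain) + response noise, where m is a
# rainfall multiplier representing forcing error (both assumed values).
true_k, true_m = 0.4, 1.2
rain_obs = rng.gamma(2.0, 5.0, size=200)
flow_obs = true_k * true_m * rain_obs + rng.normal(0, 0.5, size=200)

def log_post(k, m):
    if k <= 0 or m <= 0:
        return -np.inf
    resid = flow_obs - k * m * rain_obs
    # Gaussian response-error likelihood (sd 0.5) + weak prior keeping m near 1.
    return -0.5 * np.sum(resid**2) / 0.25 - 0.5 * (m - 1.0)**2 / 0.04

samples, (k, m) = [], (1.0, 1.0)
lp = log_post(k, m)
for _ in range(20000):                       # random-walk Metropolis
    k_p, m_p = k + rng.normal(0, 0.02), m + rng.normal(0, 0.02)
    lp_p = log_post(k_p, m_p)
    if np.log(rng.random()) < lp_p - lp:
        k, m, lp = k_p, m_p, lp_p
    samples.append((k, m))

ks, ms = np.array(samples[5000:]).T          # discard burn-in
print(ks.mean(), ms.mean())
```

Only the product k*m is identified by the data; the prior on m resolves the ridge, which is exactly the kind of interaction between forcing and parameter uncertainty that BATEA is designed to expose.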

  7. Gender blind? An analysis of global public-private partnerships for health.

    Science.gov (United States)

    Hawkes, Sarah; Buse, Kent; Kapilashrami, Anuj

    2017-05-12

    The Global Public Private Partnerships for Health (GPPPH) constitute an increasingly central part of the global health architecture and carry both financial and normative power. Gender is an important determinant of health status, influencing differences in exposure to health determinants, health behaviours, and the response of the health system. We identified 18 GPPPH - defined as global institutions with a formal governance mechanism which includes both public and private for-profit sector actors - and conducted a gender analysis of each. Gender was poorly mainstreamed through the institutional functioning of the partnerships. Half of these partnerships had no mention of gender in their overall institutional strategy and only three partnerships had a specific gender strategy. Fifteen governing bodies had more men than women - up to a ratio of 5:1. Very few partnerships reported sex-disaggregated data in their annual reports or coverage/impact results. The majority of partnerships focused their work on maternal and child health and infectious and communicable diseases - none addressed non-communicable diseases (NCDs) directly, despite the strong role that gender plays in determining risk for the major NCD burdens. We propose two areas of action in response to these findings. First, GPPPH need to become serious in how they "do" gender; it needs to be mainstreamed through the regular activities, deliverables and systems of accountability. Second, the entire global health community needs to pay greater attention to tackling the major burden of NCDs, including addressing the gendered nature of risk. Given the inherent conflicts of interest in tackling the determinants of many NCDs, it is debatable whether the emergent GPPPH model will be an appropriate one for addressing NCDs.

  8. A strategy for reducing CO_2 emissions from buildings with the Kaya identity – A Swiss energy system analysis and a case study

    International Nuclear Information System (INIS)

    Mavromatidis, Georgios; Orehounig, Kristina; Richner, Peter; Carmeliet, Jan

    2016-01-01

    Within the general context of Greenhouse Gas (GHG) emissions reduction, decomposition analysis allows the quantification of the contribution of different factors to changes in emissions as well as the assessment of the effectiveness of policy and technology measures. The Kaya identity has been widely used for that purpose in order to disaggregate carbon emissions into various driving forces. In this paper, it is applied for the analysis of emissions resulting from energy use at three different scales. First, a decomposition analysis of the carbon emissions for the complete Swiss energy system is presented using the future projections from the Swiss Energy Strategy 2050. The Kaya identity is then applied to the Swiss building sector after it is adapted with factors that are more relatable to building parameters, such as floor area instead of Gross Domestic Product (GDP). Finally, the last level of analysis is a small scale community energy system for a unique Swiss village that aims to significantly reduce its emissions. An energy strategy is developed and its effectiveness is assessed with the adapted Kaya identity and benchmarked against the Swiss average values. The presented method demonstrates how the performance of buildings under various retrofitting scenarios can be benchmarked against future emission targets. - Highlights: • The Kaya identity is used to perform multi-scale emission decomposition analysis. • The original Kaya identity is updated with building-related parameters. • The main drivers of emissions reduction of the Swiss building stock are determined. • An energy strategy to transform the building stock of a Swiss village is developed. • The performance of efficiency measures is benchmarked using the Kaya identity.
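A building-adapted Kaya identity of the kind described above multiplies a small set of driving factors, e.g. emissions = floor area x (energy / floor area) x (CO2 / energy). A toy calculation with assumed numbers shows how a retrofit scenario enters through the energy-intensity factor (the figures are invented, not the Swiss data):

```python
# All figures below are assumed for illustration.
floor_area = 500e6          # m^2 of heated floor area
energy_intensity = 120.0    # kWh per m^2 per year
carbon_factor = 0.15        # kg CO2 per kWh

baseline = floor_area * energy_intensity * carbon_factor / 1e9          # Mt CO2/yr
retrofit = floor_area * (0.7 * energy_intensity) * carbon_factor / 1e9  # 30% cut

print(baseline, retrofit)
```

Because the identity is multiplicative, a 30% reduction in energy intensity maps one-to-one to a 30% emissions reduction when the other factors are held fixed; a decomposition analysis quantifies how much each factor contributed when all of them change at once.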

  9. Performance analysis

    International Nuclear Information System (INIS)

    2008-05-01

    This book reviews the national energy and resource technology development programme through performance analysis. It covers the programme's divisions and definitions, the current state of support, the substance of the national basic plan for energy and resource technology development, the selection of analysis indices, performance analysis results by index, survey results, and the analysis and appraisal of the energy and resource technology development programme in 2007.

  10. Instrumental analysis

    International Nuclear Information System (INIS)

    Jae, Myeong Gi; Lee, Won Seong; Kim, Ha Hyeok

    1989-02-01

    This book gives a description of electronic engineering, covering circuit elements and devices, circuit analysis, and digital logic circuits; electrochemical methods such as conductometry, potentiometry, and current measurement; spectrochemical analysis using electromagnetic radiation, optical components, absorption spectroscopy, X-ray analysis, and atomic absorption spectrometry, with references; chromatography, including gas chromatography and liquid chromatography; and automated analysis, covering control-system evaluation, automated analysis systems, and references.

  11. Instrumental analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jae, Myeong Gi; Lee, Won Seong; Kim, Ha Hyeok

    1989-02-15

    This book gives a description of electronic engineering, covering circuit elements and devices, circuit analysis, and digital logic circuits; electrochemical methods such as conductometry, potentiometry, and current measurement; spectrochemical analysis using electromagnetic radiation, optical components, absorption spectroscopy, X-ray analysis, and atomic absorption spectrometry, with references; chromatography, including gas chromatography and liquid chromatography; and automated analysis, covering control-system evaluation, automated analysis systems, and references.

  12. Derived flood frequency analysis using different model calibration strategies based on various types of rainfall-runoff data - a comparison

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2013-08-01

    Derived flood frequency analysis allows design floods to be estimated with hydrological modelling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices about precipitation input, discharge output and consequently regarding the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets. Event based and continuous observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in Northern Germany with the hydrological model HEC-HMS. The results show that: (i) the same type of precipitation input data should be used for calibration and application of the hydrological model, (ii) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, (iii) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the application for derived flood frequency analysis.

  13. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests to calibrate a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the

  14. Instrumental analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Jae; Seo, Seong Gyu

    1995-03-15

    This textbook deals with instrumental analysis in nine chapters: an introduction to analytical chemistry, covering the process of analysis and the types and forms of analysis; electrochemistry, covering basic theory, potentiometry, and conductometry; electromagnetic radiation and optical components, with an introduction and applications; ultraviolet and visible spectrophotometry; atomic absorption spectrophotometry, with an introduction, flame emission spectrometry, and plasma emission spectrometry; and further chapters on infrared spectrophotometry, X-ray spectrophotometry and mass spectrometry, chromatography, and other instrumental methods such as radiochemistry.

  15. Instrumental analysis

    International Nuclear Information System (INIS)

    Kim, Seung Jae; Seo, Seong Gyu

    1995-03-01

    This textbook deals with instrumental analysis in nine chapters: an introduction to analytical chemistry, covering the process of analysis and the types and forms of analysis; electrochemistry, covering basic theory, potentiometry, and conductometry; electromagnetic radiation and optical components, with an introduction and applications; ultraviolet and visible spectrophotometry; atomic absorption spectrophotometry, with an introduction, flame emission spectrometry, and plasma emission spectrometry; and further chapters on infrared spectrophotometry, X-ray spectrophotometry and mass spectrometry, chromatography, and other instrumental methods such as radiochemistry.

  16. The impact of eliminating within-country inequality in health coverage on maternal and child mortality: a Lives Saved Tool analysis

    Directory of Open Access Journals (Sweden)

    Adrienne Clermont

    2017-11-01

    Background: Inequality in healthcare across population groups in low-income countries is a growing topic of interest in global health. The Lives Saved Tool (LiST), which uses health intervention coverage to model maternal, neonatal, and child health outcomes such as mortality rates, can be used to analyze the impact of within-country inequality. Methods: Data from nationally representative household surveys (98 surveys conducted between 1998 and 2014), disaggregated by wealth quintile, were used to create a LiST analysis that models the impact of scaling up health intervention coverage for the entire country from the national average to the rate of the top wealth quintile (richest 20% of the population). Interventions for which household survey data are available were used as proxies for other interventions that are not measured in surveys, based on co-delivery of intervention packages. Results: For the 98 countries included in the analysis, 24–32% of child deaths (including 34–47% of neonatal deaths and 16–19% of post-neonatal deaths) could be prevented by scaling up national coverage of key health interventions to the level of the top wealth quintile. On average, the interventions with the most unequal coverage rates across wealth quintiles were those related to childbirth in health facilities and to water and sanitation infrastructure; the most equally distributed were those delivered through community-based mass campaigns, such as vaccines, vitamin A supplementation, and bednet distribution. Conclusions: LiST is a powerful tool for exploring the policy and programmatic implications of within-country inequality in low-income, high-mortality-burden countries. An "Equity Tool" app has been developed within the software to make this type of analysis easily accessible to users.
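The coverage-scaling thought experiment can be illustrated with the effectiveness-times-coverage arithmetic typical of LiST-style models: observed deaths reflect the intervention's current coverage, and raising coverage to the top quintile's level averts a share of the residual deaths. All numbers below are invented for illustration; the real tool models many interventions, cause structures, and age bands:

```python
# Hypothetical single-intervention example (not LiST's actual parameters).
deaths = 100_000                 # observed baseline child deaths
effectiveness = 0.6              # fraction of targeted deaths the intervention prevents
coverage_national = 0.50         # current national coverage
coverage_top_quintile = 0.80     # coverage in the richest 20%

# Deaths without the intervention = deaths / (1 - eff * cov); raising coverage
# by (cov_top - cov_nat) averts eff * gap of those, hence:
averted = deaths * effectiveness * (coverage_top_quintile - coverage_national) \
          / (1 - effectiveness * coverage_national)
print(round(averted), "deaths averted")
```

Dividing the averted deaths by the baseline gives the "preventable fraction" reported in the abstract (here roughly a quarter, comparable in magnitude to the 24-32% range found across the 98 countries).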

  17. Heat recovery with heat pumps in non-energy intensive industry: A detailed bottom-up model analysis in the French food and drink industry

    International Nuclear Information System (INIS)

    Seck, Gondia Sokhna; Guerassimoff, Gilles; Maïzi, Nadia

    2013-01-01

    Highlights: • First bottom-up energy model for NEI at 4-digit level of NACE for energy analysis. • Energy end-use modelling due to the unsuitability of end-product/process approach. • Analysis of heat recovery with HP on industrial processes up to 2020 in French F and D. • Energy consumption drops by 10% compared to 2001 and emissions by 9% compared to 1990. • Results only achieved at heat temperatures below 100 °C, concentrated in 1/3 of F and D sectors. - Abstract: Rising energy prices and environmental impacts inevitably encourage industrial firms to get involved in promoting energy efficiency and emissions reductions. To achieve this goal, we have developed the first detailed bottom-up energy model for Non-Energy Intensive industry (NEI) to study its global energy efficiency and the potential for CO2 emissions reduction at a 4-digit level of NACE classification. The latter, which is generally neglected in energy analyses, is expected to play an important role in reducing industry energy intensity in the long term due to its economic and energy significance and relatively high growth rate. In this paper, the modelling of NEI is done by energy end-use owing to the unsuitability of the end-product/process approach used in the Energy Intensive industry modelling. As an example, we analysed the impact of heat recovery with heat pumps (HP) on industrial processes up to 2020 on energy savings and CO2 emissions reductions in the French food and drink industry (F and D), the biggest NEI sector. The results showed HP could be an excellent and very promising energy recovery technology. For further detailed analysis, the depiction of HP investment cost payments is given per temperature range for each F and D subsector. This model constitutes a useful decision-making tool for assessing potential energy savings from investing in efficient technologies at the highest level of disaggregation, as well as a better subsectoral screening

  18. Physical inactivity as a policy problem: applying a concept from policy analysis to a public health issue.

    Science.gov (United States)

    Rütten, Alfred; Abu-Omar, Karim; Gelius, Peter; Schow, Diana

    2013-03-07

    Despite the recent rapid development of policies to counteract physical inactivity (PI), only a small number of systematic analyses on the evolution of these policies exists. In this article we analyze how PI, as a public health issue, "translates" into a policy-making issue. First, we discuss why PI has become an increasingly important public health issue during the last two decades. We then follow Guy Peters and conceptualize PI as a "policy problem" that has the potential to be linked to policy instruments and policy impact. Analysis indicates that PI is a policy problem that i) is chronic in nature; ii) involves a high degree of political complexity; iii) can be disaggregated into smaller scales; iv) is addressed through interventions that can be difficult to "sell" to the public when their benefits are not highly divisible; v) cannot be solved by government spending alone; vi) must be addressed through a broad scope of activities; and vii) involves interdependencies among both multiple sectors and levels of government.We conclude that the new perspective on PI proposed in this article might be useful and important for i) describing and mapping policies to counteract PI in different contexts; ii) evaluating whether or not existing policy instruments are appropriate to the policy problem of PI, and iii) explaining the factors and processes that underlie policy development and implementation. More research is warranted in all these areas. In particular, we propose to focus on comparative analyses of how the problem of PI is defined and tackled in different contexts, and on the identification of truly effective policy instruments that are designed to "solve" the PI policy problem.

  19. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  20. Real analysis

    CERN Document Server

    McShane, Edward James

    2013-01-01

    This text surveys practical elements of real function theory, general topology, and functional analysis. Discusses the maximality principle, the notion of convergence, the Lebesgue-Stieltjes integral, function spaces and harmonic analysis. Includes exercises. 1959 edition.

  1. CSF analysis

    Science.gov (United States)

    Cerebrospinal fluid analysis ... Analysis of CSF can help detect certain conditions and diseases. All of the following can be, but ... An abnormal CSF analysis result may be due to many different causes, ... Encephalitis (such as West Nile and Eastern Equine) Hepatic ...

  2. Semen analysis

    Science.gov (United States)

    Semen analysis measures the amount and quality of a man's semen and sperm. Semen is ...

  3. Functional analysis

    CERN Document Server

    Kantorovich, L V

    1982-01-01

    Functional Analysis examines trends in functional analysis as a mathematical discipline and the ever-increasing role played by its techniques in applications. The theory of topological vector spaces is emphasized, along with the applications of functional analysis to applied analysis. Some topics of functional analysis connected with applications to mathematical economics and control theory are also discussed. Comprised of 18 chapters, this book begins with an introduction to the elements of the theory of topological spaces, the theory of metric spaces, and the theory of abstract measure space

  4. Public Interest Energy Research (PIER) Program. Final Project Report. California Energy Balance Update and Decomposition Analysis for the Industry and Building Sectors

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hasanbeigi, Ali [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2010-12-01

    This report on the California Energy Balance version 2 (CALEB v2) database documents the latest update and improvements to CALEB version 1 (CALEB v1) and provides a complete picture of how energy is supplied and consumed in the State of California. The CALEB research team at Lawrence Berkeley National Laboratory (LBNL) performed the research and analysis described in this report. CALEB manages highly disaggregated data on energy supply, transformation, and end-use consumption for about 40 different energy commodities, from 1990 to 2008. This report describes in detail California's energy use from supply through end-use consumption as well as the data sources used. The report also analyzes trends in energy demand for the "Manufacturing" and "Building" sectors. Decomposition analysis of energy consumption combined with measures of the activity driving that consumption quantifies the effects of factors that shape energy consumption trends. The study finds that a decrease in energy intensity has had a very significant impact on reducing energy demand over the past 20 years. The largest impact can be observed in the industry sector, where energy demand would have increased by 358 trillion British thermal units (TBtu) if subsectoral energy intensities had remained at 1997 levels. Instead, energy demand actually decreased by 70 TBtu. In the "Building" sector, combined results from the "Service" and "Residential" subsectors suggest that energy demand would have increased by 264 TBtu (121 TBtu in the "Services" sector and 143 TBtu in the "Residential" sector) during the same period, 1997 to 2008. However, energy demand increased at a lesser rate, by only 162 TBtu (92 TBtu in the "Services" sector and 70 TBtu in the "Residential" sector). These energy intensity reductions can be indicative of energy-efficiency improvements during the past 10 years. The research presented in this report provides a basis for developing an energy-efficiency performance index to measure

  5. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  6. Demand modelling of passenger air travel: An analysis and extension, volume 2

    Science.gov (United States)

    Jacobson, I. D.

    1978-01-01

    Previous intercity travel demand models are evaluated in terms of their ability to predict air travel in a useful way, along with the need for disaggregation in the approach to demand modelling. The viability of incorporating non-conventional factors (i.e., non-econometric factors, as opposed to time and cost) in travel demand forecasting models is determined. The investigation of existing models is carried out in order to provide insight into their strong points and shortcomings. The model is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach, both pedagogically and mathematically, is discussed. In addition, this volume contains two appendices which should prove useful to the non-specialist in the area.

  7. Application of dynamic programming for the analysis of complex water resources systems : a case study on the Mahaweli River basin development in Sri Lanka

    NARCIS (Netherlands)

    Kularathna, M.D.U.P.

    1992-01-01

    The technique of Stochastic Dynamic Programming (SDP) is ideally suited for operation policy analyses of water resources systems. However, SDP has a major drawback, aptly termed its "curse of dimensionality".

    Aggregation/Disaggregation techniques based on SDP and
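
    The SDP idea referred to above can be sketched for a single reservoir with a discretized storage state and a probabilistic inflow; all states, probabilities and the reward function below are invented. With several reservoirs the state space is the product of each reservoir's storage grid, which is exactly the curse of dimensionality.

```python
# Toy stochastic dynamic program for one reservoir: backward induction over a
# finite horizon, maximizing expected release benefit (all numbers invented).

STORAGE = range(4)           # discretized storage levels 0..3
INFLOWS = {0: 0.5, 1: 0.5}   # inflow value -> probability
HORIZON = 3

def reward(release):
    return float(release)    # hypothetical benefit per unit released

def solve():
    value = {s: 0.0 for s in STORAGE}   # terminal value function
    policy = {}
    for t in reversed(range(HORIZON)):
        new_value = {}
        for s in STORAGE:
            best = None
            for release in range(s + 1):          # release up to current storage
                expected = 0.0
                for q, p in INFLOWS.items():      # expectation over inflows
                    s_next = min(max(STORAGE), s - release + q)
                    expected += p * (reward(release) + value[s_next])
                if best is None or expected > best[0]:
                    best = (expected, release)
            new_value[s] = best[0]
            policy[(t, s)] = best[1]              # optimal release at (t, s)
        value = new_value
    return value, policy

v, pol = solve()
```

    In the final stage the optimal policy empties the reservoir (e.g. `pol[(2, 3)] == 3`), since no future value remains.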

  8. Road Safety Data, Collection, Transfer and Analysis DaCoTa. Workpackage 3, Data Warehouse: Deliverable 3.1: Annual statistical report 2010.

    NARCIS (Netherlands)

    Brandstaetter, C. Evgenikos, P. Yannis, G. Papantoniou, P. Argyropoulou E. Broughton, J. Knowles, J. Reurings, M. Vis, M. Pace, J.F. López de Cozar, E. Pérez-Fuster, P. Sanmartín J. & Haddak, M.

    2012-01-01

    The CARE database brings together the disaggregate details of road accidents and casualties across Europe, by combining the national accident databases that are maintained by all EU member states. Access to the CARE database is restricted, however, so it is important that a comprehensive range of

  9. Semiotic Analysis.

    Science.gov (United States)

    Thiemann, Francis C.

    Semiotic analysis is a method of analyzing signs (e.g., words) to reduce non-numeric data to their component parts without losing essential meanings. Semiotics dates back to Aristotle's analysis of language; it was much advanced by nineteenth-century analyses of style and logic and by Whitehead and Russell's description in this century of the role…

  10. Dimensional Analysis

    Indian Academy of Sciences (India)

    Dimensional analysis is a useful tool which finds important applications in physics and engineering. It is most effective when there exist a maximal number of dimensionless quantities constructed out of the relevant physical variables. Though a complete theory of dimen- sional analysis was developed way back in 1914 in a.

  11. Job Analysis

    OpenAIRE

    Bravená, Helena

    2009-01-01

    This bachelor thesis deals with the importance of job analysis for personnel activities in the company. The aim of this work is to find the most suitable method of job analysis in a particular enterprise, and then to create descriptions and specifications for each job.

  12. Capital-energy complementarity in aggregate energy-economic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, W.W.

    1979-10-01

    The interplay between capital and energy will affect the outcome of energy-policy initiatives. A static model clarifies the interpretation of the conflicting empirical evidence on the nature of this interplay. This resolves an apparent conflict between engineering and economic interpretations and points to an additional ambiguity that can be resolved by distinguishing between policy issues at aggregated and disaggregated levels. Restrictions on aggregate energy use should induce reductions in the demand for capital and exacerbate the economic impacts of the energy policy. 32 references.

  13. Prevalence of Smokeless Tobacco among Low Socioeconomic Populations: A Cross-Sectional Analysis.

    Directory of Open Access Journals (Sweden)

    Mohammad Nurul Azam

    Full Text Available Cost, social acceptability and non-stringent regulations pertaining to smokeless tobacco (SLT) product sales have made people choose and continue using SLT. If disaggregated data on smokeless forms and smoked practices of tobacco are reviewed, the incidence of SLT remains static. There is a strong positive correlation of SLT intake with the occurrence of adverse cardiovascular disease, particularly in low socioeconomic populations. To investigate the prevalence of smokeless tobacco, the influences on its initiation and the risk factors associated with the practice among lower socioeconomic populations of Bangladesh, we explore the utilization of SLT among lower socioeconomic populations in an industrialized zone of Bangladesh. A cross-sectional analysis using both quantitative and categorical approaches was employed. Using a systematic random sampling method, four focus group discussions (FGDs) were conducted and 459 participants were interviewed. A multiple logistic regression model was applied to distinguish the significant factors among the SLT users. Almost fifty percent of the respondents initiated SLT usage at the age of 15-24 years, and another 22 percent of respondents were smoking and using SLT concurrently. The bulk of the women respondents used SLT during their pregnancy. Nearly twenty-five percent of the respondents had tried to quit the practice of SLT, and one-quarter had a plan to quit SLT in the future. More than twenty percent of respondents were suffering from dental decay. Noteworthy correlations were found by gender (p<0.01) and by suffering from SLT-related disease (p<0.05). The multiple logistic regression analysis suggested that males were 2.7 times more knowledgeable than females (p<0.01) about the adverse health effects of SLT usage. Respondents suffering from SLT-related diseases were 3.7 times more knowledgeable about the effects of the practice of SLT than respondents without such diseases (p<0.01). Regarding the knowledge
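
    The logistic-regression results above are reported as odds ratios; for a single binary factor, the unadjusted odds ratio is just the cross-product ratio of a 2x2 table. The counts below are invented for illustration, not the study's data.

```python
# Unadjusted odds ratio from a hypothetical 2x2 table (invented counts):
#                 knowledgeable   not knowledgeable
#   male                60               40
#   female              35               65
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Cross-product ratio (a*d)/(b*c) of a 2x2 table."""
    return (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)

print(round(odds_ratio(60, 40, 35, 65), 2))  # → 2.79
```

    A multiple logistic regression, as used in the study, additionally adjusts such ratios for the other covariates in the model.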

  14. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Science.gov (United States)

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge in DR implementation. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing, a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet and one of the emerging trends in engineering technology, addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can re-shape the loads of home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption of 13.97% is achieved. PMID:29702607
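
    In the spirit of the constrained-PSO scheduling described above, a toy sketch: one shiftable appliance's start slot is optimized against time-varying prices, with the comfort constraint handled as a penalty term. The prices, power rating, comfort window and PSO parameters are all invented; this is not the paper's algorithm or data.

```python
# Constrained PSO for scheduling one shiftable appliance (toy parameters).
import random

PRICES = [0.10, 0.08, 0.05, 0.05, 0.07, 0.12]  # $/kWh for 6 time slots
RUNTIME, POWER = 2, 1.5                        # run length (slots), load (kW)
COMFORT = range(1, 5)                          # acceptable start slots

def cost(start):
    """Energy cost of a start slot, plus a penalty outside the comfort window."""
    slot = int(round(start)) % (len(PRICES) - RUNTIME + 1)
    energy_cost = sum(PRICES[slot + h] * POWER for h in range(RUNTIME))
    penalty = 0.0 if slot in COMFORT else 1.0
    return energy_cost + penalty

def pso(iters=60, swarm=10, seed=1):
    random.seed(seed)
    hi = len(PRICES) - RUNTIME
    xs = [random.uniform(0, hi) for _ in range(swarm)]   # particle positions
    vs = [0.0] * swarm                                   # velocities
    pb = xs[:]                                           # personal bests
    gb = min(xs, key=cost)                               # global best
    for _ in range(iters):
        for i in range(swarm):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * random.random() * (pb[i] - xs[i])
                     + 1.5 * random.random() * (gb - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], 0.0), float(hi))  # clamp to bounds
            if cost(xs[i]) < cost(pb[i]):
                pb[i] = xs[i]
            if cost(xs[i]) < cost(gb):
                gb = xs[i]
    return int(round(gb))

best_start = pso()
```

    With these toy prices, brute force shows the cheapest comfortable start is slot 2; a real scheduler would add many appliances, inclining block rates and user-specific comfort models.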

  15. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Directory of Open Access Journals (Sweden)

    Yu-Hsiu Lin

    2018-04-01

    Full Text Available The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge in DR implementation. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing, a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet and one of the emerging trends in engineering technology, addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can re-shape the loads of home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption of 13.97% is achieved.

  16. Unpacking Indonesia's independent oil palm smallholders : An actor-disaggregated approach to identifying environmental and social performance challenges

    NARCIS (Netherlands)

    Jelsma, Idsert; Schoneveld, G. C.; Zoomers, A.; van Westen, A. C.M.

    2017-01-01

    Processes of globalization have generated new opportunities for smallholders to participate in profitable global agro-commodity markets. This participation, however, is increasingly shaped by differentiated capabilities to comply with emerging public and private quality and safety standards. The

  17. Relations between Parent Psychopathology, Family Functioning, and Adolescent Problems in Substance-Abusing Families: Disaggregating the Effects of Parent Gender

    Science.gov (United States)

    Burstein, Marcy; Stanger, Catherine; Dumenci, Levent

    2012-01-01

    The present study: (1) examined relations between parent psychopathology and adolescent internalizing problems, externalizing problems, and substance use in substance-abusing families; and (2) tested family functioning problems as mediators of these relations. Structural equation modeling was used to estimate the independent effects of parent…

  18. Disaggregation and separation dynamics of magnetic particles in a microfluidic flow under an alternating gradient magnetic field

    Science.gov (United States)

    Cao, Quanliang; Li, Zhenhao; Wang, Zhen; Qi, Fan; Han, Xiaotao

    2018-05-01

    How to prevent particle aggregation during magnetic separation is of great importance for high-purity separation, yet it is a challenging issue in practice. In this work, we report a novel method to solve this problem and improve the selectivity of size-based separation by use of a gradient alternating magnetic field. The specially designed magnetic field is capable of dynamically adjusting the magnetic field direction without changing the direction of the magnetic gradient force acting on the particles. Using direct numerical simulations, we show that particles within a certain center-to-center distance are inseparable under a gradient static magnetic field, since they are easily aggregated and then start moving together. By contrast, it has been demonstrated that alternating repulsive and attractive interaction forces between particles can be generated to avoid the formation of aggregates when an alternating gradient magnetic field with a given alternating frequency is applied, enabling these particles to be continuously separated based on size-dependent properties. The proposed magnetic separation method and simulation results are significant for a fundamental understanding of particle dynamic behavior and for improving separation efficiency.
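
    The size dependence that makes this separation work can be seen from the standard magnetophoresis estimate: for a weakly magnetic sphere under Stokes drag, terminal velocity scales with the radius squared. The material and field values below are arbitrary illustrative numbers, not the paper's simulation parameters.

```python
# Magnetophoretic terminal velocity of a sphere (SI units):
# force F = V * dchi * B * gradB / mu0 with V = (4/3) pi r^3,
# balanced against Stokes drag 6 pi eta r v  =>  v ∝ r^2.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T·m/A)

def terminal_velocity(radius, dchi, eta, b, grad_b):
    """Steady magnetophoretic velocity of a sphere in a viscous fluid."""
    force = (4 / 3) * math.pi * radius**3 * dchi * b * grad_b / MU0
    return force / (6 * math.pi * eta * radius)  # Stokes drag balance

v_small = terminal_velocity(0.5e-6, 0.1, 1e-3, 0.5, 50.0)
v_large = terminal_velocity(2.0e-6, 0.1, 1e-3, 0.5, 50.0)
# velocity scales as r^2: a 4x larger particle moves 16x faster
```

    This r-squared scaling is what an alternating field preserves while suppressing aggregation: the gradient force (and hence the size-sorted velocities) keeps its direction even as the field direction alternates.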

  19. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    Science.gov (United States)

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge in DR implementation. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing, a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet and one of the emerging trends in engineering technology, addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can re-shape the loads of home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption of 13.97% is achieved.

  20. Self-assembled lipoprotein based gold nanoparticles for detection and photothermal disaggregation of β-amyloid aggregates

    KAUST Repository

    Martins, P. A. T.; Alsaiari, Shahad K.; Julfakyan, Khachatur; Nie, Z.; Khashab, Niveen M.

    2017-01-01

    We present a reconstituted lipoprotein-based nanoparticle platform comprising a curcumin fluorescent motif and an NIR responsive gold core. This multifunctional nanosystem is successfully used for aggregation-dependent fluorescence detection and photothermal disassembly of insoluble amyloid aggregates.

  1. Long term building energy demand for India: Disaggregating end use energy services in an integrated assessment modeling framework

    International Nuclear Information System (INIS)

    Chaturvedi, Vaibhav; Eom, Jiyong; Clarke, Leon E.; Shukla, Priyadarshi R.

    2014-01-01

    With increasing population, income, and urbanization, meeting the energy service demands for the building sector will be a huge challenge for Indian energy policy. Although there is broad consensus that the Indian building sector will grow and evolve over the coming century, there is little understanding of the potential nature of this evolution over the longer term. The present study uses a technologically detailed, service based building energy model nested in the long term, global, integrated assessment framework, GCAM, to produce scenarios of the evolution of the Indian buildings sector up through the end of the century. The results support the idea that as India evolves toward developed country per-capita income levels, its building sector will largely evolve to resemble those of the currently developed countries (heavy reliance on electricity both for increasing cooling loads and a range of emerging appliance and other plug loads), albeit with unique characteristics based on its climate conditions (cooling dominating heating and even more so with climate change), on fuel preferences that may linger from the present (for example, a preference for gas for cooking), and vestiges of its development path (including remnants of rural poor that use substantial quantities of traditional biomass). - Highlights: ► Building sector final energy demand in India will grow to over five times by century end. ► Space cooling and appliance services will grow substantially in the future. ► Energy service demands will be met predominantly by electricity and gas. ► Urban centers will face huge demand for floor space and building energy services. ► Carbon tax policy will have little effect on reducing building energy demands

  2. Self-assembled lipoprotein based gold nanoparticles for detection and photothermal disaggregation of β-amyloid aggregates

    KAUST Repository

    Martins, P. A. T.

    2017-01-10

    We present a reconstituted lipoprotein-based nanoparticle platform comprising a curcumin fluorescent motif and an NIR responsive gold core. This multifunctional nanosystem is successfully used for aggregation-dependent fluorescence detection and photothermal disassembly of insoluble amyloid aggregates.

  3. An economic analysis of migration in Mexico.

    Science.gov (United States)

    Greenwood, M J; Ladman, J R

    1978-07-01

    This paper analyzes internal migration in Mexico over the 1960-70 period. A model of the determinants of migration is specified and estimated for aggregated interstate migration flows. Results show that distance serves as a significant deterrent to migration, that higher destination earning levels are attractive to migrants, and that regions with high unemployment rates experience lower rates of inmigration. An unanticipated finding is that regions with higher earning levels have greater rates of outmigration. The data are disaggregated to examine separate migration relationships for each state. The results are that distance is a lesser deterrent for those migrants with more accessible alternatives, that higher earning levels reduce the deterring effects of distance, and that regions with higher earning levels have lower associated elasticities of migration. It is concluded that economic factors have played a crucial role in internal migration and thus in the changing occupational and geographic structure of the Mexican labor force.
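
    The determinants summarized above, distance as a deterrent and destination earnings as a pull, fit naturally into a gravity-style specification. The functional form and parameter values below are a hypothetical illustration, not the paper's estimated model.

```python
# Hypothetical gravity-style migration flow: origin/destination populations,
# destination earnings as a pull factor, distance as a deterrent.
def predicted_flow(pop_o, pop_d, earn_d, dist, k=1.0, alpha=1.0, beta=1.5):
    """Flow ~ k * P_o * P_d * earnings_d^alpha / distance^beta."""
    return k * pop_o * pop_d * earn_d**alpha / dist**beta

near = predicted_flow(1.0, 2.0, 3.0, dist=100.0)
far = predicted_flow(1.0, 2.0, 3.0, dist=400.0)
# quadrupling distance cuts the predicted flow by 4**1.5 = 8x
```

    Estimating such a model in log-linear form on interstate flows yields the distance and earnings elasticities the abstract discusses.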

  4. Incidents analysis

    International Nuclear Information System (INIS)

    Francois, P.

    1996-01-01

    We undertook a study programme at the end of 1991. To start with, we performed some exploratory studies aimed at learning some preliminary lessons on this type of analysis: assessment of the interest of probabilistic incident analysis; possibility of using PSA scenarios; skills and resources required. At the same time, EPN created a working group whose assignment was to define a new approach for analysis of incidents on NPPs. This working group gave thought to both aspects of Operating Feedback that EPN wished to improve: analysis of significant incidents; analysis of potential consequences. We took part in the work of this group, and for the second aspect, we proposed a method based on an adaptation of the event-tree method in order to establish a link between existing PSA models and actual incidents. Since PSA provides an exhaustive database of accident scenarios applicable to the two most common types of units in France, it is obviously of interest for this sort of analysis. With this method we performed some incident analyses, and at the same time explored some methods employed abroad, particularly ASP (Accident Sequence Precursor, a method used by the NRC). Early in 1994 EDF began a systematic analysis programme. The first, transient phase will set up methods and an organizational structure. 7 figs
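
    The event-tree adaptation mentioned above can be sketched in miniature: an initiating event is followed by success/failure branches for each safety system, and sequence frequencies multiply along each path. The initiating frequency and failure probabilities below are invented, not PSA values.

```python
# Toy event tree: enumerate all success/failure sequences and their frequencies.
from itertools import product

INIT_FREQ = 1e-2                     # initiating-event frequency (per year)
SYSTEMS = {"A": 1e-2, "B": 5e-3}     # system -> failure probability on demand

def sequences():
    out = {}
    for outcome in product([False, True], repeat=len(SYSTEMS)):
        p = INIT_FREQ
        name = []
        for (sname, pf), failed in zip(SYSTEMS.items(), outcome):
            p *= pf if failed else (1 - pf)                # multiply along path
            name.append(sname.lower() if failed else sname)  # lowercase = failed
        out["".join(name)] = p
    return out

seqs = sequences()
# all sequence frequencies sum back to the initiating-event frequency
assert abs(sum(seqs.values()) - INIT_FREQ) < 1e-12
```

    Mapping an actual incident onto such a tree (which branches really occurred, which were challenged) is what links PSA scenarios to operating feedback in precursor-style analyses like ASP.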

  5. Incidents analysis

    Energy Technology Data Exchange (ETDEWEB)

    Francois, P

    1997-12-31

    We undertook a study programme at the end of 1991. To start with, we performed some exploratory studies aimed at learning some preliminary lessons on this type of analysis: assessment of the interest of probabilistic incident analysis; possibility of using PSA scenarios; skills and resources required. At the same time, EPN created a working group whose assignment was to define a new approach for analysis of incidents on NPPs. This working group gave thought to both aspects of Operating Feedback that EPN wished to improve: analysis of significant incidents; analysis of potential consequences. We took part in the work of this group, and for the second aspect, we proposed a method based on an adaptation of the event-tree method in order to establish a link between existing PSA models and actual incidents. Since PSA provides an exhaustive database of accident scenarios applicable to the two most common types of units in France, it is obviously of interest for this sort of analysis. With this method we performed some incident analyses, and at the same time explored some methods employed abroad, particularly ASP (Accident Sequence Precursor, a method used by the NRC). Early in 1994 EDF began a systematic analysis programme. The first, transient phase will set up methods and an organizational structure. 7 figs.

  6. Numerical analysis

    CERN Document Server

    Khabaza, I M

    1960-01-01

    Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and numerical solution of ordinary differential equations. This book is comprised of eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput

  7. Trace analysis

    International Nuclear Information System (INIS)

    Warner, M.

    1987-01-01

    What is the current state of quantitative trace analytical chemistry? What are today's research efforts? And what challenges does the future hold? These are some of the questions addressed at a recent four-day symposium sponsored by the National Bureau of Standards (NBS) entitled Accuracy in Trace Analysis - Accomplishments, Goals, Challenges. The two plenary sessions held on the first day of the symposium reviewed the history of quantitative trace analysis, discussed the present situation from academic and industrial perspectives, and summarized future needs. The remaining three days of the symposium consisted of parallel sessions dealing with the measurement process; quantitation in materials; environmental, clinical, and nutrient analysis; and advances in analytical techniques

  8. Recursive analysis

    CERN Document Server

    Goodstein, R L

    2010-01-01

    Recursive analysis develops natural number computations into a framework appropriate for real numbers. This text is based upon primary recursive arithmetic and presents a unique combination of classical analysis and intuitional analysis. Written by a master in the field, it is suitable for graduate students of mathematics and computer science and can be read without a detailed knowledge of recursive arithmetic.Introductory chapters on recursive convergence and recursive and relative continuity are succeeded by explorations of recursive and relative differentiability, the relative integral, and

  9. We don’t like (to) party. A typology of Independents in Irish political life, 1922–2007

    OpenAIRE

    Weeks, Liam

    2009-01-01

    This article examines the phenomenon of Independents, or non-party candidates, in Irish political life. It has two main aims: the first is to disaggregate Independents from ‘others’ to provide a definitive dataset of their electoral performance, and to enable more reliable and valid analysis about this actor. The second, and primary, aim is to use this disaggregation to construct a typology of Independents. The background of every Independent candidate contesting a general election between 19...

  10. Parents’ uptake of human papillomavirus vaccines for their children: a systematic review and meta-analysis of observational studies

    Science.gov (United States)

    Tepjan, Suchon; Rubincam, Clara; Doukas, Nick; Asey, Farid

    2018-01-01

    reducing out-of-pocket costs. Limitations of this meta-analysis include the lack of intervention studies and high risk of bias in most studies reviewed. Further studies should disaggregate HPV vaccine uptake by sex of child and parent. PMID:29678965

  11. Parents' uptake of human papillomavirus vaccines for their children: a systematic review and meta-analysis of observational studies.

    Science.gov (United States)

    Newman, Peter A; Logie, Carmen H; Lacombe-Duncan, Ashley; Baiden, Philip; Tepjan, Suchon; Rubincam, Clara; Doukas, Nick; Asey, Farid

    2018-04-20

    To examine factors associated with parents' uptake of human papillomavirus (HPV) vaccines for their children. Systematic review and meta-analysis. Cochrane Library, AIDSLINE, CINAHL, EMBASE, PsycINFO, Social Sciences Abstracts, Ovid MEDLINE, Scholars Portal, Social Sciences Citation Index and Dissertation Abstracts International from inception through November 2017. We included studies that sampled parents and assessed uptake of HPV vaccines for their children (≤18 years) and/or sociodemographics, knowledge, attitudes or other factors associated with uptake. Study risk of bias was assessed using the Effective Public Health Practice Project tool. We pooled data using random-effects meta-analysis and conducted moderation analyses to examine variance in uptake by sex of child and parent. Seventy-nine studies on 840 838 parents across 15 countries were included. The pooled proportion of parents' uptake of HPV vaccines for their children was 41.5% (range: 0.7%-92.8%), twofold higher for girls (46.5%) than for boys (20.3%). In the meta-analysis of 62 studies, physician recommendation (r=0.46 (95% CI 0.34 to 0.56)) had the greatest influence on parents' uptake, followed by HPV vaccine safety concerns (r=-0.31 (95% CI -0.41 to -0.16)), routine child preventive check-up, past 12 months (r=0.22 (95% CI 0.11 to 0.33)) and parents' belief in vaccines (r=0.19 (95% CI 0.08 to 0.29)). Health insurance-covered HPV vaccination (r=0.16 (95% CI 0.04 to 0.29)) and lower out-of-pocket cost (r=-0.15 (95% CI -0.22 to -0.07)) had significant effects on uptake. We found significant moderator effects for sex of child. Findings indicate suboptimal levels of HPV vaccine uptake, twofold lower among boys, that may be improved by increasing physician recommendations, addressing parental safety concerns and promoting parents' positive beliefs about vaccines, in addition to expanding insurance coverage and reducing out-of-pocket costs. Limitations of this meta-analysis include the lack of
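
    The pooled proportion reported above comes from a random-effects meta-analysis. As a simplified sketch, inverse-variance pooling in its fixed-effect form (random-effects adds a between-study variance term) can be written directly; the study counts below are invented, not data from the review.

```python
# Fixed-effect inverse-variance pooling of proportions (illustrative only;
# the review used random-effects, which widens weights by between-study variance).
def pooled_proportion(studies):
    """studies: list of (events, n). Returns the inverse-variance weighted mean."""
    num = den = 0.0
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n      # binomial variance of the proportion
        w = 1.0 / var              # inverse-variance weight
        num += w * p
        den += w
    return num / den

print(round(pooled_proportion([(40, 100), (90, 200), (10, 50)]), 3))  # → 0.386
```

    Larger, more precise studies get proportionally more weight, which is why the pooled estimate sits closest to the biggest study's proportion.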

  12. Analysis I

    CERN Document Server

    Tao, Terence

    2016-01-01

    This is part one of a two-volume book on real analysis and is intended for senior undergraduate students of mathematics who have already been exposed to calculus. The emphasis is on rigour and foundations of analysis. Beginning with the construction of the number systems and set theory, the book discusses the basics of analysis (limits, series, continuity, differentiation, Riemann integration), through to power series, several variable calculus and Fourier analysis, and then finally the Lebesgue integral. These are almost entirely set in the concrete setting of the real line and Euclidean spaces, although there is some material on abstract metric and topological spaces. The book also has appendices on mathematical logic and the decimal system. The entire text (omitting some less central topics) can be taught in two quarters of 25–30 lectures each. The course material is deeply intertwined with the exercises, as it is intended that the student actively learn the material (and practice thinking and writing ri...

  13. Analysis II

    CERN Document Server

    Tao, Terence

    2016-01-01

    This is part two of a two-volume book on real analysis and is intended for senior undergraduate students of mathematics who have already been exposed to calculus. The emphasis is on rigour and foundations of analysis. Beginning with the construction of the number systems and set theory, the book discusses the basics of analysis (limits, series, continuity, differentiation, Riemann integration), through to power series, several variable calculus and Fourier analysis, and then finally the Lebesgue integral. These are almost entirely set in the concrete setting of the real line and Euclidean spaces, although there is some material on abstract metric and topological spaces. The book also has appendices on mathematical logic and the decimal system. The entire text (omitting some less central topics) can be taught in two quarters of 25–30 lectures each. The course material is deeply intertwined with the exercises, as it is intended that the student actively learn the material (and practice thinking and writing ri...

  14. Nuclear analysis

    International Nuclear Information System (INIS)

    1988-01-01

    Basic studies in nuclear analytical techniques include the examination of underlying assumptions and the development and extension of techniques involving the use of ion beams for elemental and mass analysis. 1 ref., 1 tab

  15. Biorefinery Analysis

    Energy Technology Data Exchange (ETDEWEB)

    2016-06-01

    Fact sheet summarizing NREL's techno-economic analysis and life-cycle assessment capabilities to connect research with future commercial process integration, a critical step in the scale-up of biomass conversion technologies.

  16. Nonlinear analysis

    CERN Document Server

    Gasinski, Leszek

    2005-01-01

    Hausdorff Measures and Capacity. Lebesgue-Bochner and Sobolev Spaces. Nonlinear Operators and Young Measures. Smooth and Nonsmooth Analysis and Variational Principles. Critical Point Theory. Eigenvalue Problems and Maximum Principles. Fixed Point Theory.

  17. Hydroeconomic analysis

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Riegels, Niels; Pulido-Velazquez, Manuel

    2017-01-01

    Hydroeconomic analysis and modeling provides a consistent and quantitative framework to assess the links between water resources systems and economic activities related to water use, simultaneously modeling water supply and water demand. It supports water managers and decision makers in assessing trade-offs between different water uses, different geographic regions, and various economic sectors and between the present and the future. Hydroeconomic analysis provides consistent economic performance criteria for infrastructure development and institutional reform in water policies and management organizations. This chapter presents an introduction to hydroeconomic analysis and modeling, and reviews the state of the art in the field. We review available economic water-valuation techniques and summarize the main types of decision problems encountered in hydroeconomic analysis. Popular solution strategies...

  18. Conversation Analysis.

    Science.gov (United States)

    Schiffrin, Deborah

    1990-01-01

    Summarizes the current state of research in conversation analysis, referring primarily to six different perspectives that have developed from the philosophy, sociology, anthropology, and linguistics disciplines. These include pragmatics; speech act theory; interactional sociolinguistics; ethnomethodology; ethnography of communication; and…

  19. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  20. The challenge of modelling nitrogen management at the field scale: simulation and sensitivity analysis of N2O fluxes across nine experimental sites using DailyDayCent

    International Nuclear Information System (INIS)

    Fitton, N; Datta, A; Hastings, A; Kuhnert, M; Smith, P; Topp, C F E; Cloy, J M; Rees, R M; Cardenas, L M; Williams, J R; Smith, K; Chadwick, D

    2014-01-01

    The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty as they do not account for spatial variations in emissions. Therefore biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs, and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate if the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial value. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions. (paper)

  1. Radioactivation analysis

    International Nuclear Information System (INIS)

    1959-01-01

    Radioactivation analysis is the technique of analysing the constituents of a very small sample of matter by making the sample artificially radioactive. The first stage is to make the sample radioactive by artificial means, e.g. subjecting it to neutron bombardment. Once the sample has been activated, or made radioactive, the next task is to analyze the radiations given off by the sample. This analysis indicates the nature and quantities of the various elements present in the sample, because the radiation from a particular radioisotope is characteristic of that isotope. In 1959 a symposium on 'Radioactivation Analysis' was organized in Vienna by the IAEA and the Joint Commission on Applied Radioactivity (ICSU). It was pointed out that there are certain factors creating uncertainties, and it was elaborated how to overcome them. Attention was drawn to the fact that radioactivation analysis had proven a powerful tool for tackling fundamental problems in geo- and cosmochemistry, and a review was given of the recent work in this field. Because of its extreme sensitivity, radioactivation analysis had been principally employed for trace detection, and its most extensive use has been in the control of semiconductors and very pure metals. An account of the experience gained in the USA was given, where radioactivation analysis was being used by many investigators in various scientific fields as a practical and useful tool for elemental analyses. Much of this work had been concerned with determining sub-microgramme and microgramme concentrations of many different elements in samples of biological materials, drugs, fertilizers, fine chemicals, foods, fuels, glass, ceramic materials, metals, minerals, paints, petroleum products, resinous materials, soils, toxicants, water and other materials. In addition to these studies, radioactivation analysis had been used by other investigators to determine isotopic ratios of the stable isotopes of some of the elements. 
Another paper dealt with radioactivation

  2. Radioactivation analysis

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-07-15

    Radioactivation analysis is the technique of analysing the constituents of a very small sample of matter by making the sample artificially radioactive. The first stage is to make the sample radioactive by artificial means, e.g. subjecting it to neutron bombardment. Once the sample has been activated, or made radioactive, the next task is to analyze the radiations given off by the sample. This analysis indicates the nature and quantities of the various elements present in the sample, because the radiation from a particular radioisotope is characteristic of that isotope. In 1959 a symposium on 'Radioactivation Analysis' was organized in Vienna by the IAEA and the Joint Commission on Applied Radioactivity (ICSU). It was pointed out that there are certain factors creating uncertainties, and it was elaborated how to overcome them. Attention was drawn to the fact that radioactivation analysis had proven a powerful tool for tackling fundamental problems in geo- and cosmochemistry, and a review was given of the recent work in this field. Because of its extreme sensitivity, radioactivation analysis had been principally employed for trace detection, and its most extensive use has been in the control of semiconductors and very pure metals. An account of the experience gained in the USA was given, where radioactivation analysis was being used by many investigators in various scientific fields as a practical and useful tool for elemental analyses. Much of this work had been concerned with determining sub-microgramme and microgramme concentrations of many different elements in samples of biological materials, drugs, fertilizers, fine chemicals, foods, fuels, glass, ceramic materials, metals, minerals, paints, petroleum products, resinous materials, soils, toxicants, water and other materials. In addition to these studies, radioactivation analysis had been used by other investigators to determine isotopic ratios of the stable isotopes of some of the elements. 
Another paper dealt with radioactivation

  3. Panel Analysis

    DEFF Research Database (Denmark)

    Brænder, Morten; Andersen, Lotte Bøgh

    2014-01-01

    Based on our 2013 article, "Does Deployment to War Affect Soldiers' Public Service Motivation – A Panel Study of Soldiers Before and After their Service in Afghanistan", we present Panel Analysis as a methodological discipline. Panels consist of multiple units of analysis, observed at two or more points in time. In comparison with traditional cross-sectional studies, the advantage of using panel studies is that the time dimension enables us to study effects. Whereas experimental designs may have a clear advantage in regard to causal inference, the strength of panel studies is difficult to match in research settings where it is not possible to distribute units of analysis randomly or where the independent variables cannot be manipulated. The greatest disadvantage in regard to using panel studies is that data may be difficult to obtain. This is most clearly vivid in regard to the use of panel surveys...

  4. Real analysis

    CERN Document Server

    Loeb, Peter A

    2016-01-01

    This textbook is designed for a year-long course in real analysis taken by beginning graduate and advanced undergraduate students in mathematics and other areas such as statistics, engineering, and economics. Written by one of the leading scholars in the field, it elegantly explores the core concepts in real analysis and introduces new, accessible methods for both students and instructors. The first half of the book develops both Lebesgue measure and, with essentially no additional work for the student, general Borel measures for the real line. Notation indicates when a result holds only for Lebesgue measure. Differentiation and absolute continuity are presented using a local maximal function, resulting in an exposition that is both simpler and more general than the traditional approach. The second half deals with general measures and functional analysis, including Hilbert spaces, Fourier series, and the Riesz representation theorem for positive linear functionals on continuous functions with compact support....

  5. Numerical analysis

    CERN Document Server

    Scott, L Ridgway

    2011-01-01

    Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

  6. Numerical analysis

    CERN Document Server

    Rao, G Shanker

    2006-01-01

    About the Book: This book provides an introduction to Numerical Analysis for the students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of Universities of Andhra Pradesh and also the syllabus prescribed in most of the Indian Universities. Salient features: Approximate and Numerical Solutions of Algebraic and Transcendental Equation Interpolation of Functions Numerical Differentiation and Integration and Numerical Solution of Ordinary Differential Equations The last three chapters deal with Curve Fitting, Eigen Values and Eigen Vectors of a Matrix and Regression Analysis. Each chapter is supplemented with a number of worked-out examples as well as number of problems to be solved by the students. This would help in the better understanding of the subject. Contents: Errors Solution of Algebraic and Transcendental Equations Finite Differences Interpolation with Equal Intervals Interpolation with Unequal Int...

  7. Numerical analysis

    CERN Document Server

    Jacques, Ian

    1987-01-01

    This book is primarily intended for undergraduates in mathematics, the physical sciences and engineering. It introduces students to most of the techniques forming the core component of courses in numerical analysis. The text is divided into eight chapters which are largely self-contained. However, with a subject as intricately woven as mathematics, there is inevitably some interdependence between them. The level of difficulty varies and, although emphasis is firmly placed on the methods themselves rather than their analysis, we have not hesitated to include theoretical material when we consider it to be sufficiently interesting. However, it should be possible to omit those parts that do seem daunting while still being able to follow the worked examples and to tackle the exercises accompanying each section. Familiarity with the basic results of analysis and linear algebra is assumed since these are normally taught in first courses on mathematical methods. For reference purposes a list of theorems used in the t...

  8. Real analysis

    CERN Document Server

    DiBenedetto, Emmanuele

    2016-01-01

    The second edition of this classic textbook presents a rigorous and self-contained introduction to real analysis with the goal of providing a solid foundation for future coursework and research in applied mathematics. Written in a clear and concise style, it covers all of the necessary subjects as well as those often absent from standard introductory texts. Each chapter features a “Problems and Complements” section that includes additional material that briefly expands on certain topics within the chapter and numerous exercises for practicing the key concepts. The first eight chapters explore all of the basic topics for training in real analysis, beginning with a review of countable sets before moving on to detailed discussions of measure theory, Lebesgue integration, Banach spaces, functional analysis, and weakly differentiable functions. More topical applications are discussed in the remaining chapters, such as maximal functions, functions of bounded mean oscillation, rearrangements, potential theory, a...

  9. Clustering analysis

    International Nuclear Information System (INIS)

    Romli

    1997-01-01

    Cluster analysis is the name of a group of multivariate techniques whose principal purpose is to distinguish similar entities from the characteristics they possess. To study this analysis, there are several algorithms that can be used. This topic therefore focuses on discussing those algorithms, such as similarity measures and hierarchical clustering, which includes the single linkage, complete linkage and average linkage methods. Also, the non-hierarchical clustering method popularly known as the K-means method will be discussed. Finally, this paper describes the advantages and disadvantages of each method
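    The linkage methods listed in the abstract differ only in how the distance between two clusters is defined: single linkage uses the closest pair of members, complete linkage the farthest, average linkage the mean. A minimal single-linkage sketch on hypothetical one-dimensional data (illustrative, not from the paper):

    ```python
    from itertools import combinations

    def single_linkage(points, k):
        """Agglomerative clustering, single linkage: the distance between two
        clusters is the distance between their closest members; repeatedly
        merge the nearest pair of clusters until only k clusters remain."""
        clusters = [[p] for p in points]
        while len(clusters) > k:
            i, j = min(
                combinations(range(len(clusters)), 2),
                key=lambda ij: min(abs(a - b)
                                   for a in clusters[ij[0]]
                                   for b in clusters[ij[1]]),
            )
            clusters[i] += clusters.pop(j)  # i < j, so index i is unaffected
        return clusters

    print(single_linkage([1.0, 1.2, 5.0, 5.1, 9.0], 2))
    ```

    Swapping the inner `min` for `max` gives complete linkage, and for a mean gives average linkage, which is the sense in which the methods form one family.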

  10. Convex analysis

    CERN Document Server

    Rockafellar, Ralph Tyrell

    2015-01-01

    Available for the first time in paperback, R. Tyrrell Rockafellar's classic study presents readers with a coherent branch of nonlinear mathematical analysis that is especially suited to the study of optimization problems. Rockafellar's theory differs from classical analysis in that differentiability assumptions are replaced by convexity assumptions. The topics treated in this volume include: systems of inequalities, the minimum or maximum of a convex function over a convex set, Lagrange multipliers, minimax theorems and duality, as well as basic results about the structure of convex sets and

  11. Numerical analysis

    CERN Document Server

    Brezinski, C

    2012-01-01

    Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, and the solution of ordinary and partial differential equations and integral equations. The papers are reprinted from the seven-volume 'Numerical Analysis 2000' project of the Journal of Computational and Applied Mathematics.

  12. Reentry analysis

    International Nuclear Information System (INIS)

    Biehl, F.A.

    1984-05-01

    This paper presents the criteria, previous nuclear experience in space, analysis techniques, and possible breakup enhancement devices applicable to an acceptable SP-100 reentry from space. Reactor operation in nuclear-safe orbit will minimize the radiological risk; the remaining safeguards criteria need to be defined. A simple analytical point mass reentry technique and a more comprehensive analysis method that considers vehicle dynamics and orbit insertion malfunctions are presented. Vehicle trajectory, attitude, and possible breakup enhancement devices will be integrated in the simulation as required to ensure an adequate representation of the reentry process

  13. Outlier analysis

    CERN Document Server

    Aggarwal, Charu C

    2013-01-01

    With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians and

  14. Cluster analysis

    CERN Document Server

    Everitt, Brian S; Leese, Morven; Stahl, Daniel

    2011-01-01

    Cluster analysis comprises a range of methods for classifying multivariate data into subgroups. By organizing multivariate data into such subgroups, clustering can help reveal the characteristics of any structure or patterns present. These techniques have proven useful in a wide range of areas such as medicine, psychology, market research and bioinformatics.This fifth edition of the highly successful Cluster Analysis includes coverage of the latest developments in the field and a new chapter dealing with finite mixture models for structured data.Real life examples are used throughout to demons

  15. Elementary analysis

    CERN Document Server

    Snell, K S; Langford, W J; Maxwell, E A

    1966-01-01

    Elementary Analysis, Volume 2 introduces several of the ideas of modern mathematics in a casual manner and provides the practical experience in algebraic and analytic operations that lays a sound foundation of basic skills. This book focuses on the nature of number, algebraic and logical structure, groups, rings, fields, vector spaces, matrices, sequences, limits, functions and inverse functions, complex numbers, and probability. The logical structure of analysis given through the treatment of differentiation and integration, with applications to the trigonometric and logarithmic functions, is

  16. Risk analysis

    International Nuclear Information System (INIS)

    Baron, J.H.; Nunez McLeod, J.; Rivera, S.S.

    1997-01-01

    This book contains a selection of research works performed in the CEDIAC Institute (Cuyo National University) in the area of Risk Analysis, with specific orientations to the subjects of uncertainty and sensitivity studies, software reliability, severe accident modeling, etc. This volume presents important material for all those researchers who want an insight into the risk analysis field as a tool for solving several problems frequently found in engineering and the applied sciences, as well as for academic teachers who want to keep up to date with the new developments and improvements continuously arising in this field

  17. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    Science.gov (United States)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA. Although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of liquefaction hazard, namely by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and the liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the Idriss and Boulanger (2012) SPT-based and the Boulanger and Idriss (2014) CPT-based procedures in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, in Hungary. 
Its epicenter was located
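    The performance-based combination the abstract describes reduces to a weighted sum: the conditional probability of liquefaction is evaluated in each (PGA, magnitude) bin of the hazard disaggregation and weighted by that bin's incremental annual rate. A toy sketch with made-up bin values and a stand-in logistic curve (not the Cetin et al. model):

    ```python
    import math

    # Hypothetical joint PGA-magnitude disaggregation: (PGA in g, magnitude,
    # incremental annual rate of that loading level). Real values come from PSHA.
    bins = [
        (0.10, 5.5, 1.0e-2),
        (0.20, 6.0, 2.0e-3),
        (0.30, 6.5, 4.0e-4),
    ]

    def p_liq(pga, mag):
        """Stand-in conditional probability of liquefaction given the loading;
        an illustrative logistic curve, NOT the Cetin et al. (2004) model."""
        return 1.0 / (1.0 + math.exp(-(12.0 * pga + 0.8 * (mag - 6.0) - 3.0)))

    # Annual rate of liquefaction: conditional probability in each bin,
    # weighted by the bin's incremental rate, summed over all bins.
    rate = sum(p_liq(a, m) * d_lam for a, m, d_lam in bins)
    print(f"annual rate = {rate:.2e}, return period = {1.0 / rate:.0f} yr")
    ```

    Repeating the sum with disaggregation matrices computed at a range of depths yields the liquefaction hazard curves in function of depth that the procedure reports.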

  18. A gender-specific analysis of suicide methods in deliberate self-harm

    Directory of Open Access Journals (Sweden)

    Kiran K Kumar

    2017-01-01

    Background: Deliberate self-harm (DSH) is a major public health concern. Gender differences in suicide methods are a controversial realm with various regional and cultural variations. This study compared and assessed the methods used by male and female DSH attempters, and investigated the possible role of gender and other clinical variables in the selection of suicide method. Materials and Methods: Two hundred subjects fulfilling the inclusion and exclusion criteria were recruited in the study. The sociodemographic details were recorded in the semi-structured pro forma. Detailed assessment of psychiatric morbidity and DSH was done by clinical interview and validated by the Mini International Neuropsychiatric Interview-Plus 5.0 and the Beck Suicide Intent Scale. Data were analyzed using SAS version 9.2 and SPSS version 17.0. The sample was disaggregated by gender to compare the known correlates of suicide risk on the two most common methods of suicide – poison consumption and drug overdose – using multivariate analyses. Results: The analysis revealed that the majority of the attempters were in the age group of 11–40 years (91%). Females (63%) outnumbered males (37%); poisoning was the most common method (50.5%), followed by drug overdose (35%). There were no statistical differences between the two genders with respect to other sociodemographic variables. Males from urban/semi-urban backgrounds (odds ratio [OR] = 4.059) and females living alone (OR = 5.723) had high odds of attempting suicide by poison consumption. Females from urban/semi-urban backgrounds (P = 0.0514) and male subjects from nuclear families (OR = 4.482) had increased odds of attempting suicide by drug overdose. There were no statistical differences when the two genders were compared for other variables such as intentionality, lethality, impulsivity, and number of attempts. Conclusions: It appears that gender differences among DSH attempters appear less pronounced in

  19. Wind power projects in the CDM: Methodologies and tools for baselines, carbon financing and sustainability analysis

    International Nuclear Information System (INIS)

    Ringius, L.; Grohnheit, P.E.; Nielsen, L.H.; Olivier, A.L.; Painuly, J.; Villavicencio, A.

    2002-12-01

    The report is intended to be a guidance document for project developers, investors, lenders, and CDM host countries involved in wind power projects in the CDM. The report explores in particular those issues that are important in CDM project assessment and development, that is, baseline development, carbon financing, and environmental sustainability. It does not deal in detail with those issues that are routinely covered in a standard wind power project assessment. The report tests, compares, and recommends methodologies for and approaches to baseline development. To present the application and implications of the various methodologies and approaches in a concrete context, Africa's largest wind farm, namely the 60 MW wind farm located in Zafarana, Egypt, is examined as a hypothetical CDM wind power project. The report shows that for the present case example there is a difference of about 25% between the lowest (0.5496 tCO2/MWh) and the highest emission rate (0.6868 tCO2/MWh) estimated in accordance with the three standardized approaches to baseline development according to the Marrakesh Accord. This difference in emission factors comes about partly as a result of including hydroelectric power in the baseline scenario. Hydroelectric resources constitute around 21% of the generation capacity in Egypt, and, if hydropower is excluded, the difference between the lowest and the highest baseline is reduced to 18%. Furthermore, since the two variations of the 'historical' baseline option examined result in the highest and the lowest baselines, disregarding this baseline option altogether reduces the difference between the lowest and the highest to 16%. The ES3 model, which the Systems Analysis Department at Risoe National Laboratory has developed, makes it possible for this report to explore the project-specific approach to baseline development in some detail. Based on quite disaggregated data on the Egyptian electricity system, including the wind power production
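    As a quick check on the figures quoted above, the roughly 25% spread is the gap between the highest and lowest baseline emission rates expressed relative to the lowest:

    ```python
    # Lowest and highest baseline emission factors from the report (tCO2/MWh)
    low, high = 0.5496, 0.6868
    spread = (high - low) / low   # difference relative to the lowest baseline
    print(f"spread = {spread:.1%}")   # about 25%
    ```

    The same relative-to-lowest calculation applied to the narrower baseline sets reproduces the report's 18% and 16% figures.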

  20. Watershed analysis

    Science.gov (United States)

    Alan Gallegos

    2002-01-01

    Watershed analyses and assessments for the Kings River Sustainable Forest Ecosystems Project were done on about 33,000 acres of the 45,500-acre Big Creek watershed and 32,000 acres of the 85,100-acre Dinkey Creek watershed. Following procedures developed for analysis of cumulative watershed effects (CWE) in the Pacific Northwest Region of the USDA Forest Service, the...

  1. Regression Analysis

    CERN Document Server

    Freund, Rudolf J; Sa, Ping

    2006-01-01

    The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analysis to be performed in a given situation, and to give some appreciation of what constitutes good experimental design

  2. Relativistic analysis

    International Nuclear Information System (INIS)

    Unterberger, A.

    1987-01-01

    We study the Klein-Gordon symbolic calculus of operators acting on solutions of the free Klein-Gordon equation. It contracts to the Weyl calculus as c→∞. Mathematically, it may also be considered as a pseudodifferential analysis on the unit ball of R^n

  3. Consequence analysis

    International Nuclear Information System (INIS)

    Woodard, K.

    1985-01-01

    The objectives of this paper are to: Provide a realistic assessment of consequences; Account for plant and site-specific characteristics; Adjust accident release characteristics to account for results of plant-containment analysis; Produce conditional risk curves for each of five health effects; and Estimate uncertainties

  4. Domain analysis

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    The domain-analytic approach to knowledge organization (KO) (and to the broader field of library and information science, LIS) is outlined. The article reviews the discussions and proposals on the definition of domains, and provides an example of a domain-analytic study in the field of art studies. Varieties of domain analysis as well as criticism and controversies are presented and discussed.

  5. IWS analysis

    International Nuclear Information System (INIS)

    Rhoades, W.A.; Dray, B.J.

    1970-01-01

    The effect of Gadolinium-155 on the prompt kinetic behavior of a zirconium hydride reactor has been deduced, using experimental data from the SNAPTRAN machine. The poison material makes the temperature coefficient more positive, and the Type IV sleeves were deduced to give a positive coefficient above 1100 °F. A thorough discussion of the data and analysis is included. (U.S.)

  6. Analysis report

    International Nuclear Information System (INIS)

    Saadi, Radouan; Marah, Hamid

    2014-01-01

    This report presents results related to Tritium analysis carried out at the CNESTEN DASTE in Rabat (Morocco), on behalf of Senegal, within the framework of the RAF7011 project. It describes analytical method and instrumentation including general uncertainty estimation: Electrolytic enrichment and liquid scintillation counting; The results are expressed in Tritium Unit (TU); Low Limit Detection: 0.02 TU

  7. Survival Analysis

    CERN Document Server

    Miller, Rupert G

    2011-01-01

    A concise summary of the statistical methods used in the analysis of survival data with censoring. Emphasizes recently developed nonparametric techniques. Outlines methods in detail and illustrates them with actual data. Discusses the theory behind each method. Includes numerous worked problems and numerical exercises.

  8. Genetic analysis

    NARCIS (Netherlands)

    Koornneef, M.; Alonso-Blanco, C.; Stam, P.

    2006-01-01

    The Mendelian analysis of genetic variation, available as induced mutants or as natural variation, requires a number of steps that are described in this chapter. These include the determination of the number of genes involved in the observed trait's variation, the determination of dominance

  9. Poetic Analysis

    DEFF Research Database (Denmark)

    Nielsen, Kirsten

    2010-01-01

    The first part of this article presents the characteristics of Hebrew poetry: features associated with rhythm and phonology, grammatical features, structural elements like parallelism, and imagery and intertextuality. The second part consists of an analysis of Psalm 121. It is argued that assonan...

  10. Models of Economic Analysis

    OpenAIRE

    Adrian Ioana; Tiberiu Socaciu

    2013-01-01

    The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, psychological analysis. We also present the main objects of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...

  11. Trend analysis

    International Nuclear Information System (INIS)

    Smith, M.; Jones, D.R.

    1991-01-01

    The goal of exploration is to find reserves that will earn an adequate rate of return on the capital invested. Neither exploration nor economics is an exact science. Explorers must therefore work in those trends (plays) that have the highest probability of achieving this goal. Trend analysis is a technique for organizing the available data so that these strategic exploration decisions can be made objectively, in conformance with the explorer's goals and risk attitudes. Trend analysis differs from resource estimation in its purpose: it seeks to determine the probability of economic success for an exploration program, not the ultimate results of the total industry effort. Thus the recent past is assumed to be the best estimate of the exploration probabilities for the near future. This information is combined with economic forecasts. The computer software tools necessary for trend analysis are (1) an information data base - requirements and sources; (2) a data conditioning program - assignment to trends, correction of errors, and conversion into usable form; (3) a statistical processing program - calculation of the probability of success and the discovery size probability distribution; and (4) analytical processing - Monte Carlo simulation to develop the probability distribution of the economic return/investment ratio for a trend. Limited capital (short-run) effects are analyzed using the Gambler's Ruin concept in the Monte Carlo simulation and by a short-cut method. Multiple trend analysis is concerned with comparing and ranking trends, allocating funds among acceptable trends, and characterizing program risk by using risk profiles. In summary, trend analysis is a reality check for long-range exploration planning
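
    The Monte Carlo step described in item (4) can be sketched in a few lines. This is an illustrative toy, not the article's model: the well count, success probability, well cost, and exponential discovery-value distribution are all hypothetical placeholders.

```python
import random

def trend_ratio_distribution(n_trials=10000, wells=20, p_success=0.15,
                             well_cost=1.0, mean_discovery_value=12.0, seed=1):
    """Monte Carlo sketch: each trial drills `wells` wildcats in a trend;
    a success pays an exponentially distributed discovery value.
    Returns the simulated distribution of the return/investment ratio."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_trials):
        invested = wells * well_cost
        returned = sum(rng.expovariate(1.0 / mean_discovery_value)
                       for _ in range(wells) if rng.random() < p_success)
        ratios.append(returned / invested)
    return ratios

ratios = trend_ratio_distribution()
# Fraction of simulated programs that at least recover their investment.
p_economic = sum(r > 1.0 for r in ratios) / len(ratios)
```

    From the same simulated distribution one could also read off a Gambler's-Ruin-style quantity, e.g. the fraction of trials whose cumulative cash position ever goes below a capital floor, by tracking the running balance well by well.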

  12. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development area within the four participating Nordic countries. It is a regional meeting of the International Association for Pattern Recognition (IAPR). We would like to thank all authors who submitted works to this year's SCIA, the invited speakers, and our Program Committee. In total 67 papers were submitted. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  13. Harmonic analysis

    CERN Document Server

    Helson, Henry

    2010-01-01

    This second edition has been enlarged and considerably rewritten. Among the new topics are infinite product spaces with applications to probability, disintegration of measures on product spaces, positive definite functions on the line, and additional information about Weyl's theorems on equidistribution. Topics that have continued from the first edition include Minkowski's theorem, measures with bounded powers, idempotent measures, spectral sets of bounded functions and a theorem of Szego, and the Wiener Tauberian theorem. Readers of the book should have studied the Lebesgue integral, the elementary theory of analytic and harmonic functions, and the basic theory of Banach spaces. The treatment is classical and as simple as possible. This is an instructional book, not a treatise. Mathematics students interested in analysis will find here what they need to know about Fourier analysis. Physicists and others can use the book as a reference for more advanced topics.

  14. Geometric analysis

    CERN Document Server

    Bray, Hubert L; Mazzeo, Rafe; Sesum, Natasa

    2015-01-01

    This volume includes expanded versions of the lectures delivered in the Graduate Minicourse portion of the 2013 Park City Mathematics Institute session on Geometric Analysis. The papers give excellent high-level introductions, suitable for graduate students wishing to enter the field and experienced researchers alike, to a range of the most important areas of geometric analysis. These include: the general issue of geometric evolution, with more detailed lectures on Ricci flow and Kähler-Ricci flow, new progress on the analytic aspects of the Willmore equation as well as an introduction to the recent proof of the Willmore conjecture and new directions in min-max theory for geometric variational problems, the current state of the art regarding minimal surfaces in R^3, the role of critical metrics in Riemannian geometry, and the modern perspective on the study of eigenfunctions and eigenvalues for Laplace-Beltrami operators.

  15. Complex analysis

    CERN Document Server

    Freitag, Eberhard

    2005-01-01

    The guiding principle of this presentation of ``Classical Complex Analysis'' is to proceed as quickly as possible to the central results while using a small number of notions and concepts from other fields. Thus the prerequisites for understanding this book are minimal; only elementary facts of calculus and algebra are required. The first four chapters cover the essential core of complex analysis: - differentiation in C (including elementary facts about conformal mappings) - integration in C (including complex line integrals, Cauchy's Integral Theorem, and the Integral Formulas) - sequences and series of analytic functions, (isolated) singularities, Laurent series, calculus of residues - construction of analytic functions: the gamma function, Weierstrass' Factorization Theorem, Mittag-Leffler Partial Fraction Decomposition, and -as a particular highlight- the Riemann Mapping Theorem, which characterizes the simply connected domains in C. Further topics included are: - the theory of elliptic functions based on...

  16. Spectrographic analysis

    International Nuclear Information System (INIS)

    Quinn, C.A.

    1983-01-01

    The article deals with spectrographic analysis and the analytical methods based on it. The theory of spectrographic analysis is discussed, as well as the layout of a spectrometer system. The infrared absorption spectrum of a compound is probably its most unique property. The absorption of infrared radiation depends on increasing the energy of vibration and rotation associated with a covalent bond. The infrared region is intrinsically low in energy; thus the design of infrared spectrometers is always directed toward maximising energy throughput. The article also considers atomic absorption - flame atomizers, non-flame atomizers and the source of radiation. Under the section on emission spectroscopy, non-electrical energy sources, electrical energy sources and electrical flames are discussed. Digital computers form a part of the development of spectrographic instrumentation

  17. Wavelet analysis

    CERN Document Server

    Cheng, Lizhi; Luo, Yong; Chen, Bo

    2014-01-01

    This book is divided into two parts: fundamental wavelet transform theory and methods, and some important applications of the wavelet transform. In the first part, as preliminary knowledge, Fourier analysis, inner product spaces, the characteristics of Haar functions, and concepts of multi-resolution analysis are introduced, followed by a description of how to construct wavelet functions, both multi-band and multi-wavelets, and finally the design of integer wavelets via lifting schemes and its application to integer transform algorithms. In the second part, many applications are discussed in the field of image and signal processing, introducing other wavelet variants such as complex wavelets, ridgelets, and curvelets. Important application examples include image compression, image denoising/restoration, image enhancement, digital watermarking, numerical solution of partial differential equations, and solving ill-conditioned Toeplitz systems. The book is intended for senior undergraduate stude...
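
    The Haar functions mentioned in the record give the simplest wavelet transform; a one-level orthonormal Haar step (averages and differences, scaled by 1/sqrt(2)) can be sketched as follows. This is a generic illustration, not code from the book.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform.
    Returns (approximation, detail) coefficients; len(signal) must be even."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_step_inverse(approx, detail):
    """Exact inverse of haar_step (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out
```

    A multi-level (multi-resolution) transform simply re-applies `haar_step` to the approximation coefficients; smooth regions of the signal produce near-zero detail coefficients, which is what compression and denoising exploit.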

  18. Electrochemical analysis

    International Nuclear Information System (INIS)

    Hwang, Hun

    2007-02-01

    This book explains potentiometry, voltammetry, amperometry and the basic concepts of conductometry in eleven chapters. It gives specific descriptions of the electrochemical cell and its modes, basic concepts of electrochemical analysis of oxidation-reduction reactions, standard electrode potential, formal potential, faradaic current and faradaic processes, mass transfer and overvoltage, potentiometry and indirect potentiometry, polarography with TAST, normal pulse and differential pulse modes, voltammetry, conductometry and conductometric titration.

  19. Survival analysis

    International Nuclear Information System (INIS)

    Badwe, R.A.

    1999-01-01

    The primary endpoint in the majority of the studies has been either disease recurrence or death. This kind of analysis requires a special method, since not all patients in the study experience the endpoint. The standard method for estimating such a survival distribution is the Kaplan-Meier method. The survival function is defined as the proportion of individuals who survive beyond a certain time. Multivariate comparison of survival has been carried out with Cox's proportional hazards model
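
    The Kaplan-Meier product-limit estimator named in the record can be sketched in pure Python. The follow-up data below are hypothetical, not from the cited study.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.

    times:  follow-up times (any comparable numbers)
    events: 1 if the endpoint (death/recurrence) occurred, 0 if censored
    Returns a list of (time, S(time)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0                      # events and censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:                          # the curve only drops at event times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + c               # subjects leave the risk set
    return curve

# Hypothetical follow-up times (months); event flag 0 marks censoring.
km = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

    Censored subjects still count in the risk set up to their censoring time, which is exactly what distinguishes this estimator from a naive proportion.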

  20. Elastodynamic Analysis

    DEFF Research Database (Denmark)

    Andersen, Lars

    This book contains the lecture notes for the 9th semester course on elastodynamics. The first chapter gives an overview of the basic theory of stress waves propagating in viscoelastic media. In particular, the effect of surfaces and interfaces in a viscoelastic material is studied, and different ... Thus, in Chapter 3, an alternative semi-analytic method is derived, which may be applied for the analysis of layered half-spaces subject to moving or stationary loads.

  1. Cluster analysis

    OpenAIRE

    Mucha, Hans-Joachim; Sofyan, Hizir

    2000-01-01

    As an explorative technique, cluster analysis provides a description or a reduction in the dimension of the data. It classifies a set of observations into two or more mutually exclusive unknown groups based on combinations of many variables. Its aim is to construct groups in such a way that the profiles of objects in the same group are relatively homogeneous whereas the profiles of objects in different groups are relatively heterogeneous. Clustering is distinct from classification techniques, ...
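
    The grouping idea the record describes can be illustrated with a minimal k-means sketch: assign each observation to its nearest centroid, recompute centroids, repeat until stable. This is a naive toy (Lloyd's algorithm on made-up 2-D points), not a production clustering routine.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: points are tuples of equal dimension; returns the
    final centroids and the clusters (lists of points) they attract."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)            # random distinct seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # nearest-centroid assignment
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        new = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]  # mean of each cluster
        if new == centroids:                      # assignments stabilised
            break
        centroids = new
    return centroids, clusters

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centroids, clusters = kmeans(pts, 2)
```

    On these two well-separated blobs the algorithm converges to one centroid per blob regardless of which two points are drawn as seeds.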

  2. Water analysis

    International Nuclear Information System (INIS)

    Garbarino, J.R.; Steinheimer, T.R.; Taylor, H.E.

    1985-01-01

    This is the twenty-first biennial review of the inorganic and organic analytical chemistry of water. The format of this review differs somewhat from previous reviews in this series - the most recent of which appeared in Analytical Chemistry in April 1983. Changes in format have occurred in the presentation of material concerning review articles and the inorganic analysis of water sections. Organic analysis of water sections are organized as in previous reviews. Review articles have been compiled and tabulated in an Appendix with respect to subject, title, author(s), citation, and number of references cited. The inorganic water analysis sections are now grouped by constituent using the periodic chart; for example, alkali, alkaline earth, 1st series transition metals, etc. Within these groupings the references are roughly grouped by instrumental technique; for example, spectrophotometry, atomic absorption spectrometry, etc. Multiconstituent methods for determining analytes that cannot be grouped in this manner are compiled into a separate section sorted by instrumental technique. References used in preparing this review were compiled from nearly 60 major journals published during the period from October 1982 through September 1984. Conference proceedings, most foreign journals, most trade journals, and most government publications are excluded. References cited were obtained using the American Chemical Society's Chemical Abstracts for sections on inorganic analytical chemistry, organic analytical chemistry, water, and sewage waste. Cross-references of these sections were also included. 860 references

  3. Economic analysis

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-06-01

    The Energy Policy and Conservation Act (EPCA) mandated that minimum energy efficiency standards be established for classes of refrigerators and refrigerator-freezers, freezers, clothes dryers, water heaters, room air conditioners, home heating equipment, kitchen ranges and ovens, central air conditioners, and furnaces. EPCA requires that standards be designed to achieve the maximum improvement in energy efficiency that is technologically feasible and economically justified. Following the introductory chapter, Chapter Two describes the methodology used in the economic analysis and its relationship to legislative criteria for consumer product efficiency assessment; details how the CPES Value Model systematically compared and evaluated the economic impacts of regulation on the consumer, manufacturer and Nation. Chapter Three briefly displays the results of the analysis and lists the proposed performance standards by product class. Chapter Four describes the reasons for developing a baseline forecast, characterizes the baseline scenario from which regulatory impacts were calculated and summarizes the primary models, data sources and assumptions used in the baseline formulations. Chapter Five summarizes the methodology used to calculate regulatory impacts; describes the impacts of energy performance standards relative to the baseline discussed in Chapter Four. Also discussed are regional standards and other program alternatives to performance standards. Chapter Six describes the procedure for balancing consumer, manufacturer, and national impacts to select standard levels. Details of models and data bases used in the analysis are included in Appendices A through K.

  4. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  5. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  6. Understanding analysis

    CERN Document Server

    Abbott, Stephen

    2015-01-01

    This lively introductory text exposes the student to the rewards of a rigorous study of functions of a real variable. In each chapter, informal discussions of questions that give analysis its inherent fascination are followed by precise, but not overly formal, developments of the techniques needed to make sense of them. By focusing on the unifying themes of approximation and the resolution of paradoxes that arise in the transition from the finite to the infinite, the text turns what could be a daunting cascade of definitions and theorems into a coherent and engaging progression of ideas. Acutely aware of the need for rigor, the student is much better prepared to understand what constitutes a proper mathematical proof and how to write one. Fifteen years of classroom experience with the first edition of Understanding Analysis have solidified and refined the central narrative of the second edition. Roughly 150 new exercises join a selection of the best exercises from the first edition, and three more project-sty...

  7. Consensus analysis:

    DEFF Research Database (Denmark)

    Moore, R; Brødsgaard, I; Miller, ML

    1997-01-01

    A quantitative method for validating qualitative interview results and checking sample parameters is described and illustrated using common pain descriptions among a sample of Anglo-American and mandarin Chinese patients and dentists matched by age and gender. Assumptions were that subjects were ... Measures of consistency in use of descriptors within groups, validity of description, accuracy of individuals compared with others in their group, and minimum required sample size were calculated using Cronbach's alpha, factor analysis, and Bayesian probability. Ethnic and professional differences within and across ... Use of covalidating questionnaires that reflect results of qualitative interviews is recommended in order to estimate sample parameters such as intersubject agreement, individual subject accuracy, and minimum required sample sizes.
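
    Of the statistics the record names, Cronbach's alpha has a short closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch with made-up item scores (not the study's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, all the same length
    (one score per subject).  Returns Cronbach's alpha."""
    k = len(items)
    totals = [sum(subject_scores) for subject_scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items)
                          / variance(totals))

# Two perfectly correlated items over three subjects -> alpha = 8/9.
alpha = cronbach_alpha([[1, 2, 3], [2, 4, 6]])
```

    Values near 1 indicate that subjects use the descriptors consistently; low values would flag unreliable items in the covalidating questionnaire.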

  8. Failure Analysis

    International Nuclear Information System (INIS)

    Iorio, A.F.; Crespi, J.C.

    1987-01-01

    After ten years of operation at the Atucha I Nuclear Power Station, a gear belonging to a pressurized heavy water reactor refuelling machine failed. The gear box was used to operate the inlet-outlet heavy-water valve of the machine. Visual examination of the gear device showed an absence of lubricant and that several gear teeth were broken at the root. Motion was transmitted through a speed-reducing device with controlled adjustable times in order to produce a proper fit of the valve closure. The aim of this paper is to discuss the results of the gear failure analysis in order to recommend the proper solution to prevent further failures. (Author)

  9. Nuclear analysis

    International Nuclear Information System (INIS)

    1988-01-01

    In a search for correlations between the elemental composition of trace elements in human stones and the stone types with relation to their growth pattern, a combined PIXE and x-ray diffraction spectrometry approach was implemented. The combination of scanning PIXE and XRD has proved to be an advance in the methodology of stone analysis and may point to the growth pattern in the body. The exact role of trace elements in the formation and growth of urinary stones is not fully understood. Efforts are thus continuing firstly to solve the analytical problems concerned and secondly to design suitable experiments that would provide information about the occurrence and distribution of trace elements in urine. 1 fig., 1 ref

  10. Spatial Intensity Duration Frequency Relationships Using Hierarchical Bayesian Analysis for Urban Areas

    Science.gov (United States)

    Rupa, Chandra; Mujumdar, Pradeep

    2016-04-01

    algorithm within a Gibbs sampler) is used to obtain the samples of parameters from the posterior distribution of parameters. The spatial maps of return levels for specified return periods, along with the associated uncertainties, are obtained for the summer, monsoon and annual maxima rainfall. Considering various covariates, the best fit model is selected using Deviance Information Criteria. It is observed that the geographical covariates outweigh the climatological covariates for the monsoon maxima rainfall (latitude and longitude). The best covariates for summer maxima and annual maxima rainfall are mean summer precipitation and mean monsoon precipitation respectively, including elevation for both the cases. The scale invariance theory, which states that statistical properties of a process observed at various scales are governed by the same relationship, is used to disaggregate the daily rainfall to hourly scales. The spatial maps of the scale are obtained for the study area. The spatial maps of IDF relationships thus generated are useful in storm water designs, adequacy analysis and identifying the vulnerable flooding areas.
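
    The scale-invariance step at the end of the record reduces, for rainfall depth under simple scaling, to a one-line power law: P(d) = P(24 h) * (d/24)^eta. The exponent 0.7 below is a hypothetical placeholder; in practice it would be fitted to observed multi-duration annual maxima for the study area.

```python
def scale_rainfall_depth(daily_depth_mm, duration_h, eta=0.7):
    """Simple-scaling sketch: rainfall depth over a duration d (hours)
    related to the daily (24 h) depth by P(d) = P(24) * (d / 24)**eta.
    eta is a hypothetical scale exponent, fitted in real applications."""
    return daily_depth_mm * (duration_h / 24.0) ** eta

# Disaggregate a hypothetical 100 mm daily maximum to a 1 h depth.
hourly = scale_rainfall_depth(100.0, 1.0)
```

    With the same relation applied to each return level, a daily IDF map can be converted to sub-daily durations without separately fitting hourly extremes.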

  11. HIV treatment and care services for adolescents: a situational analysis of 218 facilities in 23 sub-Saharan African countries.

    Science.gov (United States)

    Mark, Daniella; Armstrong, Alice; Andrade, Catarina; Penazzato, Martina; Hatane, Luann; Taing, Lina; Runciman, Toby; Ferguson, Jane

    2017-05-16

    In 2013, an estimated 2.1 million adolescents (age 10-19 years) were living with HIV globally. The extent to which health facilities provide appropriate treatment and care was unknown. To support understanding of service availability in 2014, Paediatric-Adolescent Treatment Africa (PATA), a non-governmental organisation (NGO) supporting a network of health facilities across sub-Saharan Africa, undertook a facility-level situational analysis of adolescent HIV treatment and care services in 23 countries. Two hundred and eighteen facilities, responsible for an estimated 80,072 HIV-infected adolescents in care, were surveyed. Sixty per cent of the sample were from PATA's network, with the remaining gathered via local NGO partners and snowball sampling. Data were analysed using descriptive statistics and coding to describe central tendencies and identify themes. Respondents represented three subregions: West and Central Africa ( n  = 59; 27%), East Africa ( n  = 77, 35%) and southern Africa ( n  = 82, 38%). Half (50%) of the facilities were in urban areas, 17% peri-urban and 33% rural settings. Insufficient data disaggregation and outcomes monitoring were critical issues. A quarter of facilities did not have a working definition of adolescence. Facilities reported non-adherence as their key challenge in adolescent service provision, but had insufficient protocols for determining and managing poor adherence and loss to follow-up. Adherence counselling focused on implications of non-adherence rather than its drivers. Facilities recommended peer support as an effective adherence and retention intervention, yet not all offered these services. Almost two-thirds reported attending to adolescents with adults and/or children, and half had no transitioning protocols. Of those with transitioning protocols, 21% moved pregnant adolescents into adult services earlier than their peers. There was limited sexual and reproductive health integration, with 63% of facilities

  12. INAA and chemical analysis of water and sediments sampled in 1996 from the Romanian sector of the Danube river

    International Nuclear Information System (INIS)

    Pantelica, A.; Georgescu, I.I.; Oprica, M.H.I.; Borcia, C.

    1999-01-01

    Water and sediment samples collected during spring 1996 from 20 sampling sites of the Romanian sector of the Danube river and the Black Sea coast were analyzed by instrumental neutron activation analysis (INAA) and by chemical methods to determine major, minor and trace element contents. The concentrations of 43 elements (Ag, Al, As, Au, Ba, Br, Ca, Ce, Cl, Co, Cr, Cu, Cs, Eu, Fe, Ga, Hf, Hg, K, La, Lu, Mg, Mn, Mo, Na, Nd, Ni, Rb, Sb, Sc, Se, Sm, Sr, Ta, Tb, Ti, Th, U, V, W, Yb, Zr, Zn) were investigated by INAA at the WWR-S reactor in Bucharest. Chemical methods were used to determine the content of P2O5 and SiO2 in sediments. For INAA, the water residues and sediment samples were irradiated at the WWR-S reactor in Bucharest at a neutron fluence rate of 2.3·10^12 cm^-2 s^-1. As standards, the reference materials IAEA-Soil 7 and WTM (sludge from city water treatment, from the Institute of Radioecology and Applied Nuclear Techniques, Kosice, Slovakia) as well as chemical compounds of Al, Ca, Mg and V were used. The mono-standard method was applied in the case of Ti and Sr (with Cl and Zn as standards, respectively). By chemical methods, the amount of SiO2 was determined in sediment samples after treatment with concentrated HCl and disaggregation of the residuum by fusion (melting with a mixture of Na2CO3 and K2CO3). Phosphorus was determined by spectrophotometry with ammonium molybdate and by reduction with ascorbic acid. It can be seen that, both for water and sediment samples, the highest contents of Al, Co, Cs, Fe, Rb, and Sb were found at the sites located upstream of the Portile de Fier dam: at Turnu Severin (for water) and Orsova (for sediments). Ag, Au, Ni, Yb and Zr were determined only in some of the water samples, at the following concentration levels: ng L^-1 (Au, Lu), tens of ng L^-1 (Ag, Tb, Yb), hundreds of ng L^-1 (Ag), μg L^-1 (Ni, Zr), tens of μg L^-1 (Ni, Ti). From a comparison with results of our previous studies of the Danube bottom sediments, no significant

  13. Ferrous analysis

    International Nuclear Information System (INIS)

    Straub, W.A.

    1987-01-01

    This review is the seventh in the series compiled by using the Dialog on-line CA Search facilities at the Information Resource Center of the USS Technical Center, covering the period from Oct. 1984 to Nov. 1, 1986. The quest for better surface properties, through the application of various electrochemical and other coating techniques, seems to have increased and reinforces the notion that only through the value added to a steel by proper finishing steps can a major supplier hope to compete profitably. The detection, determination, and control of microalloying constituents has also been generating a lot of interest, as evidenced by the number of publications devoted to this subject in the last few years. Several recent review articles amplify on the recent trends in the application of modern analytical technology to steelmaking. Another review has been devoted to the determination of trace elements and the simultaneous determination of elements in metals by mass spectrometry, atomic absorption spectrometry, and multielement emission spectrometry. Problems associated with the analysis of electroplating wastewaters have been reviewed in a recent publication describing the use of various spectrophotometric methods for this purpose. The collection and treatment of analytical data in the modern steelmaking environment have been extensively reviewed, with emphasis on the interaction of the providers and users of the analytical data, its quality, and the cost of its collection. Raw material treatment and beneficiation was the dominant theme

  14. Matrix analysis

    CERN Document Server

    Bhatia, Rajendra

    1997-01-01

    A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...

  15. Uncertainty analysis

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
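
    The Latin Hypercube Sampling method the report evaluates can be sketched in pure Python: each axis of the unit hypercube is cut into n equal strata, each stratum is hit exactly once, and strata are paired across axes by independent random permutations. This is a generic illustration, not the report's implementation.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n points in [0, 1)^dims, stratified so that the projection onto
    every axis places exactly one point in each of the n equal bins."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                      # random bin order for this axis
        columns.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in columns) for i in range(n)]

pts = latin_hypercube(10, 3)
```

    As the report notes, such samples estimate output probability density functions directly, but extracting per-parameter uncertainty contributions needs a follow-up step such as rank transformation and stepwise regression.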

  16. A prospective analysis of Brazilian biofuel economy: Land use, infrastructure development and fuel pricing policies

    Science.gov (United States)

    Nunez Amortegui, Hector Mauricio

    As the two largest ethanol producers in the world, Brazil and the U.S. enact transportation fuel policies that affect not only their domestic markets but also the global food and biofuel economy. Hence, the complex biofuel policy climate in these countries leaves the public with unclear conclusions about the prospects for supply and trade of agricultural commodities and biofuels. In this dissertation I develop a price endogenous mathematical programming model to simulate and analyze the impacts of biofuel policies in Brazil and the U.S. on land use in these countries, agricultural commodity and transportation fuel markets, trade, and global environment. The model maximizes the social surplus represented by the sum of producers' and consumers' surpluses, including selected agricultural commodity markets and fuel markets in the U.S., Brazil, Argentina, China, and the Rest-of-the-World (ROW), subject to resource limitations, material balances, technical constraints, and policy restrictions. Consumers' surplus is derived from consumption of agricultural commodities and transportation fuels by vehicles that generate vehicle-kilometers-traveled (VKT). While in the other regional components aggregate supply and demand functions are assumed for the commodities included in the analysis, the agricultural supply component is regionally disaggregated for Brazil and the U.S., and the transportation fuel sector is regionally disaggregated for Brazil. The U.S. agricultural supply component includes production of fourteen major food/feed crops, including soybeans, corn and wheat, and cellulosic biofuel feedstocks. The Brazil component includes eight major annual crops, including soybeans, corn, wheat, and rice, and sugarcane as the energy crop. A particular emphasis is given to beef-cattle production in Brazil and the potential for livestock semi-intensification in Brazilian pasture grazing systems as a prospective pathway for releasing new croplands. In the fuel sector of both

  17. Experimental modal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the basic theory and principles for experimental modal analysis. The sections within the report are: Output-only modal analysis software, general digital analysis, basics of structural dynamics and modal analysis and system identification. (au)

  18. Theoretical numerical analysis a functional analysis framework

    CERN Document Server

    Atkinson, Kendall

    2005-01-01

    This textbook prepares graduate students for research in numerical analysis/computational mathematics by giving to them a mathematical framework embedded in functional analysis and focused on numerical analysis. This helps the student to move rapidly into a research program. The text covers basic results of functional analysis, approximation theory, Fourier analysis and wavelets, iteration methods for nonlinear equations, finite difference methods, Sobolev spaces and weak formulations of boundary value problems, finite element methods, elliptic variational inequalities and their numerical solu

  19. Analysis of Heat Transfer

    International Nuclear Information System (INIS)

    2003-08-01

    This book deals with the analysis of heat transfer, including nonlinear analysis examples, radiation heat transfer, analysis of heat transfer in ANSYS, verification of analysis results, analysis of transient heat transfer with automatic time stepping and open control, analysis of heat transfer using the arrangement of ANSYS, thermal contact resistance, coupled-field analysis such as thermal-structural interaction, cases of coupled-field analysis, and phase change.

  20. Information security risk analysis

    CERN Document Server

    Peltier, Thomas R

    2001-01-01

    Contents: Effective Risk Analysis; Qualitative Risk Analysis; Value Analysis; Other Qualitative Methods; Facilitated Risk Analysis Process (FRAP); Other Uses of Qualitative Risk Analysis; Case Study; Appendix A: Questionnaire; Appendix B: Facilitated Risk Analysis Process Forms; Appendix C: Business Impact Analysis Forms; Appendix D: Sample of Report; Appendix E: Threat Definitions; Appendix F: Other Risk Analysis Opinions; Index.

  1. The Impact of Sea Level Rise on Developing Countries: A Comparative Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, S. [World Bank, Washington, DC (United States)]

    2008-07-01

    Sea-level rise (SLR) due to climate change is a serious global threat: The scientific evidence is now overwhelming. In this paper, Geographic Information System software has been used to overlay the best available, spatially-disaggregated global data on land, population, agriculture, urban extent, wetlands, and GDP, to assess the consequences of continued SLR for 84 coastal developing countries. Estimates suggest that even a one-meter rise in sea level in coastal countries of the developing world would submerge 194,000 square kilometers of land area, and turn at least 56 million people into environmental refugees. At the country level results are extremely skewed.
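
    The overlay logic described here (intersecting an elevation surface with a population layer and counting what falls below a given sea level) can be sketched with hypothetical gridded data; the grid values below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical 1 km grid cells: elevation (m) and population per cell.
elevation = np.array([[0.5, 0.8, 2.0],
                      [1.2, 0.3, 4.0],
                      [5.0, 0.9, 1.5]])
population = np.array([[1000, 500, 200],
                       [300, 2000, 50],
                       [10, 800, 400]])

slr = 1.0                             # metres of sea-level rise
inundated = elevation <= slr          # boolean mask of submerged cells
area_km2 = inundated.sum() * 1.0      # each cell is 1 km^2 in this toy grid
displaced = population[inundated].sum()

print(area_km2, displaced)
```

    A GIS does the same masking and zonal summation over country polygons, which is how per-country land and refugee estimates are obtained.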

  3. Economic shocks, governance and violence: A subnational level analysis of Africa

    OpenAIRE

    Kibriya, Shahriar; Xu, Zhicheng P.; Zhang, Yu

    2015-01-01

    By using a geo-coded disaggregated dataset in sub-Saharan Africa over the period 1997–2013, we exploit year-to-year rainfall variation as an instrumental variable to estimate the causal effect of economic shocks on civil conflict conditional on governance quality. We confirm earlier findings that adverse rainfall shocks increase the likelihood of conflict in sub-Saharan Africa. We also investigate the role of governance quality on conflict in sub-Saharan Africa. The results underscore that im...

  4. System analysis and design

    International Nuclear Information System (INIS)

    Son, Seung Hui

    2004-02-01

    This book deals with information technology and business processes, information system architecture, methods of system development, system development planning such as problem analysis and feasibility analysis, cases of system development, understanding the analysis of users' demands, analysis of users' demands using traditional methods, users' demands analysis using integrated information system architecture, system design using integrated information system architecture, system implementation, and system maintenance.

  5. Impact-disrupted gunshot residue: A sub-micron analysis using a novel collection protocol

    Directory of Open Access Journals (Sweden)

    V. Spathis

    2017-06-01

    structured as well as disaggregated splats. The relative compositions of the characteristic elements that are present in GSR also change in the different splat morphologies sampled, which may contribute to the particles' physical structures. Two distinct populations of splats were also observed: circular and elongated, which suggest the residues hit the substrate at different angles. The difference in the splat impact angle can be ascribed to the position of the residues within the firearm discharge plume; particles get caught up in the vortex that is created by the discharge gases behind the projectile as it leaves the barrel, thereby affecting their directionality and flight time. This reasoning could also justify the existence of both spheroidal and splat particles at certain distances. The novel sampling and analytical techniques used in this experiment have provided previously unknown information in relation to GSR structure and formation which could have greater implications to its current analysis amongst laboratories and law enforcement agencies worldwide.

  6. Impact of a carbon tax on the Chilean economy: A computable general equilibrium analysis

    International Nuclear Information System (INIS)

    García Benavente, José Miguel

    2016-01-01

    In 2009, the government of Chile announced its official commitment to reduce national greenhouse gas emissions by 20% below a business-as-usual projection by 2020. Because a national carbon tax is an effective way to reduce emissions, the goal of this article is to quantify the value of a carbon tax that will allow the achievement of the emission reduction target and to assess its impact on the economy. The approach used in this work is to compare the economy before and after the implementation of the carbon tax by creating a static computable general equilibrium model of the Chilean economy. The model developed here disaggregates the economy into 23 industries and 23 commodities, and it uses four consumer agents (households, government, investment, and the rest of the world). By setting specific production and consumption functions, the model can assess the variation in commodity prices, industrial production, and agent consumption, allowing a cross-sectoral analysis of the impact of the carbon tax. The benchmark of the economy, upon which the analysis is based, came from a social accounting matrix specially constructed for this model, based on the year 2010. The carbon tax was modeled as an ad valorem tax under two scenarios: a tax on emissions from fossil fuels burned only by producers, and a tax on emissions from fossil fuels burned by producers and households. The abatement cost curve shows that it is more cost-effective to tax only producers, rather than to tax both producers and households: relative to the emission level observed in 2010, a 20% emission reduction causes a GDP loss of 2% in the first scenario and 2.3% in the second. Under the two scenarios, the tax value that could lead to that emission reduction is around 26 US dollars per ton of CO2-equivalent. The most affected productive sectors are oil refining, transport, and electricity, which contract by between 7% and 9%. Analyzing the electricity
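
    Reading the tax required for a given reduction off an abatement cost curve amounts to simple interpolation between simulated points. The curve points below are hypothetical assumptions for illustration, not output of the article's CGE model.

```python
import numpy as np

# Hypothetical abatement-cost-curve points: carbon tax (US$/tCO2e)
# versus the resulting emission reduction relative to 2010 (fraction).
tax = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
reduction = np.array([0.00, 0.09, 0.16, 0.22, 0.27])

# Tax needed for a 20% reduction, by linear interpolation on the curve.
target = 0.20
tax_needed = np.interp(target, reduction, tax)
print(round(tax_needed, 1))
```

    In the article, each point of such a curve comes from solving the full general equilibrium model at a given tax rate, rather than from an assumed schedule.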

  7. Analysis of Project Finance | Energy Analysis | NREL

    Science.gov (United States)

    NREL analysis helps potential renewable energy developers and investors gain insights into the complex world of project finance. Renewable energy project finance is complex, requiring knowledge of federal tax credits, state-level incentives, renewable

  8. Safety analysis fundamentals

    International Nuclear Information System (INIS)

    Wright, A.C.D.

    2002-01-01

    This paper discusses the fundamentals of safety analysis in reactor design. Safety analysis is performed to show that the consequences of postulated accidents are acceptable. It is also used to set the design of special safety systems, and it includes design-assist analysis to support conceptual design. Safety analysis is necessary for licensing a reactor, maintaining an operating license, and supporting changes in plant operations.

  9. An example of multidimensional analysis: Discriminant analysis

    International Nuclear Information System (INIS)

    Lutz, P.

    1990-01-01

    Among the approaches to multidimensional data analysis, lectures on discriminant analysis covering both theoretical and practical aspects are presented. The discrimination problem, the analysis steps, and the discrimination categories are stressed. Examples are given of descriptive historical analysis, discrimination for decision making, and the demonstration and separation of the top quark. For linear discriminant analysis the following subjects are discussed: Huyghens' theorem, projection, the discriminant variable, geometrical interpretation, the case g = 2, the classification method, and the separation of top events. Criteria for obtaining relevant results are included. (In French)
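
    For the two-group case (g = 2) mentioned above, Fisher's linear discriminant can be sketched as follows. The two "event classes" and their separation are simulated assumptions for illustration, not data from the lectures.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two simulated event classes (e.g. signal vs background), two variables each.
a = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
b = rng.normal([2.0, 1.0], 1.0, size=(200, 2))

# Fisher's discriminant direction for g = 2: w proportional to
# Sw^{-1} (mean_b - mean_a), with Sw the within-class scatter.
sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)
w = np.linalg.solve(sw, b.mean(axis=0) - a.mean(axis=0))

# Classify by projecting onto w and cutting at the midpoint of the class means.
threshold = (a @ w).mean() / 2 + (b @ w).mean() / 2
pred_b = np.concatenate([a @ w, b @ w]) > threshold
truth = np.r_[np.zeros(200, bool), np.ones(200, bool)]
accuracy = (pred_b == truth).mean()
print(accuracy > 0.8)
```

    Projecting onto a single discriminant variable and thresholding is exactly the geometric interpretation the lectures describe for separating two categories of events.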

  10. Well-to-wheels energy use and greenhouse gas emissions analysis of plug-in hybrid electric vehicles.

    Energy Technology Data Exchange (ETDEWEB)

    Elgowainy, A.; Burnham, A.; Wang, M.; Molburg, J.; Rousseau, A.; Energy Systems

    2009-03-31

    technologies and grid generation mixes was wider than the spread of petroleum energy use, mainly due to the diverse fuel production technologies and feedstock sources for the fuels considered in this analysis. The PHEVs offered reductions in petroleum energy use as compared with regular hybrid electric vehicles (HEVs). More petroleum energy savings were realized as the AER increased, except when the marginal grid mix was dominated by oil-fired power generation. Similarly, more GHG emissions reductions were realized at higher AERs, except when the marginal grid generation mix was dominated by oil or coal. Electricity from renewable sources realized the largest reductions in petroleum energy use and GHG emissions for all PHEVs as the AER increased. The PHEVs that employ biomass-based fuels (e.g., biomass-E85 and -hydrogen) may not realize GHG emissions benefits over regular HEVs if the marginal generation mix is dominated by fossil sources. Uncertainties are associated with the adopted PHEV fuel consumption and marginal generation mix simulation results, which impact the WTW results and require further research. More disaggregate marginal generation data within control areas (where the actual dispatching occurs) and an improved dispatch modeling are needed to accurately assess the impact of PHEV electrification. The market penetration of the PHEVs, their total electric load, and their role as complements rather than replacements of regular HEVs are also uncertain. The effects of the number of daily charges, the time of charging, and the charging capacity have not been evaluated in this study. A more robust analysis of the VMT share of the CD operation is also needed.
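
    One simple way to combine per-km well-to-wheels intensities of electric and gasoline operation as the AER varies is a utility-factor weighting. All numbers below are illustrative assumptions, not values from this study.

```python
# Toy utility-factor blend of PHEV well-to-wheels (WTW) GHG emissions.
grid_intensity = 600.0   # g CO2e per kWh of marginal generation (assumed)
kwh_per_km = 0.2         # electric-drive consumption (assumed)
gasoline_wtw = 220.0     # g CO2e per km in charge-sustaining mode (assumed)
utility_factor = 0.5     # share of km driven electrically at a given AER

wtw = (utility_factor * grid_intensity * kwh_per_km
       + (1 - utility_factor) * gasoline_wtw)
print(wtw)  # blended g CO2e per km
```

    Raising the AER raises the utility factor, so the blended result moves toward the electric term; whether that lowers emissions depends on the marginal grid intensity, which is the study's central point.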

  11. Semen Analysis Test

    Science.gov (United States)

    Also known as: sperm analysis, sperm count, seminal fluid analysis. Formal name: semen analysis. Measurements include viscosity (consistency or thickness of the semen), sperm count (total number of sperm), and sperm concentration (density, the number of sperm per volume of semen).

  12. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  13. Wind power projects in the CDM: Methodologies and tools for baselines, carbon financing and sustainability analysis [CDM = Clean Development Mechanism]

    Energy Technology Data Exchange (ETDEWEB)

    Ringius, L.; Grohnheit, P.E.; Nielsen, L.H.; Olivier, A.L.; Painuly, J.; Villavicencio, A.

    2002-12-01

    The report is intended to be a guidance document for project developers, investors, lenders, and CDM host countries involved in wind power projects in the CDM. The report explores in particular those issues that are important in CDM project assessment and development - that is, baseline development, carbon financing, and environmental sustainability. It does not deal in detail with those issues that are routinely covered in a standard wind power project assessment. The report tests, compares, and recommends methodologies for and approaches to baseline development. To present the application and implications of the various methodologies and approaches in a concrete context, Africa's largest wind farm, namely the 60 MW wind farm located in Zafarana, Egypt, is examined as a hypothetical CDM wind power project. The report shows that for the present case example there is a difference of about 25% between the lowest (0.5496 tCO2/MWh) and the highest emission rate (0.6868 tCO2/MWh) estimated in accordance with the three standardized approaches to baseline development under the Marrakesh Accords. This difference in emission factors comes about partly as a result of including hydroelectric power in the baseline scenario. Hydroelectric resources constitute around 21% of the generation capacity in Egypt, and, if hydropower is excluded, the difference between the lowest and the highest baseline is reduced to 18%. Furthermore, since the two variations of the 'historical' baseline option examined result in the highest and the lowest baselines, disregarding this baseline option altogether reduces the difference between the lowest and the highest to 16%. The ES3 model, which the Systems Analysis Department at Risoe National Laboratory has developed, makes it possible for this report to explore the project-specific approach to baseline development in some detail. Based on quite disaggregated data on the Egyptian electricity system, including the wind
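
    The quoted spread of about 25% follows directly from the two emission rates reported for the Zafarana case:

```python
# Spread between the lowest and highest standardized baseline emission
# factors reported for the Zafarana case study (tCO2/MWh).
low, high = 0.5496, 0.6868
spread = (high - low) / low
print(f"{spread:.0%}")  # prints "25%"
```

    The 18% and 16% figures arise the same way, after recomputing the factors with hydropower excluded or the 'historical' baseline option dropped.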

  14. Uncertainty in particle number modal analysis during transient operation of compressed natural gas, diesel, and trap-equipped diesel transit buses.

    Science.gov (United States)

    Holmén, Britt A; Qu, Yingge

    2004-04-15

    The relationships between transient vehicle operation and ultrafine particle emissions are not well-known, especially for low-emission alternative bus technologies such as compressed natural gas (CNG) and diesel buses equipped with particulate filters/traps (TRAP). In this study, real-time particle number concentrations measured on a nominal 5 s average basis using an electrical low pressure impactor (ELPI) for these two bus technologies are compared to that of a baseline catalyst-equipped diesel bus operated on ultralow sulfur fuel (BASE) using dynamometer testing. Particle emissions were consistently 2 orders of magnitude lower for the CNG and TRAP compared to BASE on all driving cycles. Time-resolved total particle numbers were examined in terms of sampling factors identified as affecting the ability of ELPI to quantify the particulate matter number emissions for low-emitting vehicles such as CNG and TRAP as a function of vehicle driving mode. Key factors were instrument sensitivity and dilution ratio, alignment of particle and vehicle operating data, sampling train background particles, and cycle-to-cycle variability due to vehicle, engine, after-treatment, or driver behavior. In-cycle variability on the central business district (CBD) cycle was highest for the TRAP configuration, but this could not be attributed to the ELPI sensitivity issues observed for TRAP-IDLE measurements. Elevated TRAP emissions coincided with low exhaust temperature, suggesting on-road real-world particulate filter performance can be evaluated by monitoring exhaust temperature. Nonunique particle emission maps indicate that measures other than vehicle speed and acceleration are necessary to model disaggregated real-time particle emissions. Further testing on a wide variety of test cycles is needed to evaluate the relative importance of the time history of vehicle operation and the hysteresis of the sampling train/dilution tunnel on ultrafine particle emissions. Future studies should

  15. Costs and expected gain in lifetime health from intensive care versus general ward care of 30,712 individual patients: a distribution-weighted cost-effectiveness analysis.

    Science.gov (United States)

    Lindemark, Frode; Haaland, Øystein A; Kvåle, Reidar; Flaatten, Hans; Norheim, Ole F; Johansson, Kjell A

    2017-08-21

    Clinicians, hospital managers, policy makers, and researchers are concerned about high costs, increased demand, and variation in priorities in the intensive care unit (ICU). The objectives of this modelling study are to describe the extra costs and expected health gains associated with admission to the ICU versus the general ward for 30,712 patients and the variation in cost-effectiveness estimates among subgroups and individuals, and to perform a distribution-weighted economic evaluation incorporating extra weighting to patients with high severity of disease. We used a decision-analytic model that estimates the incremental cost per quality-adjusted life year (QALY) gained (ICER) from ICU admission compared with general ward care using Norwegian registry data from 2008 to 2010. We assigned increasing weights to health gains for those with higher severity of disease, defined as less expected lifetime health if not admitted. The study has inherent uncertainty of findings because a randomized clinical trial comparing patients admitted or rejected to the ICU has never been performed. Uncertainty is explored in probabilistic sensitivity analysis. The mean cost-effectiveness of ICU admission versus ward care was €11,600/QALY, with 1.6 QALYs gained and an incremental cost of €18,700 per patient. The probability (p) of cost-effectiveness was 95% at a threshold of €22,000/QALY. The mean ICER for medical admissions was €10,700/QALY (p = 97%), €12,300/QALY (p = 93%) for admissions after acute surgery, and €14,700/QALY (p = 84%) after planned surgery. For individualized ICERs, there was a 50% probability that ICU admission was cost-effective for 85% of the patients at a threshold of €64,000/QALY, leaving 15% of the admissions not cost-effective. In the distributional evaluation, 8% of all patients had distribution-weighted ICERs (higher weights to gains for more severe conditions) above €64,000/QALY. High-severity admissions gained the most, and were more
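
    The headline ICER is simply the incremental cost divided by the QALYs gained, using the per-patient aggregates quoted above; the paper's figure of about EUR 11,600/QALY reflects rounding of these inputs.

```python
# Mean ICER from the reported per-patient aggregates.
incremental_cost = 18_700   # extra cost of ICU admission vs ward care (EUR)
qalys_gained = 1.6          # expected QALYs gained per patient
icer = incremental_cost / qalys_gained
print(icer)  # EUR per QALY, close to the reported mean
```

    The individualized and distribution-weighted results in the study apply the same ratio per patient or subgroup, with extra weight given to health gains for the most severe conditions.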

  16. "Does anger regulation mediate the discrimination-mental health link among Mexican-origin adolescents? A longitudinal mediation analysis using multilevel modeling": Correction to Park et al. (2016).

    Science.gov (United States)

    2017-02-01

    Reports an error in "Does Anger Regulation Mediate the Discrimination-Mental Health Link Among Mexican-Origin Adolescents? A Longitudinal Mediation Analysis Using Multilevel Modeling" by Irene J. K. Park, Lijuan Wang, David R. Williams and Margarita Alegría ( Developmental Psychology , Advanced Online Publication, Nov 28, 2016, np). In the article, there were several typographical errors in the Recruitment and Procedures section. The percentage of mothers who responded to survey items should have been 99.3%. Additionally, the youths surveyed at T2 and T3 should have been n=246 . Accordingly, the percentage of youths surveyed in T2 and T3 should have been 91.4% and the percentage of mothers surveyed at T2 and T3 should have been 90.7%. Finally, the youths missing at T2 should have been n= 23, and therefore the attrition rate for youth participants should have been 8.6. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-57671-001.) Although prior research has consistently documented the association between racial/ethnic discrimination and poor mental health outcomes, the mechanisms that underlie this link are still unclear. The present 3-wave longitudinal study tested the mediating role of anger regulation in the discrimination-mental health link among 269 Mexican-origin adolescents ( M age = 14.1 years, SD = 1.6; 57% girls), 12 to 17 years old. Three competing anger regulation variables were tested as potential mediators: outward anger expression, anger suppression, and anger control. Longitudinal mediation analyses were conducted using multilevel modeling that disaggregated within-person effects from between-person effects. Results indicated that outward anger expression was a significant mediator; anger suppression and anger control were not significant mediators. Within a given individual, greater racial/ethnic discrimination was associated with more frequent outward anger expression. 
In turn
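
    The multilevel modelling described above rests on disaggregating within-person effects from between-person effects, typically via person-mean centering. The long-format data below are a small hypothetical example, not the study's data.

```python
import numpy as np

# Hypothetical long-format data: 3 waves for each of 4 adolescents.
person = np.repeat([0, 1, 2, 3], 3)
discrimination = np.array([1.0, 2.0, 3.0,   # person 0
                           4.0, 4.0, 4.0,   # person 1
                           0.0, 1.0, 2.0,   # person 2
                           5.0, 3.0, 4.0])  # person 3

# Between-person part: each individual's own mean across waves.
person_mean = np.array([discrimination[person == p].mean() for p in person])
# Within-person part: wave-specific deviation from one's own mean.
within = discrimination - person_mean

print(person_mean[:3], within[:3])
```

    Entering `person_mean` and `within` as separate predictors lets a multilevel model estimate, as here, whether an individual's above-their-own-average discrimination predicts their outward anger expression.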

  17. An analysis of socio-demographic patterns in child malnutrition trends using Ghana demographic and health survey data in the period 1993-2008.

    Science.gov (United States)

    Amugsi, Dickson A; Mittelmark, Maurice B; Lartey, Anna

    2013-10-16

    A small but growing body of research indicates that progress in reducing child malnutrition is substantially uneven from place to place, even down to the district level within countries. Yet child malnutrition prevalence and trend estimates available for public health planning are mostly available only at the level of global regions and/or at country level. To support carefully targeted intervention to reduce child malnutrition, public health planners and policy-makers require access to more refined prevalence data and trend analyses than are presently available. Responding to this need in Ghana, this report presents trends in child malnutrition prevalence in socio-demographic groups within the country's geographic regions. The study uses the Ghana Demographic and Health Surveys (GDHS) data. The GDHS are nationally representative cross-sectional surveys that have been carried out in many developing countries. These surveys constitute one of the richest sources of information currently available to examine time trends in child malnutrition. Data from four surveys were used for the analysis: 1993, 1998, 2003 and 2008. The results show statistically significant declining trends at the national level for stunting (F (1, 7204) = 7.89, p ≤ .005), underweight (F (1, 7441) = 44.87, p ≤ .001) and wasting (F (1, 7130) = 6.19, p ≤ .013). However, analyses of the sex-specific trends revealed that the declining trends in stunting and wasting were significant among males but not among females. In contrast to the national trend, there were significantly increasing trends in stunting for males (F (1, 2004) = 3.92, p ≤ .048) and females (F (1, 2004) = 4.34, p ≤ .037) whose mothers had higher than primary education, while the trends decreased significantly for males and females whose mothers had no education. At the national level in Ghana, child malnutrition is significantly declining. However, the aggregate national trend masks important deviations in certain socio
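
    A linear trend test of the kind reported (an F statistic for the slope over survey rounds) can be sketched with ordinary least squares; the prevalence figures below are hypothetical, not the GDHS estimates.

```python
import numpy as np

# Hypothetical stunting prevalence (%) by GDHS survey round.
years = np.array([1993.0, 1998.0, 2003.0, 2008.0])
prev = np.array([33.5, 30.6, 29.9, 28.0])

# OLS slope and its F statistic (for one predictor, F = t^2).
x = years - years.mean()
slope = (x * prev).sum() / (x * x).sum()
resid = prev - prev.mean() - slope * x
s2 = (resid ** 2).sum() / (len(prev) - 2)      # residual variance
f_stat = slope ** 2 * (x * x).sum() / s2
print(slope < 0, f_stat)
```

    The study runs the equivalent test on individual-level survey data (hence the large denominator degrees of freedom in the reported F statistics), separately for each socio-demographic subgroup.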

  18. An analysis of socio-demographic patterns in child malnutrition trends using Ghana demographic and health survey data in the period 1993–2008

    Science.gov (United States)

    2013-01-01

    Background A small but growing body of research indicates that progress in reducing child malnutrition is substantially uneven from place to place, even down to the district level within countries. Yet child malnutrition prevalence and trend estimates available for public health planning are mostly available only at the level of global regions and/or at country level. To support carefully targeted intervention to reduce child malnutrition, public health planners and policy-makers require access to more refined prevalence data and trend analyses than are presently available. Responding to this need in Ghana, this report presents trends in child malnutrition prevalence in socio-demographic groups within the country’s geographic regions. Methods The study uses the Ghana Demographic and Health Surveys (GDHS) data. The GDHS are nationally representative cross-sectional surveys that have been carried out in many developing countries. These surveys constitute one of the richest sources of information currently available to examine time trends in child malnutrition. Data from four surveys were used for the analysis: 1993, 1998, 2003 and 2008. Results The results show statistically significant declining trends at the national level for stunting (F (1, 7204) = 7.89, p ≤ .005), underweight (F (1, 7441) = 44.87, p ≤ .001) and wasting (F (1, 7130) = 6.19, p ≤ .013). However, analyses of the sex-specific trends revealed that the declining trends in stunting and wasting were significant among males but not among females. In contrast to the national trend, there were significantly increasing trends in stunting for males (F (1, 2004) = 3.92, p ≤ .048) and females (F (1, 2004) = 4.34, p ≤ .037) whose mothers had higher than primary education, while the trends decreased significantly for males and females whose mothers had no education. Conclusions At the national level in Ghana, child malnutrition is significantly declining

  19. Part 1. Short-term effects of air pollution on mortality: results from a time-series analysis in Chennai, India.

    Science.gov (United States)

    Balakrishnan, Kalpana; Ganguli, Bhaswati; Ghosh, Santu; Sankar, S; Thanasekaraan, Vijaylakshmi; Rayudu, V N; Caussy, Harry

    2011-03-01

    This report describes the results of a time-series analysis of the effect of short-term exposure to particulate matter with an aerodynamic diameter of 10 µm or less (PM10) on mortality in Chennai, India, carried out as part of the Public Health and Air Pollution in Asia (PAPA) initiative. The study involved integration and analysis of retrospective data for the years 2002 through 2004. The data were obtained from relevant government agencies in charge of routine data collection. Data on meteorologic confounders (including temperature, relative humidity, and dew point) were available on all days of the study period. Data on mortality were also available on all days, but information on cause of death (including accidental deaths) could not be reliably ascertained. Hence, only all-cause daily mortality was used as the major outcome for the time-series analyses. Data on PM10, nitrogen dioxide (NO2), and sulfur dioxide (SO2) were limited to a much smaller number of days, but spanned the full study period. Data limitations resulting from low sensitivity of gaseous pollutant measurements led to using only PM10 in the main analysis. Of the eight operational ambient air quality monitor (AQM) stations in the city, seven met the selection criteria set forth in the common protocol developed for the three PAPA studies in India. In addition, all raw data used in the analysis were subjected to additional quality assurance (QA) and quality control (QC) criteria to ensure the validity of the measurements. Two salient features of the PM10 data set in Chennai were a high percentage of missing readings and a low correlation among daily data recorded by the AQMs. The latter resulted partly because each AQM had a small footprint (the approximate area over which the air pollutant measurements recorded at the AQM are considered valid), and partly because of differences in source profiles among the 10 zones within the city. The zones were defined by the Chennai Corporation based on population density. Alternative exposure series were developed to control for

  20. [Gender analysis of papers published in Revista de Neurología (2002-2006)].

    Science.gov (United States)

    Aleixandre-Benavent, R; Alonso-Arroyo, A; González-Alcaide, G; González de Dios, J; Sempere, A P; Valderrama-Zurián, J C

    There is ongoing interest in society in promoting gender equality and in the integration of women into research activities. The purpose of this work is to identify, from a gender perspective, the bibliometric characteristics of articles published in the journal Revista de Neurología during the 2002-2006 period. Records were obtained from the Science Citation Index database of the ISI-Thomson platform. The following indicators were determined, disaggregated by gender: year of publication, type of document, number and order of signatures, number of collaborators, signatures/papers index, and institutional and geographical distribution. 4527 authors were identified: 2614 (57.74%) men and 1913 (42.26%) women. Women's participation was highest in original articles (39.01% of signatures). 44.5% of authors with one published article were women, whereas only 16.67% of 'big producers' (those with more than 9 articles) were women. Greater productivity was detected among men, and a greater rate of collaboration among women. Studies of scientific activity disaggregated by gender provide essential information for establishing the basis of a scientific policy to promote women as researchers. The evolution in the number of female authors in Revista de Neurología does not show growth sufficient to reach parity in the coming years. A low presence of women in positions of high productivity was detected; its causes should be identified.

  1. Energy use for economic growth: A trivariate analysis from Tunisian agriculture sector

    International Nuclear Information System (INIS)

    Sebri, Maamar; Abid, Mehdi

    2012-01-01

    Given the importance of energy in agrarian economies, investigating the causal relationship between energy consumption in the agriculture sector and economic growth plays a fundamental role in implementing suitable policies. This paper examines the causal relationship between energy consumption and agricultural value added, controlling for trade openness, in Tunisia from 1980 to 2007. The relationship is investigated for aggregate energy consumption as well as its disaggregated components, including oil and electricity. Using Granger's technique, varied results are obtained regarding the direction of causality among the variables. Nevertheless, the most common finding suggests that trade openness and both aggregated and disaggregated energy consumption Granger-cause agricultural value added. Therefore, the energy-led growth and trade-led growth hypotheses are supported in the Tunisian agriculture sector. An important policy implication of this study is that energy can be considered a limiting factor for agricultural value added, and therefore shocks to energy supply would have a negative impact on agricultural performance. Furthermore, trade liberalization seems to be a stimulus to Tunisian agricultural development. - Highlights: ► We study the energy consumption-economic growth nexus of the Tunisian agriculture sector. ► We use Johansen's cointegration approach and Granger causality. ► Energy consumption can be considered a limiting factor for agricultural performance. ► Electrical energy will represent an important input to agricultural production growth.
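
    A minimal one-lag Granger test (restricted versus unrestricted lag regression, compared with an F statistic) can be sketched on simulated series. The data-generating process below is an assumption for illustration, not the Tunisian data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated series in which energy use Granger-causes value added.
n = 200
energy = rng.normal(size=n).cumsum()
value_added = np.empty(n)
value_added[0] = 0.0
for t in range(1, n):
    value_added[t] = 0.5 * value_added[t - 1] + 0.4 * energy[t - 1] + rng.normal()

def granger_f(y, x):
    """One-lag Granger test: does x_{t-1} improve a regression of y_t on y_{t-1}?"""
    Y, y1, x1 = y[1:], y[:-1], x[:-1]
    Xr = np.column_stack([np.ones_like(y1), y1])        # restricted model
    Xu = np.column_stack([np.ones_like(y1), y1, x1])    # unrestricted model
    def ssr(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        return ((Y - X @ beta) ** 2).sum()
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    # F statistic for the single extra regressor.
    return (ssr_r - ssr_u) / (ssr_u / (len(Y) - Xu.shape[1]))

f_forward = granger_f(value_added, energy)
print(f_forward > 10)
```

    Applied studies such as this one use longer lag structures chosen by information criteria, and pair the causality tests with Johansen cointegration analysis of the levels.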

  2. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormality detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computational...

  3. NCEP SST Analysis

    Science.gov (United States)

    EMC Marine Modeling and Analysis Branch (MMAB) high-resolution SST (RTG_SST_HR) analysis. For a regional map, click the desired area in the global SST analysis and anomaly maps.

  4. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi...

  5. K Basin Hazard Analysis

    International Nuclear Information System (INIS)

    PECH, S.H.

    2000-01-01

    This report describes the methodology used in conducting the K Basins Hazard Analysis, which provides the foundation for the K Basins Final Safety Analysis Report. This hazard analysis was performed in accordance with guidance provided by DOE-STD-3009-94, Preparation Guide for U. S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, and implements the requirements of DOE Order 5480.23, Nuclear Safety Analysis Report.

  6. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book covers quantitative analytical chemistry. It is divided into ten chapters, which deal with the basic concepts of analytical chemistry and SI units, chemical equilibrium, basic preparation for quantitative analysis, an introduction to volumetric analysis, acid-base titration with an outline and worked examples, chelate titration, oxidation-reduction titration (introduction, titration curves, and diazotization titration), precipitation titration, electrometric titration, and quantitative analysis.

  7. K Basin Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    PECH, S.H.

    2000-08-23

    This report describes the methodology used in conducting the K Basins Hazard Analysis, which provides the foundation for the K Basins Final Safety Analysis Report. This hazard analysis was performed in accordance with guidance provided by DOE-STD-3009-94, Preparation Guide for U. S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports and implements the requirements of DOE Order 5480.23, Nuclear Safety Analysis Report.

  8. K Basins Hazard Analysis

    International Nuclear Information System (INIS)

    WEBB, R.H.

    1999-01-01

    This report describes the methodology used in conducting the K Basins Hazard Analysis, which provides the foundation for the K Basins Safety Analysis Report (HNF-SD-WM-SAR-062/Rev.4). This hazard analysis was performed in accordance with guidance provided by DOE-STD-3009-94, Preparation Guide for U. S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, and implements the requirements of DOE Order 5480.23, Nuclear Safety Analysis Report.

  9. Qualitative Content Analysis

    OpenAIRE

    Philipp Mayring

    2000-01-01

    The article describes an approach to systematic, rule-guided qualitative text analysis, which tries to preserve some methodological strengths of quantitative content analysis and to widen them into a concept of qualitative procedure. First the development of content analysis is delineated and the basic principles are explained (units of analysis, step models, working with categories, validity and reliability). Then the central procedures of qualitative content analysis, inductive development of ca...

  10. RELIABILITY ANALYSIS OF BENDING ...

    African Journals Online (AJOL)

    eobe

    Reliability analysis of the safety levels of the criteria slabs, have been .... was also noted [2] that if the risk level or β < 3.1), the ... reliability analysis. A study [6] has shown that all geometric variables, ..... Germany, 1988. 12. Hasofer, A. M and ...

  11. DTI analysis methods : Voxel-based analysis

    NARCIS (Netherlands)

    Van Hecke, Wim; Leemans, Alexander; Emsell, Louise

    2016-01-01

    Voxel-based analysis (VBA) of diffusion tensor imaging (DTI) data permits the investigation of voxel-wise differences or changes in DTI metrics in every voxel of a brain dataset. It is applied primarily in the exploratory analysis of hypothesized group-level alterations in DTI parameters, as it does

  12. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials, is tested by the statistic T...
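    One common form of such a duplicate-based consistency statistic (an assumption here; the truncated record does not spell out the definition) sums squared differences between duplicate results, each scaled by its stated variance. When the precision estimates are adequate, the statistic follows a chi-square distribution with as many degrees of freedom as there are duplicate pairs. A sketch with simulated duplicates:

    ```python
    import numpy as np

    def duplicate_t(x1, x2, s1, s2):
        """T statistic comparing observed duplicate differences with their
        stated uncertainties; ~ chi-square with len(x1) degrees of freedom
        when the stated precision accounts for the observed variation."""
        return np.sum((x1 - x2) ** 2 / (s1 ** 2 + s2 ** 2))

    rng = np.random.default_rng(1)
    n = 50
    true = rng.uniform(10, 20, n)            # true analyte levels
    s = np.full(n, 0.5)                      # stated one-sigma precision
    x1 = true + rng.normal(0, 0.5, n)        # first analysis of each sample
    x2 = true + rng.normal(0, 0.5, n)        # duplicate analysis
    T = duplicate_t(x1, x2, s, s)            # expect T close to n = 50
    ```

    A T value far above n would indicate that the claimed single-result precision underestimates the real variation between duplicates.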

  13. Choice Model and Influencing Factor Analysis of Travel Mode for Migrant Workers: Case Study in Xi’an, China

    OpenAIRE

    Hong Chen; Zuo-xian Gan; Yu-ting He

    2015-01-01

    Based on the basic theory and methods of the disaggregate choice model, the influencing factors in travel mode choice for migrant workers are analyzed, using 1366 data samples of Xi’an migrant workers. Walking, bus, subway, and taxi are taken as the travel mode alternatives for migrant workers, and a multinomial logit (MNL) model of travel mode for migrant workers is set up. The validity of the model is verified by the hit rate, and the hit rates of the four travel modes are all great...
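    An MNL model assigns each traveler choice probabilities proportional to exp(utility) of each alternative, and the hit rate checks how often the highest-probability mode matches the observed choice. A minimal sketch with made-up covariates (not the paper's survey variables), fitting by gradient descent on the log-likelihood:

    ```python
    import numpy as np

    MODES = ["walk", "bus", "subway", "taxi"]

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit_mnl(X, y, classes=4, lr=0.1, steps=2000):
        """Fit a multinomial logit model by gradient descent on the
        negative log-likelihood (a minimal sketch, not the paper's code)."""
        n, d = X.shape
        W = np.zeros((d, classes))
        Y = np.eye(classes)[y]                 # one-hot observed choices
        for _ in range(steps):
            P = softmax(X @ W)
            W -= lr * X.T @ (P - Y) / n        # gradient of the NLL
        return W

    # Hypothetical sample: utilities depend on an intercept and two covariates.
    rng = np.random.default_rng(2)
    n = 1366                                   # sample size as in the record
    X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, 2))])
    W_true = np.array([[0.0, 0.3, -0.2, 0.1],
                       [0.0, 0.8, -0.5, 1.0],
                       [0.0, -0.6, 0.9, 0.4]])
    y = np.array([rng.choice(4, p=p) for p in softmax(X @ W_true)])

    W = fit_mnl(X, y)
    hit_rate = np.mean((X @ W).argmax(axis=1) == y)   # in-sample hit rate
    ```

    With informative covariates the hit rate clearly exceeds the 25% chance level of four alternatives, which is the kind of validity check the abstract describes.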

  14. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U. S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  15. Circuit analysis for dummies

    CERN Document Server

    Santiago, John

    2013-01-01

    Circuits overloaded from electric circuit analysis? Many universities require that students pursuing a degree in electrical or computer engineering take an Electric Circuit Analysis course to determine who will "make the cut" and continue in the degree program. Circuit Analysis For Dummies will help these students to better understand electric circuit analysis by presenting the information in an effective and straightforward manner. Circuit Analysis For Dummies gives you clear-cut information about the topics covered in electric circuit analysis courses to help

  16. Cluster analysis for applications

    CERN Document Server

    Anderberg, Michael R

    1973-01-01

    Cluster Analysis for Applications deals with methods and various applications of cluster analysis. Topics covered range from variables and scales to measures of association among variables and among data units. Conceptual problems in cluster analysis are discussed, along with hierarchical and non-hierarchical clustering methods. The necessary elements of data analysis, statistics, cluster analysis, and computer implementation are integrated vertically to cover the complete path from raw data to a finished analysis. Comprised of 10 chapters, this book begins with an introduction to the subject o

  17. It's Not Just "Any" Day: When the Sun Rises on D-Day at One Rural District, Educators Meet to Disaggregate the Data

    Science.gov (United States)

    Beck, Lisa D.

    2008-01-01

    No Child Left Behind (NCLB) brought with it a barrage of data from standardized tests, but when do teachers have time to analyze student data? The first days of school are hectic preparing classrooms, organizing supplies, learning the names on class rosters, and completing mounds of paperwork. This article describes D-Day--a day for data…

  18. Activation analysis in food analysis. Pt. 9

    International Nuclear Information System (INIS)

    Szabo, S.A.

    1992-01-01

    An overview is presented on the application of activation analysis (AA) techniques for food analysis, as reflected at a recent international conference titled Activation Analysis and its Applications. The most popular analytical techniques include instrumental neutron AA (INAA or NAA), radiochemical NAA (RNAA), X-ray fluorescence analysis and mass spectrometry. Data are presented for the multielemental NAA of instant soups, for elemental composition of drinking water in Iraq, for Na, K, Mn contents of various Indian rices, for As, Hg, Sb and Se determination in various seafoods, for daily microelement intake in China, for the elemental composition of Chinese teas. Expected development trends in AA are outlined. (R.P.) 24 refs.; 8 tabs

  19. Synovial fluid analysis

    Science.gov (United States)

    Joint fluid analysis; Joint fluid aspiration ... El-Gabalawy HS. Synovial fluid analysis, synovial biopsy, and synovial pathology. In: Firestein GS, Budd RC, Gabriel SE, McInnes IB, O'Dell JR, eds. Kelly's Textbook of ...

  20. Ecosystem Analysis Program

    International Nuclear Information System (INIS)

    Burgess, R.L.

    1978-01-01

    Progress is reported on the following research programs: analysis and modeling of ecosystems; EDFB/IBP data center; biome analysis studies; land/water interaction studies; and computer programs for development of models

  1. Confirmatory Composite Analysis

    NARCIS (Netherlands)

    Schuberth, Florian; Henseler, Jörg; Dijkstra, Theo K.

    2018-01-01

    We introduce confirmatory composite analysis (CCA) as a structural equation modeling technique that aims at testing composite models. CCA entails the same steps as confirmatory factor analysis: model specification, model identification, model estimation, and model testing. Composite models are

  2. Introductory numerical analysis

    CERN Document Server

    Pettofrezzo, Anthony J

    2006-01-01

    Written for undergraduates who require a familiarity with the principles behind numerical analysis, this classical treatment encompasses finite differences, least squares theory, and harmonic analysis. Over 70 examples and 280 exercises. 1967 edition.

  3. Gap Analysis: Application to Earned Value Analysis

    OpenAIRE

    Langford, Gary O.; Franck, Raymond (Chip)

    2008-01-01

    Sponsored Report (for Acquisition Research Program) Earned Value is regarded as a useful tool to monitor commercial and defense system acquisitions. This paper applies the theoretical foundations and systematics of Gap Analysis to improve Earned Value Management. As currently implemented, Earned Value inaccurately provides a higher value for the work performed. This preliminary research indicates that Earned Value calculations can be corrected. Value Analysis, properly defined and enacted,...

  4. Importance-performance analysis based SWOT analysis

    OpenAIRE

    Phadermrod, Boonyarat; Crowder, Richard M.; Wills, Gary B.

    2016-01-01

    SWOT analysis, a commonly used tool for strategic planning, is traditionally a form of brainstorming. Hence, it has been criticised that it is likely to hold subjective views of the individuals who participate in a brainstorming session and that SWOT factors are not prioritized by their significance thus it may result in an improper strategic action. While most studies of SWOT analysis have only focused on solving these shortcomings separately, this study offers an approach to diminish both s...

  5. Discourse analysis and Foucault's

    Directory of Open Access Journals (Sweden)

    Jansen I.

    2008-01-01

    Discourse analysis is a method that has until now been less recognized in nursing science, although more recently nursing scientists are discovering it for their purposes. However, several authors have criticized that discourse analysis is often misinterpreted because of a lack of understanding of its theoretical backgrounds. In this article, I reconstruct Foucault’s writings in his “Archaeology of Knowledge” to provide a theoretical base for future archaeological discourse analysis, which can be categorized as a socio-linguistic discourse analysis.

  6. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  7. Applied longitudinal analysis

    CERN Document Server

    Fitzmaurice, Garrett M; Ware, James H

    2012-01-01

    Praise for the First Edition: ". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis." -Journal of the American Statistical Association. Features newly developed topics and applications of the analysis of longitudinal data. Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of lo

  8. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded
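    A tiny regression-by-example of the kind the book teaches, with hypothetical data: fit y = b0 + b1·x by least squares and compute R² from the residual sum of squares.

    ```python
    import numpy as np

    # Made-up data with known relationship y = 2.0 + 0.7*x plus noise.
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, 100)
    y = 2.0 + 0.7 * x + rng.normal(0, 0.5, 100)

    # Least-squares fit via the design matrix [1, x].
    X = np.column_stack([np.ones_like(x), x])
    (b0, b1), res, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Coefficient of determination from the residual sum of squares.
    r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
    ```

    The fitted intercept and slope land close to the true 2.0 and 0.7, and R² indicates how much variation the line explains.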

  9. Wind energy analysis system

    OpenAIRE

    2014-01-01

    M.Ing. (Electrical & Electronic Engineering) One of the most important steps to be taken before a site is to be selected for the extraction of wind energy is the analysis of the energy within the wind on that particular site. No wind energy analysis system exists for the measurement and analysis of wind power. This dissertation documents the design and development of a Wind Energy Analysis System (WEAS). Using a micro-controller based design in conjunction with sensors, WEAS measure, calcu...

  10. Slice hyperholomorphic Schur analysis

    CERN Document Server

    Alpay, Daniel; Sabadini, Irene

    2016-01-01

    This book defines and examines the counterpart of Schur functions and Schur analysis in the slice hyperholomorphic setting. It is organized into three parts: the first introduces readers to classical Schur analysis, while the second offers background material on quaternions, slice hyperholomorphic functions, and quaternionic functional analysis. The third part represents the core of the book and explores quaternionic Schur analysis and its various applications. The book includes previously unpublished results and provides the basis for new directions of research.

  11. Computational movement analysis

    CERN Document Server

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: Conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and using decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fi

  12. Descriptive data analysis.

    Science.gov (United States)

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.
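    As a small illustration of the descriptive statistics the series introduces, using the Python standard library on hypothetical scores:

    ```python
    import statistics as st

    # Hypothetical exam scores for a class of ten.
    scores = [71, 74, 78, 80, 80, 82, 85, 88, 90, 95]

    summary = {
        "n": len(scores),
        "mean": st.mean(scores),                # central tendency
        "median": st.median(scores),            # robust central tendency
        "mode": st.mode(scores),                # most frequent value
        "stdev": round(st.stdev(scores), 2),    # sample standard deviation
        "range": max(scores) - min(scores),     # simple spread measure
    }
    ```

    These are the quantities a statistical analysis plan typically reports before any inferential step.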

  13. Trend Analysis Using Microcomputers.

    Science.gov (United States)

    Berger, Carl F.

    A trend analysis statistical package and additional programs for the Apple microcomputer are presented. They illustrate strategies of data analysis suitable to the graphics and processing capabilities of the microcomputer. The programs analyze data sets using examples of: (1) analysis of variance with multiple linear regression; (2) exponential…

  14. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
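    The mediated effect is the product a·b of the X→M coefficient and the M→Y (given X) coefficient. A crude sketch of the Bayesian idea under flat priors (an assumption; not the authors' full model): approximate each coefficient's posterior as normal around its OLS estimate and summarize the Monte Carlo distribution of a·b with a credible interval.

    ```python
    import numpy as np

    def ols(X, y):
        """OLS coefficients and their standard errors."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
        return beta, se

    # Simulated mediation: X affects M (a = 0.6), M affects Y (b = 0.5).
    rng = np.random.default_rng(4)
    n = 200
    x = rng.normal(size=n)
    m = 0.6 * x + rng.normal(size=n)              # mediator model
    y = 0.5 * m + 0.2 * x + rng.normal(size=n)    # outcome model

    ones = np.ones(n)
    (_, a), (_, se_a) = ols(np.column_stack([ones, x]), m)
    (_, b, _), (_, se_b, _) = ols(np.column_stack([ones, m, x]), y)

    # Normal-approximation "posterior" draws of a*b and a 95% interval.
    draws = rng.normal(a, se_a, 10000) * rng.normal(b, se_b, 10000)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    ```

    An interval excluding zero supports a mediated effect; a full Bayesian analysis as proposed in the article would instead place explicit priors on all parameters and sample the joint posterior.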

  15. Automation of activation analysis

    International Nuclear Information System (INIS)

    Ivanov, I.N.; Ivanets, V.N.; Filippov, V.V.

    1985-01-01

    The basic data on the methods and equipment of activation analysis are presented. Recommendations on the selection of activation analysis techniques, and especially the technique envisaging the use of short-lived isotopes, are given. The equipment possibilities to increase dataway carrying capacity, using modern computers for the automation of the analysis and data processing procedure, are shown

  16. Practical data analysis

    CERN Document Server

    Cuesta, Hector

    2013-01-01

    Each chapter of the book quickly introduces a key 'theme' of Data Analysis, before immersing you in the practical aspects of each theme. You'll learn quickly how to perform all aspects of Data Analysis. Practical Data Analysis is a book ideal for home and small business users who want to slice & dice the data they have on hand with minimum hassle.

  17. Mathematical analysis fundamentals

    CERN Document Server

    Bashirov, Agamirza

    2014-01-01

    The author's goal is a rigorous presentation of the fundamentals of analysis, starting from the elementary level and moving to advanced coursework. The curricula of all mathematics (pure or applied) and physics programs include a compulsory course in mathematical analysis. This book can serve as the main textbook for such (one-semester) courses. The book can also serve as additional reading for courses such as real analysis, functional analysis, harmonic analysis, etc. For non-math-major students requiring math beyond calculus, this is a more friendly approach than many math-centric o

  18. Foundations of mathematical analysis

    CERN Document Server

    Johnsonbaugh, Richard

    2010-01-01

    This classroom-tested volume offers a definitive look at modern analysis, with views of applications to statistics, numerical analysis, Fourier series, differential equations, mathematical analysis, and functional analysis. Upper-level undergraduate students with a background in calculus will benefit from its teachings, along with beginning graduate students seeking a firm grounding in modern analysis. A self-contained text, it presents the necessary background on the limit concept, and the first seven chapters could constitute a one-semester introduction to limits. Subsequent chapters discuss

  19. Analysis in usability evaluations

    DEFF Research Database (Denmark)

    Følstad, Asbjørn; Lai-Chong Law, Effie; Hornbæk, Kasper

    2010-01-01

    While the planning and implementation of usability evaluations are well described in the literature, the analysis of the evaluation data is not. We present interviews with 11 usability professionals on how they conduct analysis, describing the resources, collaboration, creation of recommendations, and prioritization involved. The interviews indicate a lack of structure in the analysis process and suggest activities, such as generating recommendations, that are unsupported by existing methods. We discuss how to better support analysis, and propose four themes for future research on analysis in usability...

  20. http Log Analysis

    DEFF Research Database (Denmark)

    Bøving, Kristian Billeskov; Simonsen, Jesper

    2004-01-01

    This article documents how log analysis can inform qualitative studies concerning the usage of web-based information systems (WIS). No prior research has used http log files as data to study collaboration between multiple users in organisational settings. We investigate how to perform http log analysis; what http log analysis says about the nature of collaborative WIS use; and how results from http log analysis may support other data collection methods such as surveys, interviews, and observation. The analysis of log files initially lends itself to research designs which serve to test hypotheses using a quantitative methodology. We show that http log analysis can also be valuable in qualitative research such as case studies. The results from http log analysis can be triangulated with other data sources and, for example, serve as a means of supporting the interpretation of interview data...
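    A minimal sketch of the quantitative starting point for such log analysis: parsing Common Log Format lines (hypothetical entries, not the authors' data) and counting requests per user and per page, counts that could then be triangulated with interviews or observation.

    ```python
    import re
    from collections import Counter

    # Common Log Format: host ident user [time] "method path proto" status size
    LOG_RE = re.compile(
        r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
    )

    lines = [
        '10.0.0.5 - alice [21/Jun/2002:10:04:12 +0200] "GET /project/plan.html HTTP/1.0" 200 5120',
        '10.0.0.7 - bob [21/Jun/2002:10:05:02 +0200] "GET /project/plan.html HTTP/1.0" 200 5120',
        '10.0.0.5 - alice [21/Jun/2002:10:06:41 +0200] "POST /project/comment HTTP/1.0" 302 221',
    ]

    hits = [LOG_RE.match(l).groupdict() for l in lines]
    by_user = Counter(h["user"] for h in hits)   # who used the WIS, how often
    by_path = Counter(h["path"] for h in hits)   # which shared pages were touched
    ```

    Two users fetching the same page, for instance, is exactly the kind of collaborative-use trace the article suggests interpreting alongside interview data.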

  1. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  2. Multivariate analysis with LISREL

    CERN Document Server

    Jöreskog, Karl G; Y Wallentin, Fan

    2016-01-01

    This book traces the theory and methodology of multivariate statistical analysis and shows how it can be conducted in practice using the LISREL computer program. It presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis. It provides numerous examples from several disciplines and discusses and interprets the results, illustrated with sections of output from the LISREL program, in the context of the example. The book is intended for masters and PhD students and researchers in the social, behavioral, economic and many other sciences who require a basic understanding of multivariate statistical theory and methods for their analysis of multivariate data. It can also be used as a textbook on various topics of multivariate statistical analysis.

  3. Visual physics analysis VISPA

    International Nuclear Information System (INIS)

    Actis, Oxana; Brodski, Michael; Erdmann, Martin; Fischer, Robert; Hinzmann, Andreas; Mueller, Gero; Muenzer, Thomas; Plum, Matthias; Steggemann, Jan; Winchen, Tobias; Klimkovich, Tatsiana

    2010-01-01

    VISPA is a development environment for high energy physics analyses which enables physicists to combine graphical and textual work. A physics analysis cycle consists of prototyping, performing, and verifying the analysis. The main feature of VISPA is a multipurpose window for visual steering of analysis steps, creation of analysis templates, and browsing physics event data at different steps of an analysis. VISPA follows an experiment-independent approach and incorporates various tools for steering and controlling required in a typical analysis. Connection to different frameworks of high energy physics experiments is achieved by using different types of interfaces. We present the look-and-feel for an example physics analysis at the LHC and explain the underlying software concepts of VISPA.

  4. Cost benefit analysis cost effectiveness analysis

    International Nuclear Information System (INIS)

    Lombard, J.

    1986-09-01

    The purpose of the ALARA procedure is to compare various protection options in order to determine the best compromise between the cost of protection and the residual risk. The use of decision-aiding techniques is valuable as an aid to selection procedures. The purpose of this study is to introduce two rather simple and well-known decision-aiding techniques: cost-effectiveness analysis and cost-benefit analysis. These two techniques are relevant for the great majority of ALARA decisions that require a quantitative technique. The study is based on a hypothetical case of 10 protection options. Four methods are applied to the data.
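    A toy sketch of the cost-effectiveness side of the comparison, with hypothetical options and numbers (a real ALARA cost-benefit analysis would also weigh the monetary value assigned to averted collective dose):

    ```python
    # Hypothetical protection options: cost in k$, averted collective dose in man-Sv.
    options = {
        "A": {"cost": 120.0, "averted": 0.8},
        "B": {"cost": 200.0, "averted": 1.1},
        "C": {"cost": 60.0,  "averted": 0.5},
    }

    # Cost-effectiveness ratio: cost per unit of averted collective dose.
    ratios = {name: o["cost"] / o["averted"] for name, o in options.items()}

    # The most cost-effective option has the lowest cost per man-Sv averted.
    best = min(ratios, key=ratios.get)
    ```

    Ranking options by this ratio is the cost-effectiveness technique; cost-benefit analysis would instead keep only those options whose cost per man-Sv stays below the adopted monetary reference value.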

  5. HAZARD ANALYSIS SOFTWARE

    International Nuclear Information System (INIS)

    Sommer, S; Tinh Tran, T.

    2008-01-01

    Washington Safety Management Solutions, LLC developed web-based software to improve the efficiency and consistency of hazard identification and analysis, control selection and classification, and to standardize analysis reporting at Savannah River Site. In the new nuclear age, information technology provides methods to improve the efficiency of the documented safety analysis development process which includes hazard analysis activities. This software provides a web interface that interacts with a relational database to support analysis, record data, and to ensure reporting consistency. A team of subject matter experts participated in a series of meetings to review the associated processes and procedures for requirements and standard practices. Through these meetings, a set of software requirements were developed and compiled into a requirements traceability matrix from which software could be developed. The software was tested to ensure compliance with the requirements. Training was provided to the hazard analysis leads. Hazard analysis teams using the software have verified its operability. The software has been classified as NQA-1, Level D, as it supports the analysis team but does not perform the analysis. The software can be transported to other sites with alternate risk schemes. The software is being used to support the development of 14 hazard analyses. User responses have been positive with a number of suggestions for improvement which are being incorporated as time permits. The software has enforced a uniform implementation of the site procedures. The software has significantly improved the efficiency and standardization of the hazard analysis process

  6. Graig Lwyd (Group VII) Lithic Assemblages from the Excavations at Parc Bryn Cegin, Llandygai, Gwynedd, Wales – Analysis and Interpretation

    Directory of Open Access Journals (Sweden)

    J. Ll. W. Williams

    2009-09-01

    This article draws attention to a unique sequence of flaked debitage at Bryn Cegin and describes the conscious exfoliation of carefully and laboriously executed ground and polished axes of Graig Lwyd source rock over a protracted time-span, covering the whole of the Neolithic period in Wales. It also draws attention to the same phenomenon being practised in other parts of Britain, establishing it as one of a number of specialised acts involving the burial of stone and flint axes. It is argued that Graig Lwyd polished axes, unflaked nuclei, and axe-making debitage were brought from the source area to Bryn Cegin, where they were systematically disaggregated and the resultant flaked assemblage buried in a series of pits. The act of disaggregation appears not to have been undertaken for any domestic/utilitarian purpose, but may be considered as an example of the ritual fragmentation of a highly valued commodity, a phenomenon that has been identified elsewhere in the Neolithic of Britain.

  7. Metal oxide nanoparticle transport in porous media – an analysis about (un)certainties in environmental research

    International Nuclear Information System (INIS)

    Heidmann, I

    2013-01-01

    Research on the fate and behavior of engineered nanoparticles in the environment is, despite their wide applications, still in its early stages. The fast-growing area of nanoparticle research and the high level of uncertainty make it challenging to describe clearly the current state of scientific knowledge. Therefore, this study analyzes the certain knowledge, the known uncertainties, and the identified knowledge gaps concerning the mobility of engineered metal oxide nanoparticles in porous media. The mobility of nanoparticles is mainly investigated in model laboratory studies under well-defined conditions, which are often not realistic for natural systems. In these model systems, nanoparticles are often retained in the pore system due to aggregation and sedimentation. However, under environmental conditions, the presence of natural organic matter may stabilize or disaggregate nanoparticles and therefore favor their higher mobility. Additionally, the potentially higher mobility of particles along preferential flow paths is not considered. Knowledge of the long-term behavior of nanoparticles concerning disaggregation, dissolution, or remobilization in soils under environmental conditions is scarce. Scientific uncertainty itself is rarely mentioned in the research papers. Methodological uncertainties in nanoparticle characterization are seldom referred to. The uncertainty about the transferability of the results to environmental conditions is discussed more often. Given the sparsity of studies on natural material or natural pore systems, firm conclusions about the mobility of nanoparticles in the soil environment cannot yet be drawn.

  8. Quantitative analysis of investment allocation over various resources of health care systems by using views of product lines

    Science.gov (United States)

    Tai, Guangfu; Williams, Peter

    2013-11-01

    Hospitals can be viewed as service enterprises, of which the primary function is to provide specific sets of diagnostic and therapeutic medical services to individual patients. Each patient has certain diagnostic and therapeutic attributes in common with some other patients. Thus, patients with similar medical attributes could be 'processed' in one 'product line' of medical services, and individual treatments for patients within one 'product line' can be regarded as incurring identical consumption of health care resources. This article presents a theoretical framework for resource planning and investment allocation across various resources from a macro perspective of costs, demonstrating the need to plan capacity at the disaggregated resource level. The result of a balanced line ('optimal') is compared with an alternative scheme, 'the same ratio composition of resources', under the same monetary constraints. It is thus demonstrated that planning at the disaggregated level affords much better use of resources than the common practice of budget control by simple percentage increases or decreases in distributing a financial vote.

  9. Functional analysis and applications

    CERN Document Server

    Siddiqi, Abul Hasan

    2018-01-01

    This self-contained textbook discusses all major topics in functional analysis. Combining classical materials with new methods, it supplies numerous relevant solved examples and problems and discusses the applications of functional analysis in diverse fields. The book is unique in its scope, and a variety of applications of functional analysis and operator-theoretic methods are devoted to each area of application. Each chapter includes a set of problems, some of which are routine and elementary, and some of which are more advanced. The book is primarily intended as a textbook for graduate and advanced undergraduate students in applied mathematics and engineering. It offers several attractive features making it ideally suited for courses on functional analysis intended to provide a basic introduction to the subject and the impact of functional analysis on applied and computational mathematics, nonlinear functional analysis and optimization. It introduces emerging topics like wavelets, Gabor system, inverse pro...

  10. From analysis to surface

    DEFF Research Database (Denmark)

    Bemman, Brian; Meredith, David

    In recent years, a significant body of research has focused on developing algorithms for computing analyses of musical works automatically from encodings of these works' surfaces [3,4,7,10,11]. The quality of the output of such analysis algorithms is typically evaluated by comparing it with a “ground truth” analysis of the same music produced by a human expert (see, in particular, [5]). In this paper, we explore the problem of generating an encoding of the musical surface of a work automatically from a systematic encoding of an analysis. The ability to do this depends on one having an effective (i.e., computable), correct and complete description of some aspect of the structure of the music. Generating the surface structure of a piece from an analysis in this manner serves as a proof of the analysis' correctness, effectiveness and completeness. We present a reductive analysis...

  11. Fundamentals of functional analysis

    CERN Document Server

    Farenick, Douglas

    2016-01-01

    This book provides a unique path for graduate or advanced undergraduate students to begin studying the rich subject of functional analysis with fewer prerequisites than is normally required. The text begins with a self-contained and highly efficient introduction to topology and measure theory, which focuses on the essential notions required for the study of functional analysis, and which are often buried within full-length overviews of the subjects. This is particularly useful for those in applied mathematics, engineering, or physics who need to have a firm grasp of functional analysis, but not necessarily some of the more abstruse aspects of topology and measure theory normally encountered. The reader is assumed to only have knowledge of basic real analysis, complex analysis, and algebra. The latter part of the text provides an outstanding treatment of Banach space theory and operator theory, covering topics not usually found together in other books on functional analysis. Written in a clear, concise manner,...

  12. Computational Music Analysis

    DEFF Research Database (Denmark)

    This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today ... on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering ... music analysis, the book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry.

  13. Analysis apparatus and method of analysis

    International Nuclear Information System (INIS)

    1976-01-01

    A continuous streaming method developed for the execution of immunoassays is described in this patent. In addition, a suitable apparatus for the method was developed, whereby magnetic particles are automatically employed for the consecutive analysis of a series of liquid samples via the RIA technique

  14. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis, incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach

  15. Emission spectrochemical analysis

    International Nuclear Information System (INIS)

    Rives, R.D.; Bruks, R.R.

    1983-01-01

    The emission spectrochemical method of analysis, based on the fact that atoms of elements can be excited in an electric arc or a laser beam and will emit radiation with characteristic wavelengths, is considered. The review contains data on the spectrochemical analysis of liquids and geological materials and a scheme of a laser microprobe. The main characteristics of emission spectroscopy, atomic absorption spectroscopy and X-ray fluorescence analysis are generalized

  16. LULU analysis program

    International Nuclear Information System (INIS)

    Crawford, H.J.; Lindstrom, P.J.

    1983-06-01

    Our analysis program LULU has proven very useful in all stages of experiment analysis, from prerun detector debugging through final data reduction. It has solved our problem of having arbitrary word length events and is easy enough to use that many separate experimenters are now analyzing with LULU. The ability to use the same software for all stages of experiment analysis greatly eases the programming burden. We may even get around to making the graphics elegant someday

  17. Mastering Clojure data analysis

    CERN Document Server

    Rochester, Eric

    2014-01-01

    This book takes a practical, example-oriented approach that aims to help you learn how to use Clojure for data analysis quickly and efficiently. It is great for those who have experience with Clojure and need to use it to perform data analysis, and it will also be hugely beneficial for readers with basic experience in data analysis and statistics.

  18. Fast neutron activation analysis

    International Nuclear Information System (INIS)

    Pepelnik, R.

    1986-01-01

    Since 1981, numerous 14 MeV neutron activation analyses have been performed at Korona. On the basis of that work, the advantages of this analysis technique and the results obtained with it are compared with other analytical methods. The procedure of activation analysis, the characteristics of Korona, some analytical investigations in environmental research and materials physics, as well as sources of systematic errors in trace analysis are described. (orig.) [de

  19. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  20. The ATLAS Analysis Architecture

    International Nuclear Information System (INIS)

    Cranmer, K.S.

    2008-01-01

    We present an overview of the ATLAS analysis architecture including the relevant aspects of the computing model and the major architectural aspects of the Athena framework. Emphasis will be given to the interplay between the analysis use cases and the technical aspects of the architecture including the design of the event data model, transient-persistent separation, data reduction strategies, analysis tools, and ROOT interoperability

  1. U.S. regional greenhouse gas emissions analysis comparing highly resolved vehicle miles traveled and CO2 emissions: mitigation implications and their effect on atmospheric measurements

    Science.gov (United States)

    Mendoza, D. L.; Gurney, K. R.

    2010-12-01

    Carbon dioxide (CO2) is the most abundant anthropogenic greenhouse gas, and projections of fossil fuel energy demand show CO2 concentrations increasing indefinitely into the future. After electricity production, the transportation sector is the second largest CO2-emitting economic sector in the United States, accounting for 32.3% of total U.S. emissions in 2002. Over 80% of the transport sector is composed of onroad emissions, with the remainder shared by nonroad, aircraft, railroad, and commercial marine vessel transportation. In order to construct effective mitigation policy for the onroad transportation sector and more accurately predict CO2 emissions for use in transport models and atmospheric measurements, analysis must incorporate the three components that determine onroad transport CO2 emissions: vehicle fleet composition, average speed of travel, and emissions regulation strategies. Studies to date, however, have either focused on only one of these three components, have been completed only at the national scale, or have not explicitly represented CO2 emissions, relying instead on vehicle miles traveled (VMT) as an emissions proxy. National-level projections of VMT growth are not sufficient to highlight regional differences in CO2 emissions growth, due to the heterogeneity of the vehicle fleet and of each state’s road network, which determines the speed of travel of vehicles. We examine how an analysis based on direct CO2 emissions and an analysis based on VMT differ in terms of their emissions and mitigation implications, highlighting potential biases introduced by the VMT-based approach. This analysis is performed at the US state level, and results are disaggregated by road and vehicle classification. We utilize the results of the Vulcan fossil fuel CO2 emissions inventory, which quantified emissions for the year 2002 across all economic sectors in the US at high resolution. We perform this comparison by fuel type, 12 road types, and 12 vehicle types

  2. Circuit analysis with Multisim

    CERN Document Server

    Baez-Lopez, David

    2011-01-01

    This book is concerned with circuit simulation using National Instruments Multisim. It focuses on the use and comprehension of the working techniques for electrical and electronic circuit simulation. The first chapters are devoted to basic circuit analysis. It starts by describing in detail how to perform a DC analysis using only resistors and independent and controlled sources. Then, it introduces capacitors and inductors to make a transient analysis. In the case of transient analysis, it is possible to have an initial condition either in the capacitor voltage or in the inductor current, or bo

  3. Textile Technology Analysis Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Textile Analysis Lab is built for evaluating and characterizing the physical properties of an array of textile materials, but specifically those used in aircrew...

  4. International Market Analysis

    DEFF Research Database (Denmark)

    Sørensen, Olav Jull

    2009-01-01

    The review presents the book International Market Analysis: Theories and Methods, written by John Kuiada, professor at the Centre of International Business, Department of Business Studies, Aalborg University. The book is refreshingly new in its way of looking at a classical problem: it looks at market analysis from the point of view of ways of thinking about markets. Furthermore, the book includes the concept of learning in the analysis of markets and how the way we understand business reality influences our choice of methodology for market analysis....

  5. Chemical Security Analysis Center

    Data.gov (United States)

    Federal Laboratory Consortium — In 2006, by Presidential Directive, DHS established the Chemical Security Analysis Center (CSAC) to identify and assess chemical threats and vulnerabilities in the...

  6. Geospatial Data Analysis Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Geospatial application development, location-based services, spatial modeling, and spatial analysis are examples of the many research applications that this facility...

  7. Analysis of food contaminants

    National Research Council Canada - National Science Library

    Gilbert, John

    1984-01-01

    ... quantification methods used in the analysis of mycotoxins in foods - Confirmation and quantification of trace organic food contaminants by mass spectrometry-selected ion monitoring - Chemiluminescence...

  8. Chemical Analysis Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Uses state-of-the-art instrumentation for qualitative and quantitative analysis of organic and inorganic compounds, and biomolecules from gas, liquid, and...

  9. Thermogravimetric Analysis Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — At NETL’s Thermogravimetric Analysis Laboratory in Morgantown, WV, researchers study how chemical looping combustion (CLC) can be applied to fossil energy systems....

  10. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  11. Stimulation of Cysteine-Coated CdSe/ZnS Quantum Dot Luminescence by meso-Tetrakis (p-sulfonato-phenyl) Porphyrin

    Science.gov (United States)

    Parra, Gustavo G.; Ferreira, Lucimara P.; Gonçalves, Pablo J.; Sizova, Svetlana V.; Oleinikov, Vladimir A.; Morozov, Vladimir N.; Kuzmin, Vladimir A.; Borissevitch, Iouri E.

    2018-02-01

    Interaction between porphyrins and quantum dots (QD) via energy and/or charge transfer is usually accompanied by a reduction of the QD luminescence intensity and lifetime. However, for CdSe/ZnS-Cys QD water solutions kept at 276 K for 3 months (aged QD), a significant increase in luminescence intensity upon addition of meso-tetrakis (p-sulfonato-phenyl) porphyrin (TPPS4) was observed in this study. Aggregation of QD during storage reduces the quantum yield and lifetime of their luminescence. Using steady-state and time-resolved fluorescence techniques, we demonstrated that TPPS4 stimulated disaggregation of aged CdSe/ZnS-Cys QD in aqueous solutions, increasing the quantum yield of their luminescence, which finally reached that of freshly prepared QD. Disaggregation takes place due to an increase in electrostatic repulsion between QD upon their binding with negatively charged porphyrin molecules. Binding of just four porphyrin molecules per single QD was sufficient for total QD disaggregation. Analysis of the QD luminescence decay curves demonstrated that disaggregation more strongly affected the luminescence related to electron-hole annihilation in the QD shell. The obtained results demonstrate a way to repair aged QD by adding certain molecules or ions to the solutions, stimulating QD disaggregation and restoring their luminescence characteristics, which could be important for QD biomedical applications such as bioimaging and fluorescence diagnostics. On the other hand, disaggregation is important for QD applications in biology and medicine since it reduces the size of the particles, facilitating their internalization into living cells across the cell membrane.

  12. Analysis of the cooperative ATPase cycle of the AAA+ chaperone ClpB from Thermus thermophilus by using ordered heterohexamers with an alternating subunit arrangement.

    Science.gov (United States)

    Yamasaki, Takashi; Oohata, Yukiko; Nakamura, Toshiki; Watanabe, Yo-hei

    2015-04-10

    The ClpB/Hsp104 chaperone solubilizes and reactivates protein aggregates in cooperation with DnaK/Hsp70 and its cofactors. The ClpB/Hsp104 protomer has two AAA+ modules, AAA-1 and AAA-2, and forms a homohexamer. In the hexamer, these modules form a two-tiered ring in which each tier consists of homotypic AAA+ modules. By ATP binding and its hydrolysis at these AAA+ modules, ClpB/Hsp104 exerts the mechanical power required for protein disaggregation. Although the ATPase cycle of this chaperone has been studied by several groups, an integrated understanding of this cycle has not been obtained because of the complexity of the mechanism and differences between species. To improve our understanding of the ATPase cycle, we prepared many ordered heterohexamers of ClpB from Thermus thermophilus, in which two subunits having different mutations were cross-linked to each other and arranged alternately, and measured their nucleotide binding, ATP hydrolysis, and disaggregation abilities. The results indicated that the ATPase cycle of ClpB proceeded as follows: (i) the 12 AAA+ modules randomly bound ATP, (ii) the binding of four or more ATP to one AAA+ ring was sensed by a conserved Arg residue and converted the other AAA+ ring into the ATPase-active form, and (iii) ATP hydrolysis occurred cooperatively in each ring. We also found that cooperative ATP hydrolysis in at least one ring was needed for the disaggregation activity of ClpB. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  13. Longitudinal Meta-analysis

    NARCIS (Netherlands)

    Hox, J.J.; Maas, C.J.M.; Lensvelt-Mulders, G.J.L.M.

    2004-01-01

    The goal of meta-analysis is to integrate the research results of a number of studies on a specific topic. Characteristic for meta-analysis is that in general only the summary statistics of the studies are used and not the original data. When the published research results to be integrated
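As a minimal illustration of integrating studies from summary statistics alone, as the abstract describes, the sketch below pools invented effect sizes with the standard fixed-effect inverse-variance method; this is a common textbook choice, not necessarily the method used in the paper:

```python
import numpy as np

# Fixed-effect meta-analysis from summary statistics only: no original
# data are needed. Effect sizes and standard errors are hypothetical.
effects = np.array([0.30, 0.45, 0.25, 0.40])  # per-study effect sizes
ses = np.array([0.10, 0.15, 0.12, 0.08])      # per-study standard errors

weights = 1.0 / ses**2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```

Precise studies (small standard errors) dominate the pooled estimate, which is why only summary statistics are required.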

  14. Statistical data analysis

    International Nuclear Information System (INIS)

    Hahn, A.A.

    1994-11-01

    The complexity of instrumentation sometimes requires data analysis to be done before the result is presented to the control room. This tutorial reviews some of the theoretical assumptions underlying the more popular forms of data analysis and presents simple examples to illuminate the advantages and hazards of different techniques

  15. Activation analysis. Detection limits

    International Nuclear Information System (INIS)

    Revel, G.

    1999-01-01

    Numerical data and limits of detection related to the four irradiation modes often used in activation analysis (reactor neutrons, 14 MeV neutrons, gamma photons and charged particles) are presented here. The technical presentation of activation analysis is detailed in paper P 2565 of Techniques de l'Ingenieur. (A.L.B.)

  16. SMART performance analysis methodology

    International Nuclear Information System (INIS)

    Lim, H. S.; Kim, H. C.; Lee, D. J.

    2001-04-01

    To ensure the required and desired operation over the plant lifetime, performance analysis for the SMART NSSS design is carried out by means of analysis methodologies specified for the performance-related design basis events (PRDBE). A PRDBE is an occurrence (event) that shall be accommodated in the design of the plant and whose consequence would be no more severe than the normal service effects of the plant equipment. The performance analysis methodology, which systematizes the methods and procedures for analyzing the PRDBEs, is as follows. Based on the operation modes suited to the characteristics of the SMART NSSS, the corresponding PRDBEs and the allowable ranges of process parameters for these events are deduced. With the control logic developed for each operation mode, the system thermal-hydraulics are analyzed for the chosen PRDBEs using the system analysis code. In particular, because the system characteristics of SMART differ from those of existing commercial nuclear power plants, the operation modes, PRDBEs, control logic, and analysis code should be consistent with the SMART design. This report presents the categories of PRDBEs chosen for each operation mode, the transitions among them, and the acceptance criteria for each PRDBE. It also includes the analysis methods and procedures for each PRDBE and the concept of the control logic for each operation mode. This report, in which the overall details of SMART performance analysis are specified based on the current SMART design, would therefore be utilized as a guide for the detailed performance analysis

  17. Contrast analysis : A tutorial

    NARCIS (Netherlands)

    Haans, A.

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient
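As a hedged illustration of the idea the tutorial describes (not taken from the tutorial itself), the sketch below tests a hypothetical prediction that a third group's mean exceeds the average of the first two, using zero-sum contrast weights and the usual pooled-variance standard error:

```python
import numpy as np

# Hypothetical scores for three groups; the theory under test predicts the
# third group's mean exceeds the average of the first two, encoded by the
# contrast weights (-1, -1, 2), which sum to zero.
groups = [
    np.array([4.0, 5.0, 6.0, 5.0]),
    np.array([5.0, 6.0, 5.0, 4.0]),
    np.array([8.0, 9.0, 7.0, 8.0]),
]
weights = np.array([-1.0, -1.0, 2.0])

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
L = float(weights @ means)                    # value of the contrast

# pooled within-group variance (MS_error) and the contrast's standard error
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = int(ns.sum() - len(groups))
ms_error = ss_within / df_error
se = np.sqrt(ms_error * np.sum(weights**2 / ns))
t = L / se
print(f"L = {L:.2f}, t({df_error}) = {t:.2f}")
```

A large t relative to the t-distribution with `df_error` degrees of freedom supports the directional prediction, which is exactly the kind of focused test the tutorial advocates over an omnibus F-test.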

  18. Interactive Controls Analysis (INCA)

    Science.gov (United States)

    Bauer, Frank H.

    1989-01-01

    Version 3.12 of INCA provides user-friendly environment for design and analysis of linear control systems. System configuration and parameters easily adjusted, enabling INCA user to create compensation networks and perform sensitivity analysis in convenient manner. Full complement of graphical routines makes output easy to understand. Written in Pascal and FORTRAN.

  19. Marketing research cluster analysis

    Directory of Open Access Journals (Sweden)

    Marić Nebojša

    2002-01-01

    Full Text Available One area of applications of cluster analysis in marketing is identification of groups of cities and towns with similar demographic profiles. This paper considers main aspects of cluster analysis by an example of clustering 12 cities with the use of Minitab software.
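The clustering of cities by demographic profile described above can be sketched in Python; the variables, the synthetic data, and the plain k-means routine below are illustrative assumptions, not the Minitab analysis from the paper:

```python
import numpy as np

# Hypothetical demographic profiles for 12 cities (rows); the columns
# (population in 10k, median age, unemployment %) are invented.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([50.0, 30.0, 5.0], 1.0, size=(6, 3)),  # larger, younger cities
    rng.normal([10.0, 45.0, 9.0], 1.0, size=(6, 3)),  # smaller, older towns
])

def kmeans(X, centers, iters=100):
    """Plain Lloyd's algorithm: alternate assignment and center update."""
    centers = centers.copy()
    for _ in range(iters):
        # squared Euclidean distance from every city to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# initialize with one city from each apparent group
labels, centers = kmeans(X, X[[0, 6]])
print(labels)
```

With well-separated profiles like these, the two recovered clusters correspond to the two demographic types, mirroring the grouping of cities the paper obtains with Minitab.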

  20. SWOT ANALYSIS - CHINESE PETROLEUM

    Directory of Open Access Journals (Sweden)

    Chunlan Wang

    2014-01-01

    Full Text Available This article was written in early December 2013 and carries out a SWOT analysis of Chinese Petroleum, combining its historical development with the latest data. The paper discusses corporate resources, cost, management and external factors such as the political environment and market supply and demand, conducting a comprehensive and profound analysis.

  1. Evaluating Style Analysis

    NARCIS (Netherlands)

    de Roon, F.A.; Nijman, T.E.; Ter Horst, J.R.

    2000-01-01

    In this paper we evaluate applications of (return based) style analysis. The portfolio and positivity constraints imposed by style analysis are useful in constructing mimicking portfolios without short positions. Such mimicking portfolios can be used, e.g., to construct efficient portfolios of mutual

  2. Evaluating Style Analysis

    NARCIS (Netherlands)

    F.A. de Roon (Frans); T.E. Nijman (Theo); B.J.M. Werker

    2000-01-01

    In this paper we evaluate applications of (return based) style analysis. The portfolio and positivity constraints imposed by style analysis are useful in constructing mimicking portfolios without short positions. Such mimicking portfolios can be used e.g. to construct efficient
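A minimal sketch of return-based style analysis under the two constraints these papers highlight, nonnegative weights that sum to one; the benchmark series and the fund's true 70/30 mix are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120  # months of hypothetical returns

# two invented style benchmarks
stocks = rng.normal(0.008, 0.04, T)
bonds = rng.normal(0.003, 0.01, T)
# a fund that is (unknown to the analyst) 70% stocks / 30% bonds plus noise
fund = 0.7 * stocks + 0.3 * bonds + rng.normal(0.0, 0.002, T)

# style analysis: choose nonnegative weights summing to one that minimize
# tracking error; with two benchmarks the feasible set is one-dimensional,
# so a simple grid search suffices for this sketch
grid = np.linspace(0.0, 1.0, 1001)
errors = [np.sum((fund - w * stocks - (1.0 - w) * bonds) ** 2) for w in grid]
w_stocks = float(grid[int(np.argmin(errors))])
print(f"estimated stock weight: {w_stocks:.2f}")
```

The recovered weights define a long-only mimicking portfolio of the fund, which is the construction the papers evaluate; with more benchmarks a constrained quadratic-programming solver would replace the grid search.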

  3. Qualitative Content Analysis

    Directory of Open Access Journals (Sweden)

    Satu Elo

    2014-02-01

    Full Text Available Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented by using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies, our own experiences, and methodological textbooks. Trustworthiness was described for the main qualitative content analysis phases from data collection to reporting of the results. We concluded that it is important to scrutinize the trustworthiness of every phase of the analysis process, including the preparation, organization, and reporting of results. Together, these phases should give a reader a clear indication of the overall trustworthiness of the study. Based on our findings, we compiled a checklist for researchers attempting to improve the trustworthiness of a content analysis study. The discussion in this article helps to clarify how content analysis should be reported in a valid and understandable manner, which would be of particular benefit to reviewers of scientific articles. Furthermore, we discuss that it is often difficult to evaluate the trustworthiness of qualitative content analysis studies because of defective data collection method description and/or analysis description.

  4. Cognitive task analysis

    NARCIS (Netherlands)

    Schraagen, J.M.C.

    2000-01-01

    Cognitive task analysis is defined as the extension of traditional task analysis techniques to yield information about the knowledge, thought processes and goal structures that underlie observable task performance. Cognitive task analyses are conducted for a wide variety of purposes, including the

  5. Numerical Limit Analysis:

    DEFF Research Database (Denmark)

    Damkilde, Lars

    2007-01-01

    Limit State analysis has a long history and many prominent researchers have contributed. The theoretical foundation is based on the upper- and lower-bound theorems which give a very comprehensive and elegant formulation on complicated physical problems. In the pre-computer age Limit State analysis also enabled engineers to solve practical problems within reinforced concrete, steel structures and geotechnics....

  6. Isogeometric failure analysis

    NARCIS (Netherlands)

    Verhoosel, C.V.; Scott, M.A.; Borden, M.J.; Borst, de R.; Hughes, T.J.R.; Mueller-Hoeppe, D.; Loehnert, S.; Reese, S.

    2011-01-01

    Isogeometric analysis is a versatile tool for failure analysis. On the one hand, the excellent control over the inter-element continuity conditions enables a natural incorporation of continuum constitutive relations that incorporate higher-order strain gradients, as in gradient plasticity or damage.

  7. Biological sequence analysis

    DEFF Research Database (Denmark)

    Durbin, Richard; Eddy, Sean; Krogh, Anders Stærmose

    This book provides an up-to-date and tutorial-level overview of sequence analysis methods, with particular emphasis on probabilistic modelling. Discussed methods include pairwise alignment, hidden Markov models, multiple alignment, profile searches, RNA secondary structure analysis, and phylogene...

  8. Reactor Safety Analysis

    International Nuclear Information System (INIS)

    Arien, B.

    2000-01-01

    The objective of SCK-CEN's programme on reactor safety is to develop expertise in probabilistic and deterministic reactor safety analysis. The research programme consists of two main activities, in particular the development of software for reliability analysis of large systems and participation in the international PHEBUS-FP programme for severe accidents. Main achievements in 1999 are reported

  9. Factorial Analysis of Profitability

    OpenAIRE

    Georgeta VINTILA; Ilie GHEORGHE; Ioana Mihaela POCAN; Madalina Gabriela ANGHEL

    2012-01-01

    The DuPont analysis system is based on decomposing the profitability ratio into factors of influence. This paper describes the factorial analysis of profitability based on the DuPont system. Significant importance is given to the impact of various indicators on share value and profitability.
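The DuPont decomposition the paper builds on can be shown in a few lines; the financial figures below are hypothetical:

```python
# DuPont decomposition: ROE = net profit margin x asset turnover x equity multiplier
# All figures are illustrative, not taken from the article.
net_income = 120.0
revenue = 1500.0
total_assets = 2000.0
equity = 800.0

net_margin = net_income / revenue          # profitability of sales
asset_turnover = revenue / total_assets    # efficiency of asset use
equity_multiplier = total_assets / equity  # financial leverage

roe = net_margin * asset_turnover * equity_multiplier
# the product collapses back to the plain definition of ROE
assert abs(roe - net_income / equity) < 1e-12
print(f"ROE = {roe:.2%}")
```

Because the three factors multiply exactly to net income over equity, a change in ROE can be attributed factor by factor, which is the point of the factorial analysis.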

  10. Spool assembly support analysis

    International Nuclear Information System (INIS)

    Norman, B.F.

    1994-01-01

    This document provides the wind/seismic analysis and evaluation for the pump pit spool assemblies. Hand calculations were used for the analysis. UBC and AISC load factors were used in this evaluation. The results show that the actual loads are under the allowable loads and all requirements are met

  11. Amplitude and Ascoli analysis

    International Nuclear Information System (INIS)

    Hansen, J.D.

    1976-01-01

    This article discusses the partial wave analysis of two, three and four meson systems. The difference between the two approaches, referred to as amplitude and Ascoli analysis is discussed. Some of the results obtained with these methods are shown. (B.R.H.)

  12. Enabling interdisciplinary analysis

    Science.gov (United States)

    L. M. Reid

    1996-01-01

    New requirements for evaluating environmental conditions in the Pacific Northwest have led to increased demands for interdisciplinary analysis of complex environmental problems. Procedures for watershed analysis have been developed for use on public and private lands in Washington State (Washington Forest Practices Board 1993) and for federal lands in the Pacific...

  13. Shot loading platform analysis

    International Nuclear Information System (INIS)

    Norman, B.F.

    1994-01-01

    This document provides the wind/seismic analysis and evaluation for the shot loading platform. Hand calculations were used for the analysis. AISC and UBC load factors were used in this evaluation. The results show that the actual loads are under the allowable loads and all requirements are met

  14. Marketing research cluster analysis

    OpenAIRE

    Marić Nebojša

    2002-01-01

    One area of application of cluster analysis in marketing is the identification of groups of cities and towns with similar demographic profiles. This paper considers the main aspects of cluster analysis through an example of clustering 12 cities using the Minitab software.
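    The grouping of cities by demographic profile described above can be sketched with a simple k-means routine. The city labels, two-feature profiles, and starting centers below are invented for illustration; the paper itself performs the clustering in Minitab:

```python
import math

# Hypothetical demographic profiles: (median age, % of adults with a degree)
cities = {
    "A": (28.0, 35.0), "B": (30.0, 33.0), "C": (29.0, 36.0),
    "D": (45.0, 12.0), "E": (47.0, 10.0), "F": (44.0, 11.0),
}

def kmeans(points, centers, iters=10):
    """Plain k-means: assign each point to its nearest center, then
    recompute each center as the mean of its assigned points."""
    clusters = {}
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for name, p in points.items():
            i = min(range(len(centers)),
                    key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(name)
        centers = [
            tuple(sum(points[n][d] for n in members) / len(members)
                  for d in range(2))
            for members in clusters.values() if members
        ]
    return clusters

# Seed the two centers at two dissimilar cities
groups = kmeans(cities, centers=[cities["A"], cities["D"]])
print(groups)  # {0: ['A', 'B', 'C'], 1: ['D', 'E', 'F']}
```

    With these toy profiles the two demographic groups separate after a single iteration; real demographic data would typically be standardized first so no single variable dominates the distance.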

  15. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent component analysis is relevant for representing...
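    The independent component analysis the abstract builds on can be illustrated in two dimensions: after whitening, unmixing reduces to finding a rotation, located here by scanning for the direction of extreme (sub-Gaussian) kurtosis. The sources and mixing matrix below are invented for illustration and this angle-scan is a didactic stand-in for the fixed-point algorithms used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent sub-Gaussian sources (uniform noise), mixed linearly
s = rng.uniform(-1.0, 1.0, size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
x = A @ s

# Whitening: zero-mean mixtures with identity covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / x.shape[1])
z = E @ np.diag(d ** -0.5) @ E.T @ x

def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

# After whitening, unmixing is a pure rotation: scan angles for the
# minimum excess kurtosis (uniform sources are maximally sub-Gaussian)
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
best = min(angles,
           key=lambda a: excess_kurtosis(np.cos(a) * z[0] + np.sin(a) * z[1]))
y = np.cos(best) * z[0] + np.sin(best) * z[1]

# The recovered component aligns (up to sign and scale) with one source
corr = max(abs(np.corrcoef(y, s[0])[0, 1]),
           abs(np.corrcoef(y, s[1])[0, 1]))
print(corr > 0.95)  # True
```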

  16. Qualitative Content Analysis

    OpenAIRE

    Satu Elo; Maria Kääriäinen; Outi Kanste; Tarja Pölkki; Kati Utriainen; Helvi Kyngäs

    2014-01-01

    Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented by using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies...

  17. Interaction Analysis and Supervision.

    Science.gov (United States)

    Amidon, Edmund

    This paper describes a model that uses interaction analysis as a tool to provide feedback to a teacher in a microteaching situation. The author explains how interaction analysis can be used for teacher improvement, describes the category system used in the model, the data collection methods used, and the feedback techniques found in the model. (JF)

  18. Activation analysis. Chapter 4

    International Nuclear Information System (INIS)

    1976-01-01

    The principle, sample and calibration standard preparation, activation by neutrons, charged particles and gamma radiation, sample transport after activation, activity measurement, and chemical sample processing are described for activation analysis. Possible applications are shown of nondestructive activation analysis. (J.P.)

  19. Proximate Analysis of Coal

    Science.gov (United States)

    Donahue, Craig J.; Rais, Elizabeth A.

    2009-01-01

    This lab experiment illustrates the use of thermogravimetric analysis (TGA) to perform proximate analysis on a series of coal samples of different rank. Peat and coke are also examined. A total of four exercises are described. These are dry exercises as students interpret previously recorded scans. The weight percent moisture, volatile matter,…
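    Proximate analysis reads moisture, volatile matter, fixed carbon, and ash directly off the mass-loss stages of a TGA scan: drying in nitrogen, devolatilization at high temperature, then combustion of the residue after switching to air. A sketch of that arithmetic with invented mass readings, not values from the article:

```python
# Proximate analysis from hypothetical TGA mass readings (mg) at the
# standard stages: initial sample, after drying (~110 C, N2), after
# devolatilization (~950 C, N2), and the final ash residue (after air switch).
def proximate(m_initial, m_dry, m_devol, m_ash):
    moisture = (m_initial - m_dry) / m_initial * 100
    volatile_matter = (m_dry - m_devol) / m_initial * 100
    fixed_carbon = (m_devol - m_ash) / m_initial * 100
    ash = m_ash / m_initial * 100
    return moisture, volatile_matter, fixed_carbon, ash

# Illustrative readings only; the four fractions sum to 100% by construction
result = proximate(m_initial=20.0, m_dry=19.0, m_devol=12.0, m_ash=2.0)
print(tuple(round(v, 2) for v in result))  # (5.0, 35.0, 50.0, 10.0)
```

    Higher-rank coals show up in such a scan as lower moisture and volatile matter and higher fixed carbon, which is what lets students assign rank from previously recorded scans.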

  20. NOTATIONAL ANALYSIS OF SPORT

    OpenAIRE

    Ian M. Franks; Mike Hughes

    2004-01-01

    This book addresses and clearly explains the notational analysis of technique, tactics, individual athlete/team exercise and work-rate in sport. The book offers guidance on developing a system, analysis of data, effective coaching using notational performance analysis, and modeling sport behaviors. It updates and improves on the 1997 edition.